Interpretable model almost doubles medical professionals' accuracy at reading confusing EEGs while showing its work, assisting their decision-making rather than telling them what to do

Researchers at Duke University have developed an assistive machine learning model that greatly improves medical professionals' ability to read the electroencephalography (EEG) charts of intensive care patients. Because EEG readings are the only way to know when unconscious patients are in danger of suffering a seizure or are having seizure-like events, the computational tool could help save thousands of lives each year.

Image: a starfish-like graph visualizing how the new AI algorithm helps medical professionals read the EEG patterns of patients in danger of suffering brain damage from seizures or seizure-like events.
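The article does not describe the model's internals, but the idea of an assistive classifier that "shows its work" can be sketched in a few lines. The Python snippet below is a hypothetical illustration only, not the Duke team's method: a simple linear classifier over standard EEG band-power features that returns a probability for a segment together with the per-band evidence behind that score. The sampling rate, band definitions, labels, and synthetic data are all assumptions made for the demo.

# Illustrative sketch only (not the published tool): an interpretable
# classifier over EEG band-power features that reports a probability
# and the per-feature evidence behind it, rather than a bare verdict.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 200  # assumed sampling rate in Hz (illustrative)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment):
    # Average spectral power in each standard EEG band for a 1-D segment.
    freqs, psd = welch(segment, fs=FS, nperseg=FS * 2)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()])

# Synthetic stand-in data: 200 ten-second segments with made-up labels
# (0 = no epileptiform activity, 1 = seizure-like), purely for the demo.
rng = np.random.default_rng(0)
X = np.vstack([band_powers(rng.standard_normal(FS * 10)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Showing its work": report a probability plus the per-band evidence
# (coefficient x feature value) that produced it.
segment = rng.standard_normal(FS * 10)
features = band_powers(segment)
prob = model.predict_proba(features.reshape(1, -1))[0, 1]
evidence = {band: round(c * f, 3) for band, (c, f) in zip(BANDS, zip(model.coef_[0], features))}
print(f"P(seizure-like) = {prob:.2f}")
print("per-band evidence:", evidence)

A linear model is used here because its per-feature contributions are directly readable, which mirrors the article's emphasis on assisting clinicians' decision-making rather than handing down an opaque verdict.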

Related news:

New exponent functions that make SiLU and SoftMax 2x faster, at full accuracy

Sony reverses unpopular Helldivers 2 decision after blistering player reaction

Exclusive: Alembic debuts hallucination-free AI for enterprise data analysis and decision support