Algorithmic diagnostic assessments for diabetes-induced blindness show promise for clinical settings

AI algorithms are inconsistent in detecting diabetic eye diseases



Diabetes-induced blindness is the subject of a new study by Aaron Lee, Assistant Professor at the UW School of Medicine. With rising care needs and a limited number of ophthalmologists and optometrists able to identify the condition, scientists are looking for ways to use artificial intelligence to automate screening and maximize coverage.

According to Lee, one of the factors that makes diabetic retinopathy such an important problem is that it is difficult to detect and may therefore be identified late, resulting in irreversible damage.

In an ideal world, everyone would have routine checkups and retinopathy wouldn’t be a problem. But, Lee explained, it’s not that easy.

“The screening rate of people who need an eye exam is very low in the US,” Lee said. “And most people don’t know they need one, and those who do often forget.”

For those who do receive an exam, the time between the exam and a decision on whether further evaluation and treatment by a specialist is needed can be subject to lengthy delays, patient deprioritization, or referrals that patients simply ignore.

The chain of events between annual eye exams, coupled with the low availability of qualified professionals, has increasingly led researchers to look at machine learning approaches to screening for diabetic retinopathy, which would increase the number of patients who are treated in a timely manner.

In the study, Lee and his co-authors examined seven different image classification algorithms on 311,604 images from 23,724 different patients with diabetes.
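
The study itself lays out the methodology in full; as a rough sketch of what a head-to-head comparison of screening algorithms involves, the code below assumes each algorithm can be called on a retinal image and returns a refer/no-refer decision, which is then tallied against a reference grade. The function and variable names here are illustrative, not taken from the study.

    # A minimal sketch of a head-to-head screening comparison, assuming each
    # algorithm is a callable that takes an image and returns True ("refer")
    # or False ("no referable disease"). Names are hypothetical.
    def evaluate(algorithms, images, reference_grades):
        """Tally confusion-matrix counts for each algorithm against reference grades."""
        counts = {}
        for name, algorithm in algorithms.items():
            tp = fp = tn = fn = 0
            for image, has_disease in zip(images, reference_grades):
                referred = algorithm(image)
                if referred and has_disease:
                    tp += 1      # correctly flagged for follow-up
                elif referred and not has_disease:
                    fp += 1      # unnecessary referral
                elif not referred and has_disease:
                    fn += 1      # missed disease -- the costly error in screening
                else:
                    tn += 1      # correctly cleared
            counts[name] = {"tp": tp, "fp": fp, "tn": tn, "fn": fn}
        return counts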

One of the challenges of finding an algorithm that is appropriate for the context is that researchers need to balance an algorithm’s negative predictive value against its sensitivity, Lee explained.

“We chose a negative predictive value for this study because it was a screening problem,” said Lee. “We wanted to be sure that there really is no disease when the algorithm says there is no disease.”
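
In confusion-matrix terms, that is exactly what negative predictive value measures: of all the patients an algorithm clears, the fraction who truly have no disease. Sensitivity, by contrast, is the fraction of diseased patients the algorithm actually catches. A small worked example in Python, with counts invented purely for illustration rather than taken from the study:

    # Invented counts for illustration only -- not figures from the study.
    tp, fp, tn, fn = 90, 200, 700, 10

    sensitivity = tp / (tp + fn)   # 90 / 100 = 0.90 -> catches 90% of true disease
    npv = tn / (tn + fn)           # 700 / 710 ~ 0.986 -> a "no disease" call is right ~98.6% of the time

    print(f"sensitivity = {sensitivity:.3f}, NPV = {npv:.3f}")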

However, one of the challenges for researchers is setting the right negative predictive value and sensitivity for an algorithm without rendering it unusable. If both targets are set too high, the algorithm will simply refer everyone it screens for further testing.
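
To see why, imagine an algorithm that outputs a disease score and refers any patient whose score crosses a cut-off. Pushing the cut-off low enough to guarantee near-perfect sensitivity and negative predictive value also pushes the referral rate toward 100%, at which point the screening step no longer spares anyone an exam. The simulation below is purely illustrative and does not model any algorithm from the study.

    import random

    random.seed(0)

    # Simulated screening population: 10% prevalence, noisy disease scores in [0, 1].
    # Purely illustrative -- not data or models from the study.
    has_disease = [random.random() < 0.10 for _ in range(10_000)]
    scores = [min(1.0, max(0.0, random.gauss(0.7 if sick else 0.3, 0.2)))
              for sick in has_disease]

    for cutoff in (0.5, 0.3, 0.1, 0.01):
        referred = [s >= cutoff for s in scores]
        tp = sum(r and d for r, d in zip(referred, has_disease))
        fn = sum((not r) and d for r, d in zip(referred, has_disease))
        tn = sum((not r) and (not d) for r, d in zip(referred, has_disease))
        sensitivity = tp / (tp + fn)
        npv = tn / (tn + fn) if (tn + fn) else 1.0
        referral_rate = sum(referred) / len(referred)
        print(f"cutoff={cutoff:4.2f}  sensitivity={sensitivity:.3f}  "
              f"NPV={npv:.3f}  referral rate={referral_rate:.1%}")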

For Lee and his team, the results of the study showed sensitivity rates ranging from 50.98% to 85.90%. Despite the varying performance of the algorithms, Lee emphasized that they all perform better than a human and highlighted the importance and potential effectiveness of these devices for practical use.

For the algorithms that performed poorly, Lee and other researchers are investigating what could be improved to make them suitable for clinical use. Lee explained that researchers are looking for algorithms that offer appropriate predictive performance while being durable and scalable for real-world environments.

There are currently two FDA-approved algorithms that can be used in clinical settings. According to Lee, the process for the FDA to approve an algorithm is lengthy and rigorous, but there is hope that more algorithms like these will be operational in the future.

Reach reporter Thelonious Goerz at news@dailyuw.com. Twitter: @TheloniousGoerz
