Artificial intelligence holds enormous potential for innovation and medical progress.
At the same time, experts warn that it is not magic. On the contrary, if handled poorly, it could dangerously exacerbate existing disparities in care.
In a digital session of the HIMSS21 global conference on Monday, Dr. John Halamka, President of the Mayo Clinic Platform, proposed a solution: transparency about the development and suitability of an algorithm for a particular purpose.
Halamka spoke with HIMSS Executive Vice President of Media Georgia Galanoudis during the afternoon session, “The Year That Shook the World”. They discussed how AI and machine learning are driving progress in many sectors, and whether it is possible to safeguard the role of AI in the patient’s medical journey while eliminating bias.
“Your optimism about AI is legitimate, but there are caveats,” Halamka said. “As a society we have to define transparency of communication: to define how we evaluate the usefulness of an algorithm.”
Halamka compared algorithmic transparency to the nutrition information readily available on food packaging. “Shouldn’t we as a society require nutritional labeling for our algorithms?” he said.
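Halamka did not spell out what such a label would contain. As a purely illustrative sketch, assuming fields such as intended use, training-data provenance, and validated populations (none of which come from Mayo or HIMSS), an algorithm “nutrition label” could be represented as simply as this:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmLabel:
    """Illustrative 'nutrition label' for a clinical algorithm.

    The fields are assumptions, not a published standard: they sketch the
    kind of provenance and fitness-for-purpose information Halamka argues
    should be made transparent.
    """
    name: str
    intended_use: str              # the clinical question the model answers
    training_data_source: str      # where the training data came from
    training_population: dict      # summary demographics of the training set
    validated_populations: list    # subgroups with published validation results
    known_limitations: list = field(default_factory=list)

# Hypothetical example entries, invented for illustration only
label = AlgorithmLabel(
    name="low-ejection-fraction screen",
    intended_use="flag patients for echocardiographic follow-up",
    training_data_source="single health system EHR, 2010-2019",
    training_population={"age_median": 63, "female_pct": 48},
    validated_populations=["stratified by race, ethnicity, age and sex"],
    known_limitations=["not evaluated on pediatric patients"],
)
```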
So who should be responsible for such a label, and for assessing an algorithm’s bias or effectiveness? Halamka proposed a public-private collaboration among government, academia, and industry.
“I think it will happen very soon,” he predicted.
Halamka said we are in a “perfect storm” for innovation when it comes to bias and fairness in AI, and that a consortium would ideally be tasked with developing the technology that would make that type of transparency possible.
Transparency will also be crucial, Halamka said, to maintaining AI’s momentum as the push for algorithmic equity moves forward. He gave the example of a Mayo Clinic algorithm designed to identify a low ejection fraction.
“We then ran a prospective, randomized, controlled study … and stratified it by race, ethnicity, age, and gender to see how this algorithm actually works in the real world,” he explained. They then published the results.
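The article does not describe how that stratified analysis was computed. A minimal sketch of the general technique, assuming a pandas DataFrame of predictions with hypothetical column names (y_true, y_score) and using scikit-learn’s AUROC metric rather than anything Mayo has published, might look like this:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def stratified_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute AUROC separately for each demographic subgroup.

    Assumes df has columns 'y_true' (ground-truth label) and 'y_score'
    (model output); the column names are illustrative assumptions.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        if sub["y_true"].nunique() < 2:
            continue  # AUROC is undefined when only one class is present
        rows.append({
            group_col: group,
            "n": len(sub),
            "auroc": roc_auc_score(sub["y_true"], sub["y_score"]),
        })
    return pd.DataFrame(rows)

# Example: report performance by race, ethnicity, age band and sex
# for col in ["race", "ethnicity", "age_band", "sex"]:
#     print(stratified_performance(predictions, col))
```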
Looking to the future, Halamka predicted that clinicians will be able to leverage knowledge from broad populations of past patients “to take care of the patients of the future.”
AI enhancement of human decision making can help clinicians overcome the biases shaped by their own individual experiences, he said.
He outlined what Mayo calls the “four big challenges”: collecting new data (and trying to standardize that data), making discoveries, validating algorithms, and translating the end result into the workflow.
“Let’s hope that government, academia and industry work on these four challenges and that we will all be in a better place,” he said.
Kat Jercich is the Editor-in-Chief of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.