VUMC and Duke awarded $1.25M grant to develop AI oversight framework for health systems
The Gordon and Betty Moore Foundation has awarded $1.25 million to a joint project of Vanderbilt University Medical Center and Duke University School of Medicine, which will work with the Coalition for Health AI and the University of Iowa to develop a maturity model framework and improve health systems' oversight of AI technology.
WHY IT MATTERS
While health systems are actively developing, deploying and using algorithms, the VUMC and Duke investigators say gaps in oversight, resources and organizational infrastructure at healthcare institutions compromise the safety, fairness and quality of AI use in clinical decisions.
They aim to identify the essential capabilities health systems must establish to be well prepared for the trustworthy use of AI models.
“This work will produce new tools and capabilities that our health system needs to ensure that we select, deploy and monitor health AI to make healthcare more safe, effective, ethical and equitable for all,” said Dr. Peter Embí, one of VUMC’s project leads, in the grant announcement Thursday.
The project’s goal of building an empirically supported maturity model for healthcare AI could ultimately help health systems comprehensively document which algorithms are deployed, what value they provide, who is monitoring them and who is accountable for their use.
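For illustration only, here is a minimal sketch in Python of what one entry in such an algorithm inventory might capture. The record type, field names and example values are hypothetical assumptions for this article, not part of the VUMC-Duke project or the CHAI blueprint.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class DeployedAlgorithmRecord:
        """One entry in a hypothetical health-system AI inventory."""
        name: str                # which algorithm is deployed
        intended_use: str        # the clinical value it is meant to provide
        deployed_on: date        # when it entered clinical use
        monitored_by: str        # who reviews its ongoing performance
        accountable_owner: str   # who answers for its use in care decisions
        known_limitations: list[str] = field(default_factory=list)

    # Hypothetical example entry
    record = DeployedAlgorithmRecord(
        name="sepsis-risk-score-v2",
        intended_use="Flag inpatients at elevated sepsis risk for early review",
        deployed_on=date(2023, 6, 1),
        monitored_by="Clinical AI Oversight Committee",
        accountable_owner="Chief Medical Information Officer",
        known_limitations=["Not validated for pediatric patients"],
    )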
Over the next year, the VUMC and Duke teams will engage stakeholders from CHAI and various health systems to outline the key components health systems should have in place for the trustworthy implementation of AI.
“Creating a maturity model framework for health AI will enable health systems to identify their strengths and weaknesses when procuring and deploying AI solutions, ultimately driving the transformation of healthcare for the better,” said Nicoleta Economou, director of algorithm-based clinical decision support oversight at Duke AI Health.
THE LARGER TREND
CHAI, which published its blueprint for AI in healthcare earlier this year, takes a patient-centric approach to addressing barriers to trust in AI and machine learning. It says it seeks to align health AI standards with reporting to help patients and their clinicians better evaluate the algorithms that contribute to their care.
“Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians,” said Dr. Brian Anderson, chief digital health physician at MITRE and a CHAI cofounder, upon the blueprint’s publication.
In August, Anderson discussed the importance of the testability, transparency and usability of AI, as detailed in CHAI's guide for safe and reliable AI deployment.
He told Healthcare IT News that the White House, government agencies and other CHAI partners bring the private and public sectors, communities and patients together to work on the metrics, measurements and tools to manage the product lifecycles of AI models over time.
“There are a lot of unanswered questions,” he said of the “breathtaking” speed of AI innovation.
“As an example, in the regulatory space, I’m worried that collectively, not just our government, but society in general, we just haven’t had the chance to understand what are an agreed upon set of guardrails that we need to put in place around some of these models because health is such a highly consequential space.
“We are all patients,” he said.
ON THE RECORD
“If we are to realize the full potential of AI technologies, health systems must develop a more mature process for implementing these tools,” Michael Pencina, chief data scientist at Duke Health and the Duke School of Medicine’s vice dean for data science, said in the statement.
“Improving oversight of AI technology in healthcare systems is crucial for ensuring the safety and efficacy of patient care.”
Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.