In a recent Children's Hospital Association webinar, experts from digital health care leader AVIA Health gave an overview of the regulatory, ethical, and legal risks of generative artificial intelligence in health care.
Here are three key takeaways.
1. The U.S. regulatory environment is still maturing
Historically, it has taken decades for Congress to pass major laws to regulate new technologies. For AI, regulations will have to consider patient data privacy, intellectual property, medical malpractice liability, and quality control and standardization.
Here’s the outlook so far:
No major law. The U.S. has not passed substantive AI legislation, though the Biden administration has released a blueprint to serve as a regulatory guide. In the absence of a major law, federal agencies and private groups are creating frameworks for ethical use.
Executive order. The White House issued an executive order establishing the Department of Health and Human Services (HHS) as the central authority on artificial intelligence and the coordinator across federal agencies.
New FDA model. Large language models differ from the software-as-a-medical-device (SaMD) products the FDA already regulates, introducing new challenges for regulatory bodies. The FDA has proposed a total product life cycle approach that emphasizes transparency and real-world performance monitoring.
Rest of the world. The European Union has developed the world’s first comprehensive AI regulation. It affects some of the largest technology companies, including IBM and Microsoft, and it could influence U.S. law. No global regulation exists, but most countries have developed rules of their own.
2. Ethical and legal questions exist across multiple levels
AI has the potential both to help providers embody the Hippocratic oath and to violate it.
Three major concerns include:
How to prevent bias. All AI has bias. Because bias exists in the world, it finds its way into AI through data, modeling, and human intervention. Children’s hospitals will need an internal evaluation process to measure bias within their AI models.
How to protect patient data. With AI, data sharing will increase, and collected data will become more vulnerable. AI has proven it can re-identify de-identified data and make it easier for cybercriminals to infiltrate systems.
How to reduce legal liabilities. AI brings increased liability, though to what extent is unknown; most applicable health care laws and regulations predate the development of AI.
3. Governance is key
Effectively addressing ethical and legal considerations requires an appropriate governance structure. In the near term, a centralized structure is most effective because it drives strategic alignment, optimizes internal resources, and grants clear authority to set AI policies and processes. Each organization's structure will vary with its needs but should include strategic, tactical, and operational levels.