FDA says flexibility and special mechanisms for large language models are needed when regulating AI for health care and biomedicine
AI holds great potential for transforming aspects of health care, but its regulation should be coordinated across all other regulated industries and with the rest of the government, the U.S. Food and Drug Administration has said.
The approach should also be compatible with that of international organizations, because the FDA regulates industries whose products and services are distributed globally, it said.
The agency, which regulates food, medical devices, drugs and cosmetics, among other things, released a special communication outlining its perspective on how AI in health care and biomedicine regulation should be approached.
Like other regulatory organizations, the FDA is grappling with how to regulate AI, particularly given the technology's fast-paced and opaque nature. So far, the only comprehensive regulatory framework for AI across different industries is the EU AI Act, passed earlier this year.
In the document, the FDA outlined some approaches it believes are needed to regulate AI's growing proliferation. These include flexible mechanisms to keep up with the pace of change in AI across biomedicine and health care, as well as a life cycle management approach incorporating regular, local post-market performance monitoring for health-related AI.
"Many proposed applications in health care will require FDA oversight given their intended use for diagnosis, treatment or prevention of diseases or conditions," the agency wrote.
In particular, the agency believes special mechanisms are needed to evaluate large language models and their uses, which present a "unique challenge" because of their potential for unforeseen, emergent consequences.
The agency has already authorized almost 1,000 AI-enabled medical devices and has received hundreds of regulatory submissions for drugs that used AI in their discovery and development, but it has not yet authorized a large language model.
The agency also cautioned that the continuous evaluations such monitoring requires may surpass the limits of current regulatory systems.
"Regulated industries, academia and the FDA will need to develop and optimize the tools needed to assess the ongoing safety and effectiveness of AI in healthcare and biomedicine," it said.
It added that any approach taken should balance the needs of the entire spectrum of health ecosystem interests, from large firms to start-ups.
The FDA said it will continue to play a central role "with a focus on health outcomes" but all involved industries will need to attend to AI with "the care and rigor this potentially transformative technology merits."