As artificial intelligence takes on new patient-facing roles that carry privacy and safety risks, the laws governing its use have not kept pace. Comprehensive federal rules are largely absent, so individual states have stepped in to fill the void.
The lack of legal clarity creates challenges and confusion across healthcare, including pharma, where a growing number of companies are launching platforms that interact directly with patients.
“For companies looking to think through a homogenous AI workflow or business plan, it's very difficult, because you're looking at each of the states that have implemented AI legislation, and trying to figure out a single, cohesive path,” said Aaron Maguregui, a healthcare AI attorney at Foley & Lardner.
The Trump administration has sought to accelerate the technology through initiatives like the Stargate Program, aimed at building AI infrastructure, while earning early criticism for its deregulatory bent and for not developing national standards that would give the industry guardrails as it grows.
But in December, the administration moved in that direction when the Department of Health and Human Services issued a Request for Information (RFI) aimed at learning how to encourage AI use in clinical care while avoiding patient harm and privacy risks.
“Data ownership in the healthcare space is heavily regulated, and so understanding where HIPAA and AI mix is part of this RFI process,” Maguregui said.
The goal is to ease the path to innovation while allowing companies to maintain their intellectual property rights and products. In the meantime, pharma companies using AI in consumer-facing ways are being left in limbo.
“I hope that a federal framework comes fast, but I don't see it happening. And even if it does come, it's going to take time to mature,” he said.
The DTC factor
Inconsistent regulation is becoming increasingly problematic in the rapidly expanding AI field. Major pharma companies, for example, have been rolling out direct-to-consumer platforms featuring AI-led patient education. Some states have passed laws regulating aspects of these direct interactions, focusing initially on the most pressing threats. Other states still rely on existing, non-specific privacy and consumer protection rules.
“The first one that I think is fairly common and obvious is just disclosure laws,” Maguregui said. “Are you using AI? Is AI being utilized during this consumer interaction?”
Some state provisions require a human to remain in the AI loop, ensuring that a person, not a machine, is the ultimate decision maker. Others are trying to make it clear to patients that AI is not a medical professional by barring chatbots from using medical titles, such as doctor or nurse, Maguregui said.
California has been at the forefront of AI regulation, taking steps to ensure AI developers are transparent about their data sources while requiring them to ensure their products operate in an ethical, unbiased way, he said. Colorado, another leader in the space, introduced the concept of high-risk AI to operationalize governance around algorithmic discrimination and documentation, Maguregui said.
Innovation without harm
The hope is ultimately to strike a balance between innovation and regulation, with unifying guidance at the federal level.
“On one hand, we want to emphasize innovation and pushing the needle in terms of technology and how helpful AI could be,” Maguregui said. But it’s critical to also understand how AI is built and deployed to ensure safety and quality, he added.
Keeping pace with the technology will be a major challenge as new iterations are released. While federal rules are hashed out, compliance remains a piecemeal proposition.
“It's really understanding that as of right now and probably for the foreseeable future, understanding where the state law trends and regulation are going are paramount,” Maguregui said.
Companies should also be aware that it's not only their internal processes that need to comply with these state regulations. If they rely on a substantial supply chain of vendors that also use AI, vendor diligence is crucial, Maguregui said, noting that it's important to ask hard questions about how those partners develop and incorporate the technology. Organizations should also keep a close eye on the shifting landscape at the state and federal levels to ensure they remain in compliance as rules evolve, so that AI tools can deliver their promised value while limiting harm.