Flawed AI, Fast Approvals: Is the FDA Endangering Global Drug Safety?

- Verified expert testimonies raise serious concerns over AI-generated errors in drug approval decisions.
- Global pharmaceutical brands must re-evaluate regulatory strategies as confidence in US systems falters.
The integration of artificial intelligence into drug evaluation by the United States Food and Drug Administration (FDA) has attracted widespread attention in recent months, especially among global pharmaceutical brands and regulators. What was intended to streamline drug approval processes is now facing mounting criticism as verified accounts emerge detailing serious flaws in the system. The use of AI in critical regulatory functions is no longer a theoretical risk—it has become a tangible concern with direct consequences for the healthcare sector worldwide.
A report published by CNN on July 24, 2025, based on interviews with six current and former FDA staff, has revealed credible claims of recurring errors stemming from the agency's internal AI tool, Elsa. According to these insiders, the model occasionally produces incorrect interpretations of submitted data, misrepresents trial outcomes, and even invents non-existent studies. These are not speculative claims. They are grounded in direct experience and raise urgent questions for pharmaceutical companies operating both inside and outside the United States.
What the AI at the FDA Does
Based on verified accounts from multiple industry and regulatory sources, the AI system used by the FDA is responsible for parsing large volumes of scientific documentation submitted during drug trials. It generates analytical summaries, flags patterns, and provides decision support for human reviewers. While the agency has not released a full technical breakdown of the system, its primary objective is well documented: shorten review timelines in a drug development process that can stretch to a decade and cost companies billions in R&D and regulatory compliance.
In principle, using AI to analyse terabytes of clinical data sounds practical. It could help overwhelmed reviewers identify key findings more quickly and avoid missing important details buried in technical reports. The FDA has reportedly framed this system as a "supplement" to human decision-making. Yet multiple internal sources describe situations where the AI's output significantly shaped final review outcomes. In one case cited in the CNN report, a reviewer noted that Elsa referenced a nonexistent study; the fabricated citation made it into internal summary documents and was only caught manually later in the process.
Verified Risks: AI Hallucinations in a Regulatory Context
A key issue here is the phenomenon of AI hallucination: generative models producing inaccurate or entirely fabricated content that nonetheless appears plausible. In the context of drug approval, these hallucinations can translate into major real-world errors. One example cited in the report involves the AI referencing a study that did not exist in any known medical database. These are not hypothetical risks; they are specific, observed failures confirmed by those involved in the process.
Even if a human reviewer ultimately catches the mistake, delayed detection can still skew the trajectory of a review. Consider the impact on smaller companies that may not have the legal or regulatory firepower to contest such outcomes. For global pharmaceutical firms, the AI's unpredictability introduces a new layer of compliance uncertainty.
Implications for Global Pharmaceutical Brands
The implications extend beyond the US. Many global regulators, such as the UK's Medicines and Healthcare products Regulatory Agency (MHRA) and the European Medicines Agency (EMA), often take their lead from FDA approvals. If confidence in the FDA's decision-making processes begins to erode, global trust in American pharmaceutical brands and drug safety assurance may weaken.
Companies must now consider whether approvals secured in the US will face additional scrutiny in Europe, Asia or Latin America. Will the UK's MHRA require a second look at US-approved treatments? Will European bodies begin to distance their evaluation frameworks from FDA precedent? These are not distant possibilities; European lawmakers have already raised questions in parliament regarding AI governance and the extent to which regulatory automation should be allowed to influence medical decisions.
The Pressure Behind the Decision
It is important to acknowledge why the FDA adopted this AI system in the first place. The traditional drug review process is long, expensive and under constant pressure to speed up. The average cost to bring a new drug to market now exceeds $2.6 billion, according to data from the Tufts Center for the Study of Drug Development. In the race to bring drugs to market faster, regulatory bodies have turned to automation and AI to compress timelines and lower costs.
But these benefits come with trade-offs. Speed and accuracy are not always compatible. Pharmaceutical brands that invest in robust testing, documentation and regulatory compliance may find themselves penalised by flawed AI readings that misrepresent the strength of their data.
Reactions from the Industry
Many pharmaceutical firms have already begun to adapt. Some are hiring internal data scientists to pre-validate how their submissions might be interpreted by language models. Others are preparing dual submissions: one formatted for traditional review and another optimised for machine parsing.
A biotech firm based in Cambridge, UK, told us that they now perform “AI audits” before submitting any documents to the FDA. This includes testing how different large language models interpret the key clinical trial outputs. “We can’t afford to leave anything to chance,” their Chief Regulatory Officer said. “Even if the model gets 90% of it right, it’s the 10% that can kill a deal or delay approval by a year.”
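The firm did not share its tooling, but the broad shape of such an audit is easy to sketch. The Python below is a minimal, hypothetical version: it asks a general-purpose language model to extract a handful of headline figures from a submission document, then diffs the model's reading against the sponsor's own records. The openai client usage is real, but the model name, field names and ground-truth values are illustrative assumptions, not a description of any company's actual process.

```python
# Hypothetical pre-submission "AI audit": ask an LLM to extract key trial
# facts from a document, then diff them against the sponsor's own records.
import json
from openai import OpenAI  # official openai-python client (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ground truth maintained by the regulatory team (illustrative values only).
GROUND_TRUTH = {
    "primary_endpoint_p_value": 0.003,
    "n_participants": 1842,
    "cited_studies": ["NCT01234567", "NCT07654321"],
}

PROMPT = (
    "From the clinical submission text below, return JSON with keys "
    "primary_endpoint_p_value (number), n_participants (integer) and "
    "cited_studies (list of trial registry IDs).\n\n{doc}"
)

def audit(document_text: str, model: str = "gpt-4o") -> list[str]:
    """Return discrepancies between the model's reading and our records."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(doc=document_text)}],
        response_format={"type": "json_object"},  # constrain output to JSON
    )
    extracted = json.loads(response.choices[0].message.content)

    issues = []
    for key, expected in GROUND_TRUTH.items():
        if extracted.get(key) != expected:
            issues.append(f"{key}: model read {extracted.get(key)!r}, "
                          f"records say {expected!r}")
    # Any study ID the model "finds" outside our records is a hallucination flag.
    for study in extracted.get("cited_studies", []):
        if study not in GROUND_TRUTH["cited_studies"]:
            issues.append(f"possible hallucinated citation: {study}")
    return issues
```

Running the same audit across several models, as the firm describes, is then simply a loop over the `model` parameter; any non-empty discrepancy list flags a passage worth rewriting before submission.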
The shift is also altering legal strategies. Law firms advising pharmaceutical clients are being asked to prepare appeals that specifically cite AI misinterpretation. These are complex cases to mount: there is currently no official channel for contesting a regulatory decision on the grounds of algorithmic error.
Reputation Risk is Brand Risk
Trust is central to any pharmaceutical brand. If a drug is fast-tracked by AI and later pulled from the market due to side effects that were overlooked or misrepresented during the approval phase, the fallout could be catastrophic, not just in human terms but also in commercial and reputational terms.
Consider the diabetes drug Mounjaro, developed by Eli Lilly: following its 2022 US approval and endorsement by several national health systems, it saw massive global uptake. If such a high-stakes approval were driven by flawed AI interpretation today, a post-approval safety issue would carry implications not only for the brand but for the regulators who endorsed it.
Transparency is key here. Yet as of July 2025, the FDA has not disclosed detailed information on the AI’s architecture, data training protocols, or risk mitigation frameworks. This opacity makes it difficult for companies to prepare or push back against flawed interpretations.
Actionable Measures for Global Regulatory Teams
Given the unpredictable influence of AI on drug approvals, pharmaceutical brands must take practical steps to safeguard their interests and maintain trust in their product pipelines:
- Internal AI Readiness Checks: Before submitting any drug documentation to the FDA, teams should conduct internal tests using NLP tools to simulate how the content could be interpreted by AI.
- Data Structuring: Ensure all trial results, summaries and side-effect profiles are structured and labelled to reduce the likelihood of misinterpretation (a minimal sketch follows this list).
- AI-Aware Writing Practices: Use simplified, unambiguous language where possible to avoid confusion in automated parsing.
- Multi-region Submission Planning: Prepare alternate versions of submissions tailored to specific regulatory frameworks in the UK, EU, and Asia.
- Transparent Communication: If discrepancies emerge between what your team submitted and what the FDA returns, escalate those issues early and clearly. Document every exchange.
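What "structured and labelled" means in practice will vary by team, but the idea behind the second item can be illustrated with a short, self-contained sketch in standard-library Python. It writes a machine-readable companion file for key trial figures and validates them before anything leaves the building; the schema, field names and values are hypothetical, not a format any regulator has mandated.

```python
# Illustrative "data structuring" step: emit an explicit, machine-readable
# companion file alongside the narrative submission, so an automated parser
# has less room to misread headline figures. Schema is hypothetical.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class TrialResult:
    trial_id: str                  # registry identifier, e.g. an NCT number
    n_participants: int
    primary_endpoint: str          # plain-language endpoint description
    effect_size: float
    p_value: float
    adverse_events: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # Fail loudly before submission rather than letting a parser guess.
        if not 0.0 <= self.p_value <= 1.0:
            raise ValueError(f"{self.trial_id}: p-value {self.p_value} out of range")
        if self.n_participants <= 0:
            raise ValueError(f"{self.trial_id}: participant count must be positive")

# Illustrative record; real pipelines would load these from trial databases.
results = [
    TrialResult("NCT01234567", 1842, "HbA1c reduction at 52 weeks",
                -1.2, 0.003, ["nausea", "injection-site reaction"]),
]

for r in results:
    r.validate()

with open("structured_results.json", "w") as fh:
    json.dump([asdict(r) for r in results], fh, indent=2)
```

A file like this does not replace the narrative dossier; it simply gives any automated reader, in-house or regulatory, an unambiguous source for the numbers the prose describes.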
Global Regulatory Ramifications
If trust in the FDA’s AI system continues to decline, the ripple effects will be felt across international agencies. Global harmonisation efforts—already complex—could become even more fragmented. The WHO’s efforts to establish a global regulatory framework for digital health and AI may face renewed urgency.
For UK-based global drug brands, this situation presents both a challenge and an opportunity. By demonstrating robust human review processes and submitting to more transparent national regulators, brands can differentiate themselves in a market where trust is paramount.
It’s a call for brands to reframe their regulatory strategy—not just around compliance but around confidence, clarity and control.