AI in Pharmacovigilance: Knowing When to Say No

Published on: 10/01/2025 | Author: Gaurav Goel

Avoiding the Pitfalls of Misapplied AI to Protect Patient Safety and Ensure Compliance

The hyperactivity around AI today feels like a modern gold rush, with every industry racing to stake its claim. Not a day passes without news of the impact, positive or negative, that AI will have on some aspect of our lives. Startups, tech giants, and even governments are in a frenzy, branding every innovation as “AI-powered” to catch the wave. While AI undoubtedly holds great promise for improving efficiency and accuracy, the rush to “AI-ify” every process is producing a surge of unrealistic expectations, questionable solutions, and even potential risks.

Pharmacovigilance (PV) must exercise caution with AI adoption due to the high stakes involved in patient safety, regulatory compliance, and ethical considerations. There has been a lot of talk, and no shortage of solutions, about how AI will improve the efficiency of PV processes. However, it is crucial to cut through this noise and discern where AI can truly add value versus where it may introduce unnecessary complexity or even compromise patient safety. In this article, I will explore PV use cases that should not be solved using AI, highlighting the dangers of applying AI inappropriately.

The Hype vs. Reality: AI in Pharmacovigilance

AI has certainly made its mark in PV by enabling automation in case processing, predictive signal detection, and adverse event reporting. Its ability to process vast amounts of data quickly and uncover hidden patterns is actually transformative. No doubt about it. However, not all PV processes are suitable for AI-driven solutions. In some cases, the application of AI may be not only ineffective but also dangerous. The following sections outline scenarios where AI should not be the solution.

Complex Clinical Evaluations

AI should not replace human expertise in complex clinical evaluations that require nuanced medical judgment.

These situations often involve:

Causality Assessment: Causality assessment involves determining the likelihood that a drug caused a particular adverse event. This process is highly nuanced and context-dependent, requiring a deep understanding of clinical, pharmacological, and patient-specific factors. AI models may struggle with the subjective nature of these assessments, as they often rely on predefined algorithms that cannot fully capture the complexity of individual patient cases.

Benefit-Risk Analysis: Evaluating the overall benefit-risk profile of a medication involves complex decision-making that AI may not be equipped to handle comprehensively.

Potential Risks:

Over-simplification of Complex Cases: AI may reduce causality to binary outcomes, missing subtleties that could affect patient safety.

Lack of Accountability: Automated assessments may obscure the decision-making process, making it difficult to justify or audit conclusions during regulatory reviews.

Ethical Concerns: The “black-box” nature of many AI models can lead to ethical dilemmas if patients are harmed based on AI-driven decisions.

Causality assessments and benefit-risk analyses should be performed by experienced medical professionals who can apply clinical judgment and weigh case-specific details that go beyond what AI can evaluate.
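It is worth noting what a transparent alternative looks like. Classical, questionnaire-based tools such as the Naranjo algorithm score causality from explicit yes/no/unknown answers, so every point in the result can be traced to a documented answer. The sketch below is a minimal, illustrative Python rendering of that algorithm (question wording abbreviated); it is meant to show why rule-based scoring is auditable in a way a black-box model is not, not to replace the validated questionnaire or the clinician applying it.

```python
# Minimal, illustrative sketch of the Naranjo causality algorithm.
# Real assessments use the full validated questionnaire, applied and
# interpreted by a qualified clinician; this only shows why a rule-based
# score is transparent and auditable in a way a black-box model is not.

# (question, points if yes, points if no); 'unknown' answers score 0.
NARANJO_ITEMS = [
    ("Previous conclusive reports on this reaction?",           +1,  0),
    ("Did the event appear after the drug was given?",          +2, -1),
    ("Did the reaction improve on discontinuation/antagonist?", +1,  0),
    ("Did the reaction reappear on re-administration?",         +2, -1),
    ("Alternative causes that could explain the reaction?",     -1, +2),
    ("Did the reaction reappear with placebo?",                 -1, +1),
    ("Was the drug detected at toxic concentrations?",          +1,  0),
    ("Was the reaction dose-dependent?",                        +1,  0),
    ("Similar reaction to the same/similar drug previously?",   +1,  0),
    ("Was the event confirmed by objective evidence?",          +1,  0),
]

def naranjo_score(answers):
    """answers: list of 'yes' / 'no' / 'unknown', one per item."""
    score = 0
    for (question, if_yes, if_no), answer in zip(NARANJO_ITEMS, answers):
        if answer == "yes":
            score += if_yes
        elif answer == "no":
            score += if_no
        # 'unknown' contributes 0; every step is traceable for audit.
    return score

def naranjo_category(score):
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"

answers = ["yes", "yes", "yes", "unknown", "no",
           "unknown", "unknown", "yes", "no", "yes"]
s = naranjo_score(answers)
print(s, naranjo_category(s))  # every point is attributable to one answer
```

The contrast with a deep-learning classifier is the point: here, a regulator can ask why a case scored “probable” and get a line-by-line answer.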

Regulatory Reporting and Compliance

Regulatory reporting requires strict adherence to guidelines that can vary significantly across different regions and change over time. AI systems can struggle to keep up with these changes, especially if they are not frequently updated. Additionally, regulators expect transparency and traceability in decision-making, which many AI models, particularly deep learning ones, cannot provide. AI should not be the sole arbiter in matters of ethical consideration and regulatory compliance:

Ethics Committee Decisions: AI lacks the moral reasoning capabilities necessary for making ethical judgments in clinical trial designs or post-marketing studies.

Regulatory Submissions: While AI can assist in preparing documentation, the final review and approval of regulatory submissions should involve human oversight to ensure compliance and quality.

Potential Risks:

Non-Compliance: AI systems may inadvertently miss regulatory updates, leading to non-compliant submissions.

Lack of Traceability: “Black-box” AI models make it difficult to explain how a decision was reached, which is problematic for regulatory audits.

Fines and Penalties: Non-compliance due to AI-driven errors can result in financial penalties and damage to the company’s reputation.

Regulatory reporting should prioritize compliance over automation. Manual review and validation of AI outputs are essential to ensure accuracy and adherence to guidelines.
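One pragmatic way to enforce this is structural: treat every AI-drafted output as a proposal that cannot reach the submission pipeline without a recorded human sign-off. The sketch below is a hypothetical illustration of such a gate; the class and field names are assumptions made for this article, not any real submission system's API.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI-drafted
# regulatory outputs. Names and structure are illustrative assumptions,
# not a real submission system's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DraftedReport:
    case_id: str
    ai_draft: str                # text proposed by the AI system
    reviewer: str | None = None  # filled in only by a human
    approved: bool = False
    audit_trail: list = field(default_factory=list)

    def record(self, event: str):
        # Every action is timestamped so the decision path is traceable.
        self.audit_trail.append((datetime.now(timezone.utc).isoformat(), event))

def human_sign_off(report: DraftedReport, reviewer: str, accept: bool, note: str):
    report.reviewer = reviewer
    report.approved = accept
    report.record(f"reviewed by {reviewer}: {'approved' if accept else 'rejected'} ({note})")

def submit(report: DraftedReport):
    # Hard gate: nothing is released without human approval on record.
    if not report.approved or report.reviewer is None:
        raise PermissionError(f"{report.case_id}: no human sign-off on record")
    report.record("released for submission")

draft = DraftedReport(case_id="CASE-0042", ai_draft="...generated ICSR narrative...")
draft.record("draft generated by AI assistant")
human_sign_off(draft, reviewer="safety physician", accept=True, note="dates verified")
submit(draft)
```

The design choice that matters is the hard gate in submit(): automation can draft and accelerate, but release requires an accountable human on record.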

Quality Review of Narrative Reports

Narrative reports in PV contain rich, unstructured information that requires contextual understanding. AI-powered natural language processing (NLP) tools, while impressive, are not yet capable of fully grasping the subtleties, idioms, and clinical nuances present in human-written text. Automated systems can miss critical details or misinterpret context, leading to inaccurate case assessments.

Potential Risks:

Loss of Critical Information: Important context may be lost if AI misinterprets narratives or fails to recognize clinical nuances.

Increased Workload: False positives generated by AI could increase the manual review burden rather than reducing it.

Over-Reliance on Automation: Trusting AI for narrative quality review could lead to a decline in manual vigilance, risking patient safety.

Quality review of narrative reports should be performed by trained medical professionals who can understand the clinical context and nuances that AI may overlook.
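A small example makes the failure mode concrete. The keyword matcher below, an intentionally naive sketch, flags “chest pain” in a narrative even though the patient denied it; real clinical NLP must handle negation, hedging, and family history, and even dedicated tools get these wrong.

```python
# Intentionally naive sketch: keyword spotting without clinical context.
# It shows one concrete way automated narrative review goes wrong.
ADVERSE_EVENT_TERMS = ["chest pain", "rash", "dizziness"]

def naive_event_flags(narrative: str):
    text = narrative.lower()
    return [term for term in ADVERSE_EVENT_TERMS if term in text]

narrative = (
    "Patient denied chest pain. Mother has a history of rash with "
    "penicillin. Patient reported mild dizziness after the second dose."
)

print(naive_event_flags(narrative))
# -> ['chest pain', 'rash', 'dizziness']
# Only 'dizziness' is a genuine patient event here: 'chest pain' is
# negated and 'rash' belongs to the patient's mother. A human reviewer
# sees this instantly; a keyword matcher or poorly tuned model does not.
```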

Critical Safety Decisions

Certain critical safety decisions should remain in the hands of experienced professionals:

Signal Validation: While AI can assist in signal detection, the final validation of safety signals should involve human expertise to ensure accuracy and contextual understanding.

Risk Management Planning: Developing and updating risk management plans requires strategic thinking and a holistic view of the drug’s safety profile that AI currently cannot replicate.

Potential Risk: Automated decision-making in these areas could result in missed critical safety issues or inappropriate risk-mitigation strategies.
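The same principle can be built into tooling: let AI rank what humans look at first, while disposition remains a human act. A hypothetical sketch, with illustrative names:

```python
# Hypothetical sketch: AI ranks candidate signals for triage, but
# disposition (validated / refuted) can only be set by a human reviewer.
# All names here are illustrative assumptions.
import heapq

class SignalQueue:
    def __init__(self):
        self._heap = []  # (-ai_priority, signal_id, description)

    def add_candidate(self, signal_id: str, description: str, ai_priority: float):
        # The AI may order the queue, i.e. decide what humans see first...
        heapq.heappush(self._heap, (-ai_priority, signal_id, description))

    def next_for_review(self):
        # ...but it can only hand an item to a person, never close it.
        if not self._heap:
            return None
        _, signal_id, description = heapq.heappop(self._heap)
        return signal_id, description

    # Deliberately, there is no auto_validate() method: validation is a
    # human act, recorded elsewhere with the reviewer's identity attached.

queue = SignalQueue()
queue.add_candidate("SIG-101", "hepatic enzyme elevation with Drug X", ai_priority=0.92)
queue.add_candidate("SIG-102", "headache cluster with Drug Y", ai_priority=0.40)
print(queue.next_for_review())  # highest-priority candidate goes to a human
```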

Rare Adverse Event Detection

AI models rely on large datasets to learn patterns and make predictions. However, rare adverse events are, by definition, infrequent and often lack sufficient data for training robust models. Attempting to detect these rare cases using AI can lead to false positives or, worse, false negatives. A model trained on sparse data may overlook critical signals or misinterpret benign cases as significant, which could delay appropriate action or generate unnecessary alarms.

Potential Risks:

Missed Detection: Failure to identify rare but critical safety signals.

False Sense of Security: Over-reliance on AI might lead to the neglect of manual review and expert judgment.

Regulatory Compliance Issues: Regulators may not accept AI-driven conclusions without traditional human validation.

For rare adverse event detection, rely on human expertise supported by traditional statistical methods. These approaches are better suited to handle the nuances and context that rare cases present.
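Among those traditional methods are disproportionality measures computed from a 2x2 contingency table of spontaneous reports, such as the proportional reporting ratio (PRR). A minimal sketch, with made-up counts:

```python
# Minimal sketch of the proportional reporting ratio (PRR), a classical
# disproportionality measure used in spontaneous-report screening.
# The counts below are made up for illustration.
import math

def prr(a: int, b: int, c: int, d: int) -> float:
    """
    2x2 contingency table of reports:
        a: drug of interest AND event of interest
        b: drug of interest, other events
        c: other drugs,      event of interest
        d: other drugs,      other events
    """
    return (a / (a + b)) / (c / (c + d))

def prr_with_ci(a, b, c, d, z=1.96):
    # Approximate 95% CI on the log scale (standard asymptotic formula).
    estimate = prr(a, b, c, d)
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = math.exp(math.log(estimate) - z * se_log)
    upper = math.exp(math.log(estimate) + z * se_log)
    return estimate, lower, upper

# Illustrative counts only: 12 reports of the event with the drug.
print(prr_with_ci(a=12, b=988, c=40, d=99960))
# A common screening heuristic flags PRR >= 2 with at least 3 reports,
# but the flag is only a prompt for expert review, not a conclusion.
```

The point is that the number is transparent and its uncertainty is explicit, which is exactly what expert reviewers need when the underlying counts are tiny.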

Handling Sensitive Patient Data

Pharmacovigilance relies on the handling of sensitive patient data, including medical histories, personal identifiers, and sometimes even genetic information. AI models, especially those involving machine learning, require vast amounts of data for training, which could pose significant privacy and security risks. Improper handling of sensitive data can lead to compliance issues, especially with regulations like GDPR and HIPAA.

Potential Risks:

Data Privacy Breaches: AI systems are vulnerable to hacking and misuse, which could expose sensitive patient information.

Bias and Discrimination: AI models trained on biased data may inadvertently propagate inequities, affecting patient safety outcomes for underrepresented groups.

Regulatory Non-Compliance: Failure to meet stringent data protection standards could result in heavy fines and loss of credibility.

Limit AI applications to de-identified data and ensure robust data governance frameworks are in place. Human oversight is essential to ensure that sensitive patient data is handled ethically and securely.
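A starting point is to narrow what an AI component can see at all. The sketch below is a deliberately minimal, illustrative redaction pass; production de-identification relies on validated tools and human QC, and a handful of regular expressions is nowhere near sufficient on its own.

```python
# Deliberately minimal, illustrative de-identification pass. Production
# systems use validated de-identification tooling plus human QC; a few
# regexes like these are not sufficient on their own.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN pattern
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),
]

def redact(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

record = "Jane Roe, DOB 04/12/1961, jane.roe@example.com, reported nausea."
print(redact(record))
# -> "Jane Roe, DOB [DATE], [EMAIL], reported nausea."
# The name 'Jane Roe' survives naive patterns - one reason why
# de-identification itself needs expert oversight before any AI training.
```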

Differentiating Appropriate and Inappropriate AI Use Cases

To help decision-makers distinguish between suitable and unsuitable AI applications in PV, consider the following guidelines:

Complexity of Judgment: If a task requires complex medical judgment, ethical considerations, or strategic decision-making, human expertise should take precedence over AI.

Data Availability and Quality: AI is most effective when trained on large, high-quality datasets. For rare events or situations with limited data, AI may not be reliable.

Regulatory Acceptance: Consider whether regulatory authorities have provided guidance on, or acceptance of, AI use in specific PV processes.

Transparency and Explainability: If the decision-making process needs to be fully transparent and explainable, AI might not be suitable, as many AI algorithms operate as “black boxes”.

Human Oversight Requirement: For tasks where human oversight is critical for safety and quality assurance, AI should play a supportive role rather than a decisive one.
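These five criteria can even be written down as an explicit checklist that a governance board completes per use case. The sketch below is simply this article's guidelines encoded in Python; it is illustrative, not an established framework:

```python
# Illustrative encoding of the five guidelines above as a triage
# checklist. It mirrors this article's criteria; it is not an
# established regulatory framework.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    needs_complex_judgment: bool     # medical, ethical, or strategic calls
    has_rich_training_data: bool     # large, high-quality datasets exist
    regulator_has_accepted: bool     # guidance/precedent for AI use here
    must_be_fully_explainable: bool  # decisions must be traceable end-to-end
    oversight_is_critical: bool      # human sign-off required for safety

def triage(uc: UseCase) -> str:
    if uc.needs_complex_judgment or not uc.has_rich_training_data:
        return "human-led (AI unsuitable or unreliable here)"
    if uc.must_be_fully_explainable and not uc.regulator_has_accepted:
        return "human-led (explainability/acceptance gap)"
    if uc.oversight_is_critical:
        return "AI-assisted, human-decided"
    return "candidate for AI automation with monitoring"

print(triage(UseCase("causality assessment", True, False, False, True, True)))
print(triage(UseCase("duplicate case detection", False, True, True, False, False)))
```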

Conclusion

The promise of AI in pharmacovigilance is exciting, but the indiscriminate application of AI can be risky. Understanding where AI adds genuine value and where it poses dangers is crucial for making informed decisions. As a PV leader, your role is to strike the right balance between innovation and safety, ensuring that AI serves as a tool to enhance — not replace — human expertise in safeguarding patient health.

By being critical of AI claims and focusing on patient-centric outcomes, we can embrace technology without losing sight of our primary mission: ensuring the safety and well-being of patients around the world.
