Why Data, Trust, and Skills Are the Foundations of AI-Driven Pharmacovigilance

Beena Wood, Qinecsa, explains how AI could revolutionize pharmacovigilance, provided that gaps in data, trust, and skills, along with organizational barriers, are addressed first.

PharmTech recently spoke with Beena Wood, chief product officer, Qinecsa Solutions, to get her perspective on trends that shaped pharmaceutical development and manufacturing in 2025 and where things are headed in 2026. In this part 2 of our three-part interview, Wood outlines a forward-looking vision for how AI could transform pharmacovigilance (PV), while candidly addressing the structural, regulatory, and organizational barriers that must be overcome first. She imagines a future in which AI-enabled surveillance systems can rapidly detect safety signals worldwide—across languages, geographies, and datasets—within hours rather than months. In Wood’s words, this would allow systems to “detect—not weeks, not months, but hours—the same reports across multiple languages,” cross-reference similar cases globally, and proactively alert manufacturers and regulators in near real time.

While Wood emphasizes that this vision is technically feasible, she stresses that today’s biggest obstacle is not algorithmic sophistication but weak data foundations. Fragmented, non-interoperable, and poorly validated datasets severely limit AI’s effectiveness in PV, she says. As she puts it, “No amount of AI sophistication will help” if the underlying data are not harmonized. She notes that leading organizations in 2025 are those investing in these “boring foundations” now, enabling faster and more scalable progress later.

Trust and explainability emerge as another critical theme, with Wood framing trust as a combination of transparency, auditability, accuracy, and risk-based evaluation. Black-box AI systems, she argues, will not gain traction in high-risk domains like safety monitoring, especially under such evolving regulatory frameworks as the EU AI Act. Clear guardrails, ethical frameworks, and responsible AI practices are essential for adoption.

Wood also highlights economic and organizational readiness as key barriers. PV teams must compete for investment by clearly articulating return on investment, while organizations need robust change management strategies to integrate AI effectively. She cautions that the real skills challenge is not simply technical proficiency but critical thinking—teaching professionals how to evaluate AI outputs, recognize limitations, and identify hallucinations. Ultimately, Wood sees AI not as a replacement for human expertise, but as an amplifier: tools that, when used responsibly, can make skilled professionals “super humans” rather than substituting for them.

Access Part 1 of this interview!


Transcript

Editor's note: This transcript is a lightly edited rendering of the original audio/video content. It may contain errors, informal language, or omissions as spoken in the original recording.

So think of a patient in rural India. Say they are on a newly approved gene therapy, and they experience an unusual reaction. And then think about the possibility of an intelligent surveillance system that, within hours, is able to detect—not weeks, not months, but hours—the same reports across multiple languages, is able to cross-reference it with about 47 similar cases globally, and then it identifies that there is a genetic marker correlation using real-world data analysis, and then it simultaneously alerts the manufacturers as well as the regulators, with agentic AI already coordinating the proactive measures.

That's where I want PV to get to. And it is theoretically possible with AI capabilities, with proactive surveillance, but we don't have the foundations that we need in order to make it real. And that's where the barriers that you're talking about come into play, and the things that I talked about earlier. So, for example, the data foundations, the data heterogeneity problem, and the fragmentation problem. The uncomfortable truth for us is that we are building AI systems on data foundations that aren't standardized, aren't interoperable, and are often not validated for AI use cases. And so, when you have this data scattered across incompatible formats, legacy systems, and regulatory silos, and you want to create that AI model without harmonization, it's going to be extremely difficult.

No amount of AI sophistication will help. And so that's, I think, an uncomfortable truth, and it's something we came to realize this year, in 2025: the good organizations are the ones who are starting to do those boring foundations now so they can scale very quickly afterwards.

I think the other barrier is still the explainability and trust of these AI systems. So, when I look at trust, I think of it as explainability, meaning the transparency and the auditability, as well as the accuracy, weighed by the risk. That's how I look at trust in an AI system.

All of these play a part in either magnifying or diminishing the use cases for AI. So, if we still talk about AI as a black box, it's not going to work. It's super important to have those guardrails. What kind of guardrails, what kind of ethical framework, what kind of responsible AI are we bringing? How can it be trusted? So I think that's another important barrier for us, as vendors as well as consumers. And put that in the context of the EU AI Act and high-risk systems—I know there are still conversations about what becomes a high-risk system within our domain—I think all of that becomes really important.

And then the economic reality is actually a barrier. I can say, definitely, within my domain, it's about asking for initial investment for AI that promises efficiency gains. We know there'll be efficiency gains, but how do we sit around the R&D table, where PV and safety can build those ROI cases and use cases, and compete against other use cases, in order to get those investments to be able to do it? How do we build that ROI for the company?

I have to say the organization’s readiness is another one. I think not looking at it as a small change, but actually having an entire change management effort for AI, and even modifying the way that we look at skill sets. I mean, is it a job description, or are we building a set of skills we need that are transferable across the organization? So I think those are the barriers and what needs to be overcome.

What concerns me about that is I'm not sure, as an industry, if we're solving for the right problem. When we are talking about skills or barriers, and we have contradictory reports about shadow AI, with 90% using AI anyway, whether it's enterprise approved or not, the real skills challenge there is: How are we teaching people to critically evaluate AI outputs in the context of their work? For example, in the context of safety, how are we teaching people to evaluate whether AI is operating within its competency or hallucinating confidently? How do we teach that? How do we teach when AI cannot answer a question? And so that's how I would frame that question.

And I would also see it as not thinking about technical skills in isolation. AI skills are sometimes considered a technical skill, so not thinking about that in isolation, but thinking of it as an intersection of multiple capabilities and collaboration. So, for example, data scientists and technology folks, do they understand the PV domain? Do the PV domain folks understand technology? How do we bring that together as a collaboration to bridge those gaps? Are there change management capabilities so we can actually get to those transformative successes?

I think human intellect is inherently beautiful. And I don't think it can truly ever be replaced, but what we can do is enable these AI tools to maybe make us super humans, is how I would think about it.