© 2025 MJH Life Sciences™, Pharmaceutical Technology - Pharma News and Development Insights. All rights reserved.
In a three-part series, the author explores the potential use of agentic AI in pharmaceutical R&D.
Agentic artificial intelligence (AI)—the autonomous coordination of goal-driven AI “agents”—is arguably the most significant shift in AI since the advent of ChatGPT, because of its potential to redefine the way that organizations operate. Its autonomy lies in the ability of AI agents and their orchestrators (or “super-agents”) to reason, synthesize knowledge, and adaptively determine and coordinate the tasks and interactions needed to achieve a goal.
The ability to reason, anticipate, generate insight and knowledge, and make better decisions is perfectly matched to life sciences, an industry that is data-rich, process-heavy, and outcome-critical. As the pharmaceutical industry’s fascination with agentic AI grows, this three-part series explores the technology’s potential in pharma R&D, beginning with a summary of AI’s evolution to date.
As the global pharma industry adapts to advances in science, evolving regulatory demands, and pressures on price and margins, AI has gained traction both as a practical solution to processing soaring workloads and as a means of transforming the work that companies do. Today, the technology is being deployed with increasing momentum, as well as rising ambition, as AI continues to progress and as organizations come to understand its full potential.
Generative AI (GenAI), which came to mainstream attention with the launch of ChatGPT in 2022, has proved a disruptive catalyst for whole industries, offering the ability to take what has gone before, amalgamate knowledge, distill key facts and fresh insights, and present these in new ways. In life sciences R&D, across functions including regulatory affairs and drug safety/pharmacovigilance, AI has paved the way for intelligent automation of highly labor-intensive routine processes. The technology is being actively used in marketing authorization application preparation, product change control/regulatory impact assessment management, adverse event case processing, and safety reporting.
Through these proliferating applications, AI has contributed to tangible improvements in process cost-efficiency. Gains have included accelerated task execution, honed accuracy and consistency in process output, and large-scale resource optimization, as teams of professionals have clawed back time for more strategic and challenging tasks. In deploying AI across these discrete use cases, pharma organizations and their function leads have learned a lot about the technology’s potential and how best to leverage it to get good and trusted results.
All of this has paved the way for an even bigger wave of AI advancement, in the form of agentic AI. This represents the move from the automation of existing or pre-defined processes to much greater autonomy over what AI does and how. Instead of single AI applications managing a process, agentic AI is more collaborative, multiplying the efficiency gains. It involves the autonomous contributions of a series of specialist AI agents to fulfill an assigned goal via the most effective means possible. These agents are coordinated by an AI “super-agent.” Given a goal, this AI orchestrator and its charges each deploy their respective intelligence, experience, and reasoning to fulfill their contribution to the wider goal in the most effective and efficient way they can. Combining all of this autonomous decision-making enables step changes in delivery through in-flight process adaptation.
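The orchestration pattern described above can be illustrated with a minimal sketch. The class names, agent roles, and task plan below are hypothetical, invented purely for illustration; in a real system each agent would invoke a model or tool, and the super-agent would derive the task decomposition itself through reasoning rather than receive it ready-made.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    """A specialist agent owning one capability (names are illustrative)."""
    name: str
    capability: str

    def run(self, task: str) -> str:
        # A real agent would call a model or tool here; this stub just
        # records the work it was asked to do.
        return f"{self.name} completed: {task}"


class SuperAgent:
    """Orchestrator that routes each sub-task to the matching specialist."""

    def __init__(self, agents: list[Agent]):
        self.agents = {a.capability: a for a in agents}

    def achieve(self, goal: str, plan: list[tuple[str, str]]) -> list[str]:
        # `plan` pairs a required capability with a sub-task. If no agent
        # covers a capability, the orchestrator escalates rather than guess.
        results = []
        for capability, task in plan:
            agent = self.agents.get(capability)
            if agent is None:
                results.append(f"escalate to human: no agent for {capability!r}")
            else:
                results.append(agent.run(task))
        return results


team = [Agent("LitBot", "literature"), Agent("SafetyBot", "pharmacovigilance")]
orchestrator = SuperAgent(team)
report = orchestrator.achieve(
    "draft safety summary",
    [("literature", "collect published AE data"),
     ("pharmacovigilance", "classify adverse events")],
)
```

The key design point is that the orchestrator holds no domain logic of its own; it only matches capabilities to agents and handles the gaps, which is what allows the plan to adapt in flight.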
So why is this significant for pharma?
Agentic AI is not just about doing things more efficiently and more accurately, as with previous incarnations of AI. Reaching beyond the scope of individual tasks, it offers potential for extended process reinvention.
Agentic AI is disruptive to the point that Microsoft’s CEO has suggested it will upend cloud software delivery models (1). The reason for this is that AI has the potential to diminish the centrality of the user interface by autonomously invoking multiple applications or tools to complete a broader task. This doesn’t just have implications for the way companies procure and use software; it also renews the challenge to the traditional boundaries between functional teams and between their respective data sets.
All of the above has renewed emphasis on the quality and richness of companies’ underlying data assets, and how well these can be combined and leveraged by different agents in different contexts. It also raises expectations around how confidently AI-based reasoning and distilled insights can be trusted.
It is here that the potential of agentic AI meets the need for robust system and data interoperability, clearly defined parameters (multi-agent operating frameworks), and strong governance—that is, if agentic AI is to be validated and relied upon for use in a regulated, patient-oriented context.
Ideally, a balance should be struck between empowering agentic AI with all of the knowledge, detail, and logic it needs to add value above and beyond what humans can achieve, and containing the technology’s leeway (so that it doesn’t overstep or go rogue)—a concept known as “bounded autonomy”.
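One concrete way to think about bounded autonomy is as a policy gate that every proposed agent action must pass before execution. The sketch below is a simplified illustration under assumed policy categories (the action names and policy sets are invented for the example): routine actions proceed, high-stakes actions are held for human review, and anything outside the agent's defined bounds is refused outright.

```python
# Illustrative policy sets; a real deployment would load these from a
# governed, auditable configuration rather than hard-code them.
ALLOWED_ACTIONS = {"summarize", "classify", "draft"}
REQUIRES_HUMAN = {"submit_to_regulator"}


def bounded_execute(action: str, payload: str) -> str:
    """Run an agent action only if policy permits; hold or refuse otherwise."""
    if action in REQUIRES_HUMAN:
        # High-stakes actions are queued for a human, not executed autonomously.
        return f"HELD for human review: {action}({payload})"
    if action not in ALLOWED_ACTIONS:
        # Anything outside the agent's bounds is an error, not a judgment call.
        raise PermissionError(f"action {action!r} is outside the agent's bounds")
    return f"executed {action}({payload})"
```

The point of the pattern is that the boundary is enforced outside the agent's own reasoning, so even a misaligned or hallucinating agent cannot act beyond its remit.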
This last provision, which will be explored in more detail in the third article of this series, is critical. GenAI’s detractors have long been concerned about the various platforms’ propensity to “hallucinate” to fill gaps in their knowledge when providing an answer or summary (although this is being rapidly addressed) (2). Without transparency enabling users to check the authenticity of sources or the veracity of findings, no company would be able to trust the output of GenAI tools, especially in a strictly regulated environment.
In an agentic AI context, where systems are afforded new autonomy across extended workflows, the stakes for governance rise sharply. The risks go well beyond incorrect outputs; they include unintended data movement, loss of operational control, misaligned decision-making, and blurred lines of accountability. Managing these demands a shift from static safeguards to dynamic oversight: governance that can adapt as agents interact, roles evolve, and contexts change. This will be explored in depth in the final part of the series, but the core point is clear—without a robust, adaptive governance framework, the very autonomy that makes agentic AI powerful can also amplify risk.
In summary, the arrival of agentic AI introduces both significant potential for operational reinvention and the need for advanced systems and frameworks to facilitate positive transformation while mitigating risk.
The next article in this three-part series will consider the big-picture potential for agentic AI in pharma R&D functions, and describe some initial use cases now being piloted. Part Three will then draw out the critical success factors involved to unlock the technology’s full potential.
1. Satya Nadella | BG2 w/ Bill Gurley & Brad Gerstner. YouTube.com, Dec. 12, 2024 (accessed Aug. 15, 2025).
2. Google Cloud. What Are AI Hallucinations? cloud.google.com (accessed Aug. 15, 2025).
Jason Bryant is vice-president, Product Management for AI & Data at ArisGlobal, based in London. A data science actuary, he has built his career in fintech and health-tech, and specializes in AI-powered, data-driven, yet human-centric product innovation. He previously led an AstraZeneca digital incubator and today remains on the board of a health charity, Scleroderma & Raynaud's UK (SRUK), which is dedicated to improving the lives of people affected by those conditions in the UK.