
© 2025 MJH Life Sciences™, Pharmaceutical Technology - Pharma News and Development Insights. All rights reserved.
Eva-Maria Hempe, NVIDIA, says AI collaboration optimizes life science research by automating routine tasks, freeing human experts for complex strategy, ethics, and the exploration of data-scarce areas.
In Part 3 of this three-part interview regarding the presentation “The State of AI in Next-Generation R&D” at CPHI Europe 2025, held Oct. 28–30 in Frankfurt, Germany, Eva-Maria Hempe, Head of Healthcare & Life Sciences, NVIDIA, highlights the strategic necessity of integrating artificial intelligence (AI) while maintaining critical human oversight and addressing geopolitical concerns surrounding data sovereignty. She suggests that the future involves a truly collaborative framework in which technology optimizes routine tasks, freeing human researchers for complex decision making. Hempe specifically emphasizes Europe's burgeoning role in developing trusted, ethical, and sovereign AI tailored for the healthcare and life sciences sectors.
In this collaborative model, AI is positioned to manage high-throughput activities, such as data mining, crawling through vast datasets, early candidate selection, and performing extensive literature reviews that exceed human capability, according to Hempe. Conversely, she underscores the necessity of human expertise, particularly for interpreting complex results, devising experimental strategies, and guaranteeing ethical standards. Hempe adds that a core limitation of AI arises in exploring new areas where data is scarce; since AI relies fundamentally on data, human experts must intervene where knowledge gaps exist. In summing up this essential partnership, she stresses, "Human expertise remains really crucial for interpreting those complex results and coming up with experimental strategies, and especially also for ensuring ethical standards."
Further exploring advanced applications, Hempe notes the potential of AI agents: reasoning models equipped with memory and tools and designed for multi-step tasks. She explains that an AI agent can turn a research question into a literature search, analyze the findings, generate a hypothesis, and potentially initiate a virtual screening workflow to create molecule variants, ultimately summarizing the effort in a comprehensive report for the researcher. Geopolitically, the increasing focus on interdependency and sovereignty means helping European biopharma secure sensitive data by building sovereign AI models, according to Hempe. Although Europe's current share of AI compute is around 4%, initiatives are underway, including public-private partnerships, to develop AI infrastructure and “AI data factories” that turn unstructured information into insights at scale, driving ethical and sovereign AI leadership.
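The multi-step agent workflow Hempe outlines (question, literature search, analysis, hypothesis, virtual screening, report, with a human checkpoint before costly actions) can be sketched as a simple pipeline. The sketch below is purely illustrative: the function names, stand-in bodies, and approval gate are assumptions for exposition, not an actual NVIDIA product or PubMed API.

```python
# Illustrative sketch of the multi-step AI-agent workflow described in the
# interview: question -> literature search -> analysis -> hypothesis ->
# virtual screening -> report. All step bodies are hypothetical stand-ins,
# not real literature-search or screening calls.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Running record the agent carries between steps (its 'memory')."""
    notes: list = field(default_factory=list)

def literature_search(question, memory):
    # Stand-in for querying PubMed or another literature database.
    papers = [f"paper discussing: {question}"]
    memory.notes.append(f"found {len(papers)} paper(s)")
    return papers

def analyze(papers, memory):
    # Stand-in for the reasoning model extracting signals across papers.
    hypothesis = f"hypothesis derived from {len(papers)} paper(s)"
    memory.notes.append("analysis complete")
    return hypothesis

def virtual_screen(hypothesis, memory, n_variants=3):
    # Stand-in for generating variants of a known molecule in silico.
    variants = [f"variant-{i} for: {hypothesis}" for i in range(n_variants)]
    memory.notes.append(f"screened {len(variants)} variants")
    return variants

def run_agent(question, approve=lambda step: True):
    """Chain the steps, with a human-in-the-loop gate before the
    (hypothetical) expensive screening step, and summarize into a report."""
    memory = AgentMemory()
    papers = literature_search(question, memory)
    hypothesis = analyze(papers, memory)
    variants = []
    if approve("virtual_screen"):  # human oversight before costly work
        variants = virtual_screen(hypothesis, memory)
    return {
        "question": question,
        "hypothesis": hypothesis,
        "candidate_variants": variants,
        "trace": memory.notes,  # what the agent did at each step
    }

report = run_agent("Which pathway drives resistance to the target compound?")
print(report["hypothesis"])
```

In a real deployment, each stand-in would be replaced by a tool call (a literature database query, a screening service), and the `approve` callback is where the human reviewer Hempe describes would sign off before the lab orders expensive reagents.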
Check out Part 1 and Part 2 of this interview and access all our CPHI Europe coverage!
*Editor’s Note: This transcript is a direct, unedited rendering of the original audio/video content. It may contain errors, informal language, or omissions as spoken in the original recording.
There's always going to be a human, or the experiment, in the loop. In a way, that's what I really like about AI in this field: you always still have the experiment, which gives you an answer, like, are you going down the right track or not? So, what I see in the future is a really collaborative future. There's going to be some AI-driven automation for the more routine parts: for the data mining, crawling through large amounts of data for early candidate selection, doing some of the literature review, finding the signals across more papers than a human could do. But then the human expertise remains really crucial for interpreting those complex results and coming up with experimental strategies, and especially also for ensuring ethical standards. In line with what I said before, you have to be aware of the limitations of AI, and particularly in scarce-data areas, AI is limited. AI is based on data, and we want to explore new things, so by definition, these are going to be areas where we're going to have less data. There you cannot rely on AI in the same way as you can in very well-researched areas. So, that's then where the human expert comes in, and so I see a lot of the more routine steps to be automated, but then for the real nuances, for some of the creativity, maybe also for making a bold bet, that's where human-in-the-loop oversight will really be essential.
I think there is an element about sovereignty as well, and we're seeing some of that with some of the large geopolitical developments: there is increasingly a question of interdependency and sovereignty. What we do in Europe, we're really helping European pharmas to build sovereign AI models so that they can secure their sensitive data but at the same time don't have to compromise on being able to do large-scale biological modeling. And that also extends to AI infrastructure, and we believe that Europe has the opportunity to really lead in trusted, ethical, and sovereign AI for healthcare and life sciences, and it's going to be built on strong regulatory frameworks and also growing AI infrastructure. Right now, the share of AI compute in Europe is somewhere around 4%, but this is changing, and we're seeing a lot of that. We're seeing a lot of initiatives coming in, driven both by the public sector as well as by public-private partnerships, to really address this and to have AI factories, AI data factories, that are turning that unstructured information into insights at scale in Europe.
One other area we are really interested in is AI agents. It's a bit of a buzzword recently, but what AI agents are is a reasoning model with some memory and some tools, and you can really use this to give multi-step tasks to an AI. One of the examples is what we talked about before, literature review. What an AI agent can do, if you give it the right access to PubMed or some other literature database, is you can give it a question, it turns that question into a literature search, it then analyzes the literature search, turns it into a hypothesis, and then you could even go further and add something like a virtual screening workflow to it. So once the AI has reasoned that your target protein is this and this and the standard of care is a particular molecule, you can start creating variants of that molecule through a virtual screening workflow and really explore in that space further, and then the last step is really summing it all up into a report. And then this is something the researcher can go and continue working with, or, with what we saw with the lab in the loop, you could even think about it for the second step (we're not quite there yet, but we're getting there): can you link that to an automated lab which then runs the experiments by itself, maybe with a human in the loop who does a last oversight and keeps the lab from ordering really expensive reagents or really expensive compounds? But that's one of the ways we see the future unfolding.