
© 2026 MJH Life Sciences™, Pharmaceutical Technology - Pharma News and Development Insights. All rights reserved.
Richard Jaenisch explains how digitally interactive SOPs and Human-AI Training Parallelization help build, measure, and continuously improve AI-era workforce skills.
This video interview expands on a PDA Week 2026 session exploring how to integrate generative AI into biomanufacturing without sidelining human expertise. Richard Jaenisch, senior director of Education, Outreach, and Digital Experience at Open Biopharma Research and Training Institute, and David Jaenisch, director of Technology at Prompting Integration Consulting, LLC (PICLLC), discuss the Human-AI Training Parallelization: Learning (HATPL) system and its real-world impact on workforce development.
They describe how digitally interactive SOPs (DISOPs) support HATPL, turning procedures into adaptive, competency-focused training modules and assessments. At Open Biopharma, DISOPs are made available to all training courses and also deployed internally to track competency assessment pass rates over time. Because the system asks one question at a time and adjusts difficulty based on each learner’s answers and validation content, instructors gain a kind of “Rosetta Stone” view of how skills develop across sections and sessions.
In their pilot on AI literacy, staff already showed strong baseline competence after several years of human–AI training, but regular DISOP-based assessments plus small, project-centered trainings revealed measurable growth and clearer risk mitigation. The parallel use of DISOPs and hands-on projects creates a traceable record of individual capabilities, helping leaders align people to the right initiatives and identify where targeted upskilling is needed.
The interview also highlights how near-real-time insight prevents learners from falling behind in intensive workshops. If a participant’s competency scores start to lag, instructors can immediately adjust curriculum and support so that no one is “done” on day one of a two-week program. Ultimately, Richard Jaenisch argues that structured, adaptive tools like DISOPs are essential for making generative AI a force-multiplier for people and processes, rather than a replacement for human expertise.
Be on the lookout for Part 3 of this interview and check out Part 1: Human-AI Training Parallelization: Empowering Life Sciences Without Replacing Expertise!
Editor's note: This transcript is a lightly edited rendering of the original audio/video content. It may contain errors, informal language, or omissions as spoken in the original recording.
My name is Richard Jaenisch. I am the senior director of Education, Outreach, and Digital Experience at Open Biopharma Research and Training Institute in Carlsbad, California.
I am David Jaenisch. I am the director of Technology at Prompting Integration Consulting, LLC.
We deploy DISOPs at Open Biopharma in two different ways. One is that we provide it to any training course that happens at Open Biopharma. So anyone hosting their own course, developing a training with us, co-developing a training, whatever they're doing, we will have it on-site at Open Biopharma, and we will give them the opportunity to take that curriculum, upload it into DISOPs, and run that competency assessment.
That data doesn't really give us much of the ongoing picture we like to see, unless those individuals continue with that trend or do multiple trainings with us so we can see the growth over time. We deployed this tool in October of last year, so as you would imagine, we haven't had enough consistent, specific trainings from multiple groups during that time period to read much into it.
However, we also deploy it internally. Internally, what we've found is that we really look at the competency assessment pass rates and ask, "Okay, how are people doing with these things?" We have noted that over time we see a lot more progress, and I think that's twofold. One is that people get used to the structure as the system becomes more familiar.
But because the system only asks one question at a time, and because the difficulty is adaptive, changing the flow according to what you are doing, how you are answering, and how your answers match the validation material and validation questions, the pass rates can really change from section to section. So we have a kind of Rosetta Stone, if you will, that allows us to see and assess how those competency assessments trend over time.
Generally speaking, we have found that a lot of our folks are learning a lot more. The first place we deployed this was our AI literacy campaigns, because it made the most sense and because I had a lot of the material ready to go. That's how we did our pilot with it, and it worked out very well. We found that most of our staff already had a good level of competency ahead of time, probably because we've been deploying human-AI training parallelization for three years now.
But deploying DISOPs at regular intervals, and interjecting occasional small, very practical trainings built around their projects, meant that the combination of the two, the hands-on project and the certification that serves as a traceable record, gives us that parallel track. We can understand their growth and have risk mitigation because we understand where their skills and competencies are. It's very helpful for us, not only in figuring out who aligns with what projects, but also in identifying who may need certain skills.
There are certain skills where maybe there's a deficiency. Maybe we're in the middle of a workshop, we're deploying this, and we find an individual who's starting to lag behind. The benefit is that we can then change our curriculum to make sure they don't fall behind. Knowing that early means we don't have to worry about it later. In a two-week workshop, if you fall behind on the first day, you're done, man. That's going to be really difficult. So when we deploy it in that situation, we see very quickly that individuals benefit, because we don't see them lag in that competency assessment in the days ahead.