
In the concluding segment of a multi-part interview, Jason Bryant of ArisGlobal explains how human-in-the-loop and human-on-the-loop are both still relevant in AI implementation in pharma.
First in a series of written articles and now in a multi-part video interview, Jason Bryant, senior vice-president of Product Management—AI at ArisGlobal, has presented a comprehensive evaluation of agentic AI's potential to assist processes in the pharmaceutical industry.
The third and final portion of Bryant’s exclusive video interview with PharmTech® deals with the processes known as human-in-the-loop (HITL) and human-on-the-loop (HOTL), and how they relate to AI workflows. Bryant uses the parallel of air traffic control to explain.
“Think of agents within your multi-agent system as the airplanes,” Bryant says in the interview. “So the agents are the planes, and the agents have their own autonomy to act. The orchestrator in this analogy is air traffic management. So this is a coordination layer that is setting rules and setting constraints and sharing context, and it's dealing with all of these planes in the air at a volume and a speed and a complexity that's beyond human capabilities, and that's why we have these complex systems that keep those planes in the air doing what they're meant to be doing.”
HOTL does not replace HITL in this analogy, Bryant explains; HITL remains.
“There are deliberate points in the process where you design the human to be, and in this case, for example, the pilot is a human, and the pilot flies the plane,” he says. “They take off, they land the plane. They are in there. That's their role. But with HOTL, you have humans that are not only pilots that can respond to the unexpected, but you've got humans in the control towers that are actually even in control of the air traffic management system itself.”
The third part of Bryant’s interview can be viewed above. Watch the first part here, and the second part here.
The three articles written by Bryant are available here, here, and here.
Editor's note: This transcript is a lightly edited rendering of the original audio/video content. It may contain errors, informal language, or omissions as spoken in the original recording.
Most of what is being marketed as agentic AI today is either vapor or, frankly, fake. So the mindset and the design shift here is critical. You have to design for open platforms. You have to design for interoperability and for discovery. And if you build walled gardens now, you limit those possibilities; you limit the entire enterprise; later, you're really going to shoot yourself in the foot.
So you start with an open platform mindset, shared orchestration, shared governance, shared agent capabilities. It's all in the sharing and the mindset, and the platform becomes architected for interoperability. So you want to avoid traditional and sort of departmental little mini-AI ecosystems. Really, that's going to become technical debt.
So I think it's important to acknowledge that, internally, teams within these walled gardens will build their own agents. That might be experimental, which is healthy, or it might be because, for internal use, they want to promote an app store of agents within the enterprise, from which the best ones emerge: the ones that solve particular problems and elegantly fix repeated frustrations. Again, that's useful, but the real enterprise value here requires connective tissue.
Data quality still matters, and one of the biggest reasons is that it's not just bad in, bad out. When you're talking about systems that have these controllable levels of autonomy, that autonomy amplifies value, but without data quality it also amplifies errors, and they can compound.
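To make the compounding concrete, here is a minimal sketch (an illustration added here, not from the interview); the 98% per-step accuracy figure and the independence of errors are both hypothetical simplifications:

```python
# Minimal illustration: with independent per-step error rates, the
# end-to-end reliability of an agent chain decays multiplicatively.

def chain_reliability(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in the chain is correct, assuming
    errors are independent (a simplifying assumption)."""
    return per_step_accuracy ** steps

for steps in (1, 5, 20):
    print(f"{steps:>2} steps at 98% per-step accuracy -> "
          f"{chain_reliability(0.98, steps):.0%} end-to-end")
# 1 step -> 98%, 5 steps -> 90%, 20 steps -> 67%
```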
HITL, human in the loop, is where humans are designed to be inserted at fixed points within a process. That's a design choice. HOTL, human on the loop, is dynamic: humans are ultimately controlling the system, but they are also invoked by the system, say when there is certain risk or uncertainty, or when anomalies appear.
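The distinction can be sketched in code. The following is a hypothetical Python sketch of the two patterns, not any vendor's implementation; the names, the risk_score field, and the 0.8 threshold are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    risk_score: float        # 0.0 (routine) .. 1.0 (high risk)
    is_anomalous: bool = False

def extract(case: str) -> Draft:
    # Stand-in for an autonomous agent step.
    return Draft(text=f"summary of {case}", risk_score=0.3)

def submit(draft: Draft) -> str:
    return f"submitted: {draft.text}"

def hitl_pipeline(case: str, human_review) -> str:
    """HITL: the human checkpoint sits at a fixed, designed-in point
    and is always invoked."""
    draft = extract(case)
    approved = human_review(draft)      # deliberate design choice
    return submit(approved)

def hotl_pipeline(case: str, supervisor, risk_threshold: float = 0.8) -> str:
    """HOTL: the system runs autonomously and escalates to a human
    only on risk, uncertainty, or anomalies."""
    draft = extract(case)
    if draft.risk_score > risk_threshold or draft.is_anomalous:
        draft = supervisor(draft)       # dynamic, threshold-driven
    return submit(draft)

print(hitl_pipeline("case-001", human_review=lambda d: d))
print(hotl_pipeline("case-002", supervisor=lambda d: d))
```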
So the air traffic control model, which obviously has been around long before GenAI, captures this concept pretty well, I think. In the analogy, think of agents within your multi-agent system as the airplanes. So the agents are the planes, and the agents have their own autonomy to act. The orchestrator in this analogy is air traffic management. So this is a coordination layer that is setting rules and setting constraints and sharing context, and it's dealing with all of these planes in the air at a volume and a speed and a complexity that's beyond human capabilities, and that's why we have these complex systems that keep those planes in the air doing what they're meant to be doing.
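As a rough sketch of that coordination layer, the hypothetical Python below shows an orchestrator that sets constraints, shares context, and lets registered agents act within those bounds; every name in it is illustrative:

```python
# Illustrative orchestrator: the "air traffic management" layer.

class Agent:
    def __init__(self, name: str):
        self.name = name

    def propose(self, context: dict) -> str:
        # An autonomous proposal; a real agent would reason over the context.
        return f"{self.name}: next step given {len(context)} known states"

class Orchestrator:
    def __init__(self, constraints):
        self.constraints = constraints   # rules every action must satisfy
        self.shared_context = {}         # context visible to all agents
        self.agents = []

    def register(self, agent: Agent) -> None:
        self.agents.append(agent)

    def tick(self) -> None:
        for agent in self.agents:
            action = agent.propose(self.shared_context)    # agent autonomy
            if all(rule(action) for rule in self.constraints):
                self.shared_context[agent.name] = action   # accepted
            else:
                self.shared_context[agent.name] = "held"   # outside the rules

orch = Orchestrator(constraints=[lambda a: "forbidden" not in a])
orch.register(Agent("intake"))
orch.register(Agent("triage"))
orch.tick()
print(orch.shared_context)
```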
Now, model context protocol, MCP, is, frankly, how you connect the data that's being shared. So these planes are sharing data about their position, about their velocity, their speed, and about their identity, about who they are, and that's a common layer that allows these airplanes, as well as the orchestrator, to really understand each other. So that's a protocol about sharing data; it's really a universal adapter for data.
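A minimal sketch of that idea, assuming a simple JSON wire format: the schema below mirrors the analogy's identity/position/velocity broadcast. It illustrates the notion of a common data layer only; it is not the actual Model Context Protocol specification:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AgentTelemetry:
    agent_id: str        # identity: who the agent is
    state: str           # position: where it is in its workflow
    throughput: float    # velocity: how fast it is progressing

    def to_message(self) -> str:
        # One common wire format lets agents and the orchestrator
        # understand each other without point-to-point adapters.
        return json.dumps(asdict(self))

msg = AgentTelemetry("case-intake-01", state="triaging", throughput=4.2).to_message()
print(msg)  # {"agent_id": "case-intake-01", "state": "triaging", "throughput": 4.2}
```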
There's another protocol about how the agents can connect to each other rather than connecting to data. And here in our analogy, this is collision avoidance. Effectively, planes are able to negotiate directly in the absence of humans. You climb, I descend. And that's done within the rules of the system, within this bounded autonomy.
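That "you climb, I descend" negotiation can be sketched hypothetically as two agents resolving a conflict directly, but only by choosing moves the system permits; this is an illustration of bounded autonomy, not a specific agent-to-agent protocol implementation:

```python
ALLOWED_MOVES = {"climb", "descend", "hold"}   # the bounds the system sets

def negotiate(a_pref: str, b_pref: str) -> tuple:
    """Two agents resolve a conflict directly, without a human, but only
    by choosing from the moves the system allows."""
    assert a_pref in ALLOWED_MOVES and b_pref in ALLOWED_MOVES
    if a_pref == b_pref == "climb":
        return "climb", "descend"   # B yields: "you climb, I descend"
    return a_pref, b_pref

print(negotiate("climb", "climb"))  # ('climb', 'descend')
```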
Now, with HITL and HOTL, to come back to your question: HITL is not replaced by HOTL. HITL remains. There are deliberate points in the process where you design the human to be, and in this case, for example, the pilot is a human, and the pilot flies the plane. They take off, they land the plane. They are in there. That's their role. But with HOTL, you have humans that are not only pilots that can respond to the unexpected, but you've got humans in the control towers that are actually even in control of the air traffic management system itself. So the humans are supervising the whole picture, and they're stepping in when the thresholds demand it. And that's really the model for agentic AI in pharma.