
Susan Schniepp discusses how AI and digital twins speed up pharma batch release and detect data integrity issues while maintaining human oversight.
Susan Schniepp, Regulatory Compliance Associates Inc., shared her insights on the transformative role of emerging technologies in drug manufacturing. Having participated in the planning committees for PDA Week 2026, Schniepp is at the forefront of introducing new technologies like digital twins and artificial intelligence to the industry.
Check out the 3-part video interview with Schniepp:
Part 1: Digital Twins and the Future of Pharma Validation
Part 2: Identifying Data Integrity Hotspots Using AI Technology
Schniepp: My understanding of a digital twin is that it is a system designed to mimic exactly what you have put into live operation. It runs alongside your actual production line. Potential issues arise when the two systems, the physical and the digital, begin to diverge from one another. If they are no longer in sync, it triggers a specific response from the team, such as a new validation, a technical correction, or another corrective activity to bring them back into alignment.
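The divergence check described above can be sketched as a simple comparison between the physical line's readings and the twin's predictions. This is a minimal illustration, not any vendor's implementation; the parameter names and the tolerance are assumptions for demonstration.

```python
# Hypothetical sketch of digital-twin drift monitoring: compare readings
# from the physical line against the twin's predictions and flag any
# parameter whose relative divergence exceeds a tolerance.

def check_alignment(physical, twin, tolerance=0.05):
    """Return parameters where physical and twin readings diverge
    beyond the allowed relative tolerance (illustrative threshold)."""
    drifted = []
    for name, actual in physical.items():
        predicted = twin.get(name)
        if predicted is None:
            continue  # no twin prediction for this parameter
        if predicted == 0:
            if actual != 0:
                drifted.append(name)
            continue
        if abs(actual - predicted) / abs(predicted) > tolerance:
            drifted.append(name)
    return drifted

# Illustrative readings; units and names are made up for the example.
physical = {"temp_c": 37.2, "pressure_kpa": 101.0, "ph": 6.4}
twin     = {"temp_c": 37.0, "pressure_kpa": 101.2, "ph": 7.0}

drift = check_alignment(physical, twin)
if drift:
    # Out-of-sync parameters would trigger the corrective response
    # described above: revalidation, technical correction, etc.
    print("Divergence detected:", drift)  # → Divergence detected: ['ph']
```

In practice the response to a flagged parameter is procedural (a validation or correction activity), not automatic; the code only shows where the trigger comes from.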
Data integrity is a major concern as we move away from traditional paper systems or middle electronic systems. In those older models, humans handled every document and performed every approval manually. Now, we use digital twins to identify hotspots, specific areas where there is a high potential for data integrity violations based on how documents are being handled. Once the AI identifies these risks, the human staff can shore up the system to prevent a violation from ever actually occurring.
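One way to picture hotspot identification is as risk scoring over document-handling events. The signals and weights below are assumptions for illustration; a real system would derive them from historical audit findings rather than a hand-written table.

```python
# Illustrative sketch of flagging data-integrity "hotspots" from
# document-handling logs. Signal names and weights are hypothetical.

RISK_SIGNALS = {
    "manual_transcription": 3,  # hand-copied values invite errors
    "shared_login": 4,          # actions cannot be attributed to a person
    "backdated_entry": 5,       # record is no longer contemporaneous
    "single_reviewer": 2,       # no independent check
}

def hotspot_score(events):
    """Sum the risk weights of the signals observed in a document flow."""
    return sum(RISK_SIGNALS.get(e, 0) for e in events)

def find_hotspots(flows, threshold=5):
    """Return document flows whose accumulated risk exceeds the threshold,
    so staff can shore up the process before a violation occurs."""
    return [name for name, events in flows.items()
            if hotspot_score(events) > threshold]

flows = {
    "batch_record": ["manual_transcription", "single_reviewer"],  # score 5
    "cleaning_log": ["shared_login", "backdated_entry"],          # score 9
}
print(find_hotspots(flows))  # → ['cleaning_log']
```

The output is a shortlist for the human staff to act on, which matches the division of labor described above: the machine surfaces the risk, people fix the process.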
A middle electronic system is like Documentum, where you use electronics to move files around, but a human still has to physically look at and approve everything. With AI and machine learning, we are allowing the machine to do much more. The system itself collects the data and assures its correctness. The human's role shifts from doing the primary approval of every detail to reviewing the data the digital twin or system has already aggregated.
This is a significant shift for the industry. AI acts like one centralized brain that compiles information from various sources that a person would normally have to hunt for manually. The AI creates a dossier for the batch, noting that parameters were met and identifying any deviations. It can then essentially tell the human, "All of these parameters were met, and there were no deviations. You can release this batch." This speeds up the process significantly because the human professional is reviewing a summarized dossier rather than starting the data collection from scratch.
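The dossier idea can be sketched as aggregating each measured result against its specification range and summarizing the outcome for the reviewer. The `Dossier` class, parameter names, and spec ranges here are hypothetical, invented only to show the shape of the summary.

```python
# Minimal sketch of an AI layer compiling batch data into a release
# dossier. All names and spec ranges are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Dossier:
    batch_id: str
    checks: dict = field(default_factory=dict)      # parameter -> pass/fail
    deviations: list = field(default_factory=list)  # human-readable notes

    @property
    def releasable(self):
        # Recommendation only; the human makes the final release call.
        return not self.deviations

def compile_dossier(batch_id, results, specs):
    """Compare each measured result against its (low, high) spec range."""
    dossier = Dossier(batch_id)
    for name, (low, high) in specs.items():
        value = results[name]
        ok = low <= value <= high
        dossier.checks[name] = ok
        if not ok:
            dossier.deviations.append(f"{name}={value} outside [{low}, {high}]")
    return dossier

specs   = {"assay_pct": (95.0, 105.0), "moisture_pct": (0.0, 2.0)}
results = {"assay_pct": 99.1, "moisture_pct": 1.2}

d = compile_dossier("LOT-001", results, specs)
print(d.releasable)  # → True (all parameters met, no deviations)
```

The reviewer's job then becomes checking the summarized dossier, not hunting down each raw data point, which is where the speed-up in the passage above comes from.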
We treat a conflict between the human's opinion and the machine's data as a deviation. Because our industry relies on Corrective and Preventive Action systems, this discrepancy prompts a formal investigation to determine why these two events are conflicting. The traditional model is to investigate the root cause. Since the technology is still so new, I believe the human should take precedence. The investigation should focus on why the digital twin conflicts with the human, though we may occasionally find that the human was actually the one who was wrong.
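That policy, disagreement opens a deviation, human takes precedence, can be expressed in a few lines. The record fields below are illustrative, not from any real Corrective and Preventive Action system.

```python
# Hedged sketch of treating a human/machine disagreement as a deviation
# that opens a CAPA-style investigation. Field names are hypothetical.

def reconcile(batch_id, human_release, machine_release):
    """Log the release decision; if the human's call and the machine's
    recommendation conflict, flag a deviation for formal investigation.
    The human's decision takes precedence for now."""
    record = {
        "batch_id": batch_id,
        "final_decision": human_release,  # human remains the decision-maker
        "deviation": human_release != machine_release,
    }
    if record["deviation"]:
        record["capa_action"] = ("Investigate why the digital twin's "
                                 "recommendation conflicts with the reviewer")
    return record

# Machine recommends release, human withholds the batch: a deviation is
# opened, and the batch stays on hold per the human's call.
print(reconcile("LOT-002", human_release=False, machine_release=True))
```

Note the asymmetry: the investigation targets the conflict itself, and while the human's decision governs the batch, the root-cause analysis may still conclude the human was the one in error.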
The industry is currently struggling with exactly where to place the human in the loop to be most efficient. My concern is that at some point, the machine will absorb so much data and learn so much faster than a person can that it will become more capable than the human. I worry that the human will be out of the loop eventually. However, for now, the human must remain the final decision-maker, absorbing the data provided by the AI to make the final call on whether a batch is safe for release.