When validating automated systems from third-party providers, using the V model and a failure modes, effects and criticality analysis (FMECA) early in the process can help.
Pharmaceutical manufacturers often depend on third parties for the supply of automated systems and other manufacturing equipment. Manufacturers must demonstrate to regulatory agencies that their production processes, including any third-party elements and systems, meet the necessary standards. Consequently, any third-party system built for purpose must comply with documented procedures embodied in, for example, GAMP 5 and ASTM E2500 (1,2).
The approach to system validation defined in the GAMP guidelines can be represented by a model originally created for software development: the V model. The V model shows the relationship between each phase of the development life cycle and its associated phase of testing. It provides a well-structured method in which each phase is implemented on the basis of the detailed documentation produced in the previous phase. Using this model and its associated documentation to control risk during system validation is the main focus of this article.
Figure 1 illustrates the V model. The left-hand arm represents the succession of specification documents. This begins with the user requirement specifications (URS) against which a functional requirement specification can be generated. In turn, a set of implementation specifications can be defined that embody how a specific solution will be engineered.
Figure 1: The V model.
The system provider is usually responsible for producing and maintaining these documents, including the URS, which will have been reviewed and agreed by both parties. Against each level on the left-hand side of the V, there is a corresponding test specification on the right arm. As the elements of the solution are implemented and brought together, the tests demonstrate that the intended functions are achieved and that the end-user requirements are satisfied.
The testing process includes installation qualification (IQ), to demonstrate that the installed system is complete, correctly configured and has the right services supplied to it; and operational qualification (OQ), which ensures that the installed system functions safely and to specification as a subsystem.
Occasionally, operating conditions may vary from those under which the system was qualified; ambient temperature, for example. In tropical or semitropical locations, the indoor temperature can soar late in the day once the air conditioning switches off, allowing embedded computers, for example, to overheat and causing the system to fail suddenly. Cases such as this show that the system provider is best placed to define the IQ and OQ test protocols because these require detailed knowledge of system behaviour and service requirements. For a substantive system, the provider should supply a building and services specification that enables the site to be properly prepared before installation. By contrast, the performance qualification (PQ), which assesses compliance with the user's process requirements, must be the responsibility of the user.
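To make the left-to-right pairing of the V concrete, the sketch below assumes the common GAMP-style mapping of specification levels to qualification stages (URS to PQ, functional specification to OQ, implementation specification to IQ). The exact pairing used on a given project may differ, and the helper function is purely illustrative.

```python
# Minimal sketch of V-model traceability; the pairing below is an assumption
# based on common GAMP practice, not a prescription from this article.
V_MODEL_PAIRS = {
    "User requirement specification (URS)": "Performance qualification (PQ)",
    "Functional requirement specification": "Operational qualification (OQ)",
    "Implementation/design specification": "Installation qualification (IQ)",
}

def unverified_levels(executed_tests: set) -> list:
    """Return specification levels whose corresponding qualification has not yet been run."""
    return [spec for spec, test in V_MODEL_PAIRS.items() if test not in executed_tests]

if __name__ == "__main__":
    completed = {"Installation qualification (IQ)"}
    for spec in unverified_levels(completed):
        print(f"Still to verify: {spec} -> {V_MODEL_PAIRS[spec]}")
```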
Where the third-party element is a standard product, the purpose of the validation is to confirm that the system meets process needs, and that the supplied version is installed and working as intended. In these instances, the documentation can be standardized; for example, the OQ test protocol could be a standard document for the product and the IQ could be a checklist generated from a generic template. The manufacturer may also request copies of generic or type approval documentation.
Where the automated system is built for purpose, however, it must be designed accordingly, and the supply project must adhere to the necessary standards and generate all of the required documentation, including system-specific specifications and test protocols. A system provider that is already familiar with these standards can add a great deal of value in helping to draw up these documents.
Engineering companies are well acquainted with using a failure modes, effects and criticality analysis (FMECA) to control risks in both the project and the system design (3). Although there are standardized techniques for quantifying and prioritising risks once they have been identified, there is no guidebook for finding all the relevant risk factors in a particular process or project. To be most effective, the analysis requires lateral thinking: the ability to imagine potential error conditions and how they might be mitigated.
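As an illustration of one such standardized technique, the short sketch below computes a risk priority number (RPN, the product of severity, occurrence and detectability scores) for each identified failure mode and sorts the register so the highest-risk items are considered first. The scoring scales and the example entries are hypothetical, drawn loosely from the failure scenarios mentioned in this article.

```python
from dataclasses import dataclass

# Illustrative sketch of risk prioritisation using a risk priority number
# (RPN = severity x occurrence x detection, each scored 1-10). The scales and
# entries are hypothetical examples, not values from a real FMECA.

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 = negligible .. 10 = catastrophic
    occurrence: int  # 1 = very unlikely .. 10 = frequent
    detection: int   # 1 = certain to be detected .. 10 = undetectable

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

register = [
    FailureMode("Embedded computer overheats when air conditioning switches off", 8, 4, 6),
    FailureMode("Similar components exchanged in error during assembly", 9, 3, 7),
]

# Highest RPN first: these items need design mitigation or extra test coverage.
for fm in sorted(register, key=lambda f: f.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.description}")
```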
In contrast to this creative engineering mindset, the validation process tends to be reductive, and those involved often expect the analysis simply to identify additional functions that require more testing. This is far from exciting for the engineering team and highlights the tension between the need for a formalized structure and the role of creativity (4).
It is not uncommon to find that testing and compliance are addressed late in the project; as a result, documentation may lag behind the supply of the system and delay the point at which the system can be handed over for formal validation and productive use. Although this retrospective approach can still meet all regulatory and quality goals, it can miss the intended purpose of minimizing risk through design because, even if valuable improvements are identified, it will be too late in the process to incorporate them into the system.
Additionally, by focusing on confirming tight statements of functionality, the risk analysis and testing strategy do not encourage consideration of how a system might fail in practice, which might otherwise have influenced the original design. For example, one function specified for an automated end-of-line test platform for a medical device was to confirm that two similar components in the device had not been exchanged in error during assembly. Although the test function was feasible and the testing machine was reliable, a better solution, such as a design modification to one of the components so that the two could not be confused during assembly, might have been identified if a wider risk examination had been conducted earlier.
Risk analysis should be conducted when the URS is first drawn up and should be ongoing. Users may be aware of errors that can occur in an existing process that the URS can address. However, some historic failure modes become less important or irrelevant in the automated version whereas new factors can become critical. At this stage, a risk analysis can help identify new issues, especially if the review includes people unfamiliar with the existing manual process.
The effective early use of the FMECA can be illustrated by the following example, which again concerns a medical device. A spring force produced by the device was the subject of a proposed static test system. In developing the test platform, the FMECA revealed three additional potential failure modes in the assembly that a static test would not detect. The test system was therefore developed into a machine that could perform a dynamic test programme to confirm the absence of all three failure modes. If the problem had been identified only after the original test platform had been finalized, a substantial redesign would have been required: design documents would have had to be updated, testing specifications revisited, and decisions made about any retrospective retesting needed after the change. At best, such a change would have added cost and delay to the supply of the test system; at worst, the launch of the device itself could have been severely delayed.
As a project develops, known risk factors should be addressed and resolved. Focusing only on the factors identified early on, however, can become another reductive exercise in which the items in the FMECA are ticked off one by one, typically by identifying a test condition to be incorporated into one or other test protocol. This approach fails to recognize that new risk factors can emerge at any point in the life cycle of a project and may change the nature of previously identified risks.
In the example considered above, in which additional functionality was added to the scope of an end-of-line testing machine, a project review revealed that an earlier decision to effect a movement pneumatically could introduce error into the measurement system now that the additional test functions had been added. The actuator was replaced with a servo motor to eliminate the risk. Had it gone unidentified, the error could have led to incorrect results from the testing process, potentially compromising product safety.
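One way to keep the analysis from becoming a one-pass checklist is to treat the FMECA as a living register that links each item to its mitigation (for example, a condition in an IQ, OQ or PQ protocol, or a design change) and accepts new entries at every review. The sketch below is a hypothetical illustration of that idea; its field names and helper function are assumptions, not part of any standard.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of an FMECA register maintained throughout the project
# life cycle: items stay open until a mitigation (e.g. a test condition in a
# qualification protocol, or a design change) is recorded against them.

@dataclass
class RiskItem:
    description: str
    raised_at: str                    # e.g. "URS review", "scope-change review"
    mitigation: Optional[str] = None  # e.g. "OQ test 4.2" or "servo motor replaces pneumatic actuator"

def review(register: List[RiskItem], new_items: List[RiskItem]) -> List[RiskItem]:
    """Add newly identified risks, then report anything still unmitigated."""
    register.extend(new_items)
    open_items = [r for r in register if r.mitigation is None]
    for r in open_items:
        print(f"OPEN (raised at {r.raised_at}): {r.description}")
    return open_items

register: List[RiskItem] = []
# A risk identified late, as with the pneumatic actuator example above.
review(register, [RiskItem("Pneumatic actuation disturbs the measurement system", "scope-change review")])
```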
Risk analysis may also sometimes identify unexpected benefits. For example, when developing a design specification for a machine incorporating a precision microbalance, it was known from the URS that the balance had to be easily removable by the user and that there had to be a way for the operator to see the raw balance output to check the device calibration.
A straightforward implementation of these requirements would have yielded a perfectly adequate system, but in the course of analysing device operation it was realized that a simple weighing function for the user would be more beneficial. This function avoided the need to remove and replace the balance and could be implemented using a subset of the user interface already required, eliminating the need for a special maintenance screen.
The URS was amended to include this feature, which, in retrospect, seems obvious but could easily have been overlooked by focusing too closely on the original URS.
Conducting a risk analysis early on can ensure the identified risks are addressed in the URS, and can reveal unexpected ways to enhance the project. However, it is also important to be aware that new risk factors may emerge at any time and affect previously identified problems.
As well as conducting risk analysis, there are also other ways to make the validation process easier. We offer the following advice:
Peter Woods is Programme Manager at GB Innomech Ltd (UK).
David Beale is Technical Director at GB Innomech Ltd (UK).
1. International Society for Pharmaceutical Engineering (ISPE), GAMP 5, February 2008. www.ispe.org
2. ASTM E2500-07, Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment. www.astm.org
3. Weibull.com, "Failure Modes and Effects Analysis (FMEA) and Failure Modes, Effects and Criticality Analysis (FMECA)". www.weibull.com
4. J.S. Brown and P. Duguid, MIT Sloan Management Review, 42(4), 93 (2001).