Part I of this article, "21 CFR Part 11: Compliance and Beyond," was published in the March 2003 issue. In this issue, Part II discusses the potential advances and changes that must be made for computer validation to remain innovative and relevant to the industry.
The objective of this article is to look at the future state of computer validation based on current events and trends in regulations, business practices and technology. This crystal-ball approach will prepare the industry for the future by establishing a current-state, best-practice foundation of computer validation principles and by improving computer validation practices.
With the increased need to conduct computer validation comes a need to improve the efficiency with which it is performed, and better efficiency requires a better infrastructure. In addition to an existing computer validation programme that may already be established (for example, an inventory list of systems, a master plan of their validation status, a prioritization of how they will be validated and procedures for conducting computer validation), the following conceptual approaches can be considered for building the future infrastructure (note that these concepts are offered simply as food for thought).
The current practice of preparing a full and thorough requirements document before system purchase may hamper the effort to purchase a system expediently. Moreover, most new system acquisitions today involve COTS software, which typically means there is an existing client base and, perhaps, some familiarity with the product. The screening requirements concept uses a high-level requirements document to select a vendor, enabling a purchase order to be issued and the system acquisition process to begin. Once the vendor is selected, the strategy is to work with the vendor to acquire or develop the specifications document. A screening requirements process can only be considered for purchasing a system that meets all of the following criteria:
As with other types of software systems, a risk factor is involved in using this screening requirements concept. Those risks may vary from system to system, project to project and company to company. From a computer validation perspective, the main risk to consider is whether the concept can still meet the basic objective of computer validation: documented evidence providing a high degree of assurance that the system reliably and consistently does what it is designed to do. When this concept is applied and the above criteria are met, that objective can still be achieved. The risk arising from this approach can be mitigated by ensuring that the system is qualified to meet business needs (for example, through testing); hence, incorporating and meeting business needs should be part of validating the system.
This section is not intended to introduce a concept regarding how a vendor audit should be conducted, but rather to introduce the concept of determining whether a vendor audit is needed at all (and, if so, the type and level of the audit). Having such a process defines a consistent approach for evaluating vendors and expedites the decision-making process.
The following are some of the questions that can be considered for this process:
Other factors can also be considered and a weighting factor can be applied to each of those factors. The totalled result can be used to determine whether a vendor audit should be conducted. Having this type of process in place will expedite the process of computer validation by providing guidance as to which vendors must be audited.
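As an illustration, the following Python sketch shows how such a weighted scoring scheme might be implemented. The factors, weights and threshold are hypothetical examples for discussion, not recommended values.

```python
# Illustrative sketch only: factors, weights and threshold are hypothetical.
# Each factor is scored 1 (low concern) to 5 (high concern).
AUDIT_FACTORS = {
    "gxp_impact":      0.30,  # does the system touch GMP-critical data?
    "custom_code":     0.25,  # degree of customization versus pure COTS
    "vendor_history":  0.20,  # prior audit findings, recalls, complaints
    "install_base":    0.15,  # smaller client base -> less field proving
    "cert_available":  0.10,  # lack of third party certification
}

AUDIT_THRESHOLD = 3.0  # weighted score above this triggers an on-site audit

def audit_decision(scores: dict[str, int]) -> tuple[float, str]:
    """Return the weighted score and a recommended audit level."""
    total = sum(AUDIT_FACTORS[f] * scores[f] for f in AUDIT_FACTORS)
    if total >= AUDIT_THRESHOLD:
        return total, "on-site vendor audit"
    if total >= AUDIT_THRESHOLD - 1:
        return total, "postal/questionnaire audit"
    return total, "no audit; rely on documentation review"

# Example: a mostly-COTS LIMS from a vendor with a large install base.
score, action = audit_decision({
    "gxp_impact": 4, "custom_code": 2, "vendor_history": 1,
    "install_base": 2, "cert_available": 3,
})
print(f"weighted score {score:.2f} -> {action}")
```

The value of such a sketch lies less in the arithmetic than in forcing the factors and their relative weights to be predefined and applied consistently across vendors.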
In addition to vendor audits for inspecting product quality, alternatives such as third party certification may be evaluated. The introduction by the US Food and Drug Administration (FDA) of a systems approach to inspection may facilitate the use of third party certification to evaluate software quality without performing a vendor audit. Under such schemes, product quality is certified through audits performed by independent, external quality assurance agencies known as third party certification bodies.39 These bodies use technically qualified auditors who have been specially trained and subjected to professional selection. Having this type of process in place would expedite computer validation by potentially avoiding long discussions regarding product quality; instead, the audit can concentrate on the product's capability to meet user needs.
It should be noted that FDA does not currently recognize any third party certifications, including ISO 9001 certification. The industry must work with FDA to provide it with a level of trust in such a scheme before it can be relied on to mitigate the validation workload. In the meantime, the PDA vendor audit effort (Technical Report 32) and the establishment of a vendor audit package repository centre39 should be considered as a means of satisfying the supplier audit requirement.
To improve the efficiency of decision making and information availability, the need for systems integration (for example, between chromatographic systems and LIMS, or between LIMS and MRP) will increase and become more prominent in the future.
To anticipate this need, consider developing a corporate data dictionary standard (for example, establishing standard definitions for terms such as sample, item code and unit of measurement, and creating a naming convention for products or data elements that will be shared between computer systems). Building the data dictionary can begin whenever a new system is deployed, or from existing systems by collecting data dictionary items from them. The collected items can then be compiled and cleaned (that is, redundancies eliminated and a primary data item preference selected) to provide a controlled version of the data dictionary. When another system is considered for deployment or acquisition, the dictionary can be used to assess whether its data elements already exist or whether new elements must be introduced to the library. The data dictionary can also support requirements development and aid system or vendor selection; for example, it can help determine the complexity of integrating or interfacing new and existing systems.
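As a simple illustration, the following Python sketch models such a dictionary. The name-to-definition model, synonym mapping and conflict handling are illustrative assumptions; a real dictionary would also carry data types, units and ownership.

```python
from __future__ import annotations

class DataDictionary:
    """Minimal sketch of a corporate data dictionary (assumed design)."""

    def __init__(self) -> None:
        self.entries: dict[str, str] = {}   # canonical term -> definition
        self.synonyms: dict[str, str] = {}  # system-local name -> canonical term

    def add(self, term: str, definition: str) -> None:
        key = term.strip().lower()
        if key in self.entries and self.entries[key] != definition:
            # Conflict: flag for the "cleaning" step described above, where
            # a primary data item preference is selected.
            raise ValueError(f"conflicting definition for '{term}'")
        self.entries[key] = definition

    def map_synonym(self, local_name: str, canonical: str) -> None:
        self.synonyms[local_name.strip().lower()] = canonical.strip().lower()

    def lookup(self, name: str) -> str | None:
        key = name.strip().lower()
        key = self.synonyms.get(key, key)
        return self.entries.get(key)

# Example: assessing a candidate system's data elements against the dictionary.
dd = DataDictionary()
dd.add("sample", "A portion of material taken for testing")
dd.add("unit of measurement", "Unit code per corporate standard")
dd.map_synonym("uom", "unit of measurement")

for element in ("sample", "uom", "batch genealogy"):
    hit = dd.lookup(element)
    print(element, "->", hit if hit else "NEW: must be introduced to the library")
```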
Currently, almost all instruments and equipment being purchased contain a microprocessor, even those as simple as a pH meter. A simple microprocessor-based system does not always require the same validation (or qualification) effort as a more complex system. Hence, a simple conceptual process for determining the level of validation (or qualification) effort is needed. This concept is based on the premise that the required effort can be categorized as calibration, qualification or validation. For example, laboratory instruments or equipment that meet all of the following criteria can be categorized as needing calibration only:
After calibration, qualification must also be performed if the following factors are required to operate or use the equipment or instrument:
For an instrument or equipment that does not fall into these categories (that is, calibration or calibration + qualification), validation is required. The level of validation can then be divided based on whether the system is custom developed or COTS. For a custom system, a vendor audit, design specifications, programming standards and a source code walk-through and review may have to be considered and included as part of the validation activities. For COTS packages, much of this work will already have been done by the vendor (and verified through an audit). The criteria described above offer a conceptual depiction of the types of issues to consider when determining an appropriate level of validation activity.
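The following sketch expresses this decision tree in Python. The Boolean criteria flags are illustrative stand-ins for the criteria lists discussed above, not the article's actual criteria.

```python
# Hedged sketch of the calibration / qualification / validation decision;
# the flags below are hypothetical stand-ins for the real criteria lists.
def validation_level(has_configurable_functions: bool,
                     stores_or_processes_records: bool,
                     custom_developed: bool) -> str:
    """Return the minimum level of effort for an instrument or system."""
    if not has_configurable_functions and not stores_or_processes_records:
        return "calibration only"
    if not stores_or_processes_records:
        return "calibration + qualification"
    if custom_developed:
        # Custom code: vendor audit, design specifications, programming
        # standards and a source code walk-through join the effort.
        return "full validation (custom system)"
    return "full validation (COTS; leverage vendor work verified by audit)"

print(validation_level(False, False, False))  # e.g., a simple pH meter
print(validation_level(True, True, False))    # e.g., a COTS chromatography system
```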
During validation, consider analysing and selecting data and test sets that were used for the testing. The data and test sets will be useful for future testing activities that can take advantage of regression analysis testing. The data and test sets should be able to form the baseline of the system's current operational state and be used for ongoing support activities. These data and test sets should be meaningful in testing the important system functions. Hence, if there is a need to verify an important system function, this regression analysis can save time by eliminating the need to develop new data sets for testing. For example, regression testing might be considered for verifying the availability and operation of certain functions during an upgrade that should not affect the operation of those functions. This approach is even more desirable in light of the fact that regulators typically want to see the system pass the same challenges after a change as it did originally. Besides upgrades, patches or fixes (whether for computer hardware, the operating system, application program or third party programs), the regression analysis test can also be considered as part of a disaster recovery programme, for integration with other systems, for periodic review of the system's validation status and for troubleshooting activities.
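The following sketch illustrates one way a regression baseline might be captured at validation time and re-checked after an upgrade or patch. The file name and the run_assay() placeholder are hypothetical, standing in for a real system interface.

```python
# Sketch of a reusable regression baseline, assuming results serialize to JSON.
import json
from pathlib import Path

BASELINE = Path("baseline_results.json")  # hypothetical baseline store

def run_assay(test_set: dict) -> dict:
    # Placeholder for invoking the validated system function under test.
    return {"potency": 99.2, "status": "pass"}

def capture_baseline(test_sets: dict) -> None:
    """Run once at validation time to freeze the current operational state."""
    BASELINE.write_text(json.dumps(
        {name: run_assay(ts) for name, ts in test_sets.items()}, indent=2))

def regression_check(test_sets: dict) -> list:
    """Re-run after an upgrade or patch; report functions whose output drifted."""
    expected = json.loads(BASELINE.read_text())
    return [name for name, ts in test_sets.items()
            if run_assay(ts) != expected.get(name)]

test_sets = {"potency_assay": {"sample_id": "QC-001"}}
capture_baseline(test_sets)
print("drifted functions:", regression_check(test_sets))
```

Because the baseline is frozen alongside the test sets, the same challenges the system passed originally can be replayed verbatim after a change, which is exactly what regulators typically want to see.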
Change control is a validation activity that has traditionally been resource intensive. Assuming that each change control request takes 2 h (in actuality, each request may take longer) and that there are three changes per system per year, 100 systems means that change control will require 600 resource hours per year. Hence, a process to improve the efficiency of change control should be considered.
The concept considered here is statistical sampling to review the change process for systems undergoing identical change. The sampling should be based on sound statistical technique or currently acceptable regulatory practice (for example, √n + 1, as in the sampling methods for raw materials, might be applicable to identical computer systems undergoing exactly the same change). It should be noted that this approach is not universally applicable, and its suitability should be considered for each situation. For example, it might suit a client–server system with several identical clients (hardware, software, application, environment), but not systems traditionally validated on an individual basis (for example, tablet presses). If this type of approach is selected, two types of tests could be suggested as a minimum for the client–server example above:
An additional aspect to consider when applying statistical sampling is the need to predefine the acceptability of similar systems. If the systems are not identical, then consideration must be given to what is an acceptable delta for the differences between those similar systems. This approach will be more effective if there are more similar systems, making standardization of systems a key factor in operating the business. Some of the deltas that one can consider may include the differences in software such as operating systems, third party tools, application programs, versions, patches and fixes, and the deltas in hardware, equipment, instruments or other peripherals that are components of the system. Great care must be taken in justifying an acceptable delta.
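As an illustration, the following Python sketch draws a √n + 1 sample from a pool of identical client systems undergoing the same change. Whether this rule is statistically defensible for a given situation must still be justified (see the note on professional statistical support below).

```python
# Sketch of the sqrt(n) + 1 sampling rule for identical systems undergoing
# an identical change; suitability must be confirmed case by case.
import math
import random

def sample_systems(system_ids: list, seed: int = 0) -> list:
    """Select ceil(sqrt(n)) + 1 systems for post-change verification."""
    n = len(system_ids)
    if n == 0:
        return []
    k = min(n, math.isqrt(n - 1) + 2)  # isqrt(n-1) + 1 == ceil(sqrt(n))
    return random.Random(seed).sample(system_ids, k)

clients = [f"client-{i:02d}" for i in range(1, 26)]  # 25 identical clients
print(sample_systems(clients))                        # sqrt(25) + 1 = 6 systems
```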
Other approaches can be used to improve the change control process (for example, defining a clear boundary between the different levels of documentation required for GMP and non-GMP hybrid systems). Running regression analysis tests when a non-GMP function is changed could also be considered - regression tests may take only a couple of minutes rather than the 2 h in the example given at the beginning of this section. Another consideration is the availability of a configuration management tool to aid a faster determination process when evaluating the effect of a change. The configuration management tool can maintain the traceability of the validation documents. Easy traceability (for example, requirements to specifications, specifications to test sets, and operating procedures to training requirements) can have a significant effect on the effort to implement change. When a requirement or function is changed, the effect can be traced to the specifications, test sets and operating procedures that may require updating. Knowing the effect allows a better evaluation of the acceptability of the change and a better estimate of the resources required to implement it.
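A minimal sketch of this traceability idea follows, assuming simple one-to-many links; the document identifiers are hypothetical, and a real configuration management tool would persist and version these relationships.

```python
# Hypothetical traceability links: requirement -> downstream artefacts.
TRACE = {
    "REQ-012 audit trail": ["SPEC-040", "TEST-105", "SOP-22", "TRAIN-07"],
    "REQ-013 e-signature": ["SPEC-041", "TEST-106", "SOP-23"],
}

def impact_of_change(requirement: str) -> list:
    """List documents that may need updating when a requirement changes."""
    return TRACE.get(requirement, [])

# A change to the audit trail requirement flags four documents for review,
# which also supports the resource estimate for implementing the change.
print(impact_of_change("REQ-012 audit trail"))
```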
Finally, statistical sampling can also be considered as part of a testing strategy for projects implementing or deploying multiple systems that are the same or very similar (that is, within an acceptable delta). A similar approach, sometimes referred to as matrix validation, is used in the context of validating manufacturing equipment and processes.
When statistical sampling is used, it is recommended that professional statistical support is used rather than relying on ad hoc advice. It is vital that statistical techniques are used appropriately.
The increased use of systems means an increase in a business' operational dependency on a system's performance and availability. Preparing for a system's unavailability means planning how to recover the system and also how to continue the business operation while the system is unavailable. Some factors to consider for this type of plan may include the following:
Whether it is a periodic review, audit, validation evaluation or gap analysis, the basic objective of all these activities is the same: to evaluate the validation status and find the areas where validation (and possibly regulatory compliance) can be improved. With the increased number of systems, this activity may require substantial resources. Hence, a process for determining the level of this activity based on the system's risk is worth considering.
The approach for the levels of review can generally be categorized into the following:
These levels of review can then be applied according to the system's risk. Various factors can be considered in assessing the system's risk. The risk factors are mostly related to the degree of regulatory exposure and the importance of the system's ability to support business operations. The following are just some of the risk factors one can consider:
Weighting factors can then be applied to those various types of risk, and the calculated level of risk can be correlated to the level of review that must be performed. This type of review approach enables resources to be focussed and allocated accordingly. In addition, by assigning a unique colour or number to each review finding on the basis of the level of review, this periodic review approach can also be used as a management tool (for example, assigning red to findings indicating that a document is not available, and yellow to findings indicating that an expected section of a document is not available). The compiled results can then be placed in a table, whereby systems with the highest number of red flags have the lowest degree of compliance and may need their findings addressed immediately.
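The following sketch illustrates one possible form of the risk-weighted review level and the red/yellow flag tally. All weights, thresholds and finding categories are hypothetical examples, not prescribed values.

```python
# Illustrative sketch: weights, thresholds and categories are hypothetical.
RISK_WEIGHTS = {"regulatory_exposure": 0.5, "business_criticality": 0.3,
                "system_complexity": 0.2}

def review_level(risk_scores: dict) -> str:
    """Map a weighted risk score (each factor scored 1-5) to a review level."""
    score = sum(RISK_WEIGHTS[k] * v for k, v in risk_scores.items())
    if score >= 4:
        return "full audit"
    if score >= 2.5:
        return "documentation review"
    return "checklist-based periodic review"

def flag_tally(findings: list) -> dict:
    """RED = document missing; YELLOW = expected section missing."""
    tally = {"RED": 0, "YELLOW": 0}
    for finding in findings:
        tally["RED" if finding == "document missing" else "YELLOW"] += 1
    return tally

print(review_level({"regulatory_exposure": 5, "business_criticality": 4,
                    "system_complexity": 2}))
print(flag_tally(["document missing", "section missing", "document missing"]))
```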
As more and more systems are deployed, the number of systems being retired also increases. Hence, it is important to establish a process for system retirement before the need arises. The main consideration is the accessibility and disposition of data from the system being retired: it must be established whether the data can also be retired or must remain available for future access. On the basis of the data disposition, one approach is to classify the retirement process as "full retirement" or "semi-retirement." Full retirement means every part of the system is retired, including the data, software, hardware and equipment. Semi-retirement means only part of the system is retired; data will typically be kept available for current or future operation. The semi-retirement process is then based on the alternatives available for keeping the data accessible. Consideration points when determining these alternatives may depend upon the portability of the data, cost and system capability (risk analysis factors), including the following:
The above items are just some of the factors to be considered when retiring a system. Without doubt, Part 11 regulatory requirements add complexity to the data migration issues (for example, having the need to maintain a link between the electronic record and the electronic signature).
Leveraging a validation document from existing validations is a typical approach one uses when expediting the preparation of the validation documents. This leveraging process is usually performed by searching a library of available documents, selecting the documents that can be used and then customizing them to conform to the system being validated. In this process, it is this customization phase that usually takes the most resources. Therefore, the process for customizing the documents is an area requiring improvement. One approach to consider is modularizing the validation documents into components and objects, which can then be selected and compiled into the desired and appropriate validation documents. Some of the factors to consider in modularizing the validation documents are
Once a strategy or plan for modularizing the validation document library is established, additional improvements can also be made (for example, creating a validation document compiler to allow a 'cut-and-paste' or 'click-and-drag' mechanism to aid in the creation of the documents). This standardization practice has the added benefit of simplifying the task of reviewing validation documentation and can have a significant, positive effect on the review cycle time.
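As an illustration of the compiler idea, the following sketch assembles a validation document from a modular library. The module names, contents and template placeholders are hypothetical examples of how pre-approved components might be selected and compiled.

```python
# Hypothetical library of pre-approved, reusable document modules.
MODULE_LIBRARY = {
    "purpose":        "1. Purpose\nThis protocol qualifies {system}.",
    "scope_lims":     "2. Scope\nApplies to the {system} LIMS installation.",
    "iq_generic":     "3. Installation Qualification\nVerify installation per specification.",
    "oq_audit_trail": "4. Operational Qualification\nChallenge the audit trail function.",
}

def compile_document(system: str, module_keys: list) -> str:
    """Assemble a validation document from selected, pre-approved modules."""
    sections = [MODULE_LIBRARY[k].format(system=system) for k in module_keys]
    return "\n\n".join(sections)

print(compile_document("LIMS-PROD", ["purpose", "scope_lims", "iq_generic"]))
```

Because every compiled document is built from the same reviewed modules, the reviewer's task reduces to checking module selection and system-specific details, which is the cycle-time benefit noted above.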
The previous concepts cover several areas of a system development life cycle: acquisition, vendor audit, requirements, validation approach, change control, regression testing, periodic review and system retirement. This section deals specifically with the implementation of 21 CFR Part 11. Part 11 is typically implemented by assessing the system, developing a remedial action plan that identifies potential solutions, budgeting the plan and executing it. However, additional points are worth consideration:
It is not the objective of this article to cover all aspects of Part 11 comprehensively, but merely to provide some examples that clarify the objective of this section: to consider the potential issues when implementing the regulation and the need to address those issues in a cohesive manner.
Increasing the efficiency of validation activities is a key success factor for the future of computer validation. Many other methods and concepts for increasing efficiency are available, too numerous to include in this article. For example, consolidating the IQ, OQ and PQ into one document can expedite the validation exercise by decreasing the number of signatures needed for review and approval. Another example is developing a checklist for reviewing and accepting vendor-provided validation documents. In some cases, automated test tools may also help expedite the completion of validation, although they must be considered carefully because they introduce significant complexity to test scripting. Another success factor is to stay abreast of developments in regulations, technology and computer validation by joining professional organizations such as GAMP and PDA.
In view of the other potential concepts and approaches, it is hoped that the concepts offered in this article help decrease the need for future "smoke-jumping" (firefighting) approaches to meeting computer validation needs. Open discussion and anonymous questions can be posted on the message section of www.ComputerValidation.com.
1. www.fda.gov/cder/guidance/105-115.htm
2. www.fda.gov/cder/present/phrma5-2000/lillie/index.htm
3. www.fda.gov/cber/gdlns/einddft.pdf
4. www.fda.gov/ora/compliance_ref/bimo/ffinalcct.htm
5. www.fda.gov/cder/guidance/iche3.pdf
6. www.fda.gov/cder/guidance/2353fnl.pdf
7. www.fda.gov/cder/guidance/3223fnl.htm
8. www.fda.gov/cber/gdlns/ebla.pdf
9. www.fda.gov/ora/compliance_ref/cpg/cpggenl/cpg160-850.htm
10. www.fda.gov/ora/compliance_ref/Part11/dockets_index.htm
11. www.fda.gov/cder/guidance/105-115.htm#SEC
12. www.fda.gov/cber/gdlns/ichclinical.pdf
13. www.fda.gov/oc/leveraging/default.htm
14. www.fda.gov/oc/speeches/2000/scienceforum.html
15. www.contracts.ogc.doc.gov/cld/regs/65FR25508.html and http://ec.fed.gov/gpedoc.htm
16. www.fda.gov/ora/inspect_ref/igs/foodcomp.html
17. www.fda.gov/foi/warning.htm
18. www.fda.gov/foi/electrr.htm
19. www.fda.gov/cdrh/ode/guidance/585.html
20. www.fda.gov/ora/compliance_ref/bimo/ffinalcct.htm
21. www.fda.gov/cdrh/comp/guidance/938.html
22. www.fda.gov/foi/warning_letters/m5056n.pdf and www.fda.gov/foi/warning_letters/m5057n.pdf
23. www.fda.gov/cder/warn/cyber/cyber2000.htm
24."New Drugs Bringing Questions and Recalls. Side Effects Kill Thousands of Patients Every Year. Poor Monitoring, Speedy Approvals and Aggressive Advertising are Blamed," Philadelphia Inquirer (Philadelphia, Pennsylvania, USA, 7 January 2001).
25. www.oscargruss.com/?/healthtech.nsf/vwSpecialReportsWeb and www.informedinvestors.com/iif_forums/client_capsule.cfm?CompanyID=519
26. www.ncbi.nlm.nih.gov/BLAST/
27. www.ncbi.nlm.nih.gov/Entrez/
28. www.delsyspharma.com/techmain.html
29. www.domainpharma.com/dom5/products/default.htm