Using the four-phased method to evaluate QRM activities can support continual improvement and help ensure that regulatory requirements are met.
Quality risk management (QRM) is being performed more frequently in the pharmaceutical industry due to expanded requirements by health authorities such as FDA, the European Medicines Agency (EMA), and members of the Pharmaceutical Inspection Co-operation Scheme (PIC/S). Because of this, it is reasonable to anticipate that stakeholders and leaders within a pharmaceutical organization would want to examine indicators of how well the risk management process is progressing, ways to improve the execution of risk assessments, and the impact that a particular risk assessment had (or did not have) on the affected product, process, or business unit. Evaluation of a QRM exercise is not a stated requirement in either International Council for Harmonisation (ICH) Q9(R1) or the ICH Q10 Pharmaceutical Quality System document. Evaluating a QRM activity would, however, be an effective way to help demonstrate management’s commitment to quality and to the organization’s quality system, as stated in ICH Q10:
“Management should:
Evaluating QRM activities can also aid decision makers in achieving the objectives outlined in ICH Q9(R1) (2).
“Decision makers should:
While several methods have been described for evaluating the components or maturity of a risk management program within an organization (3–5), a structured method to evaluate a single risk management activity (including assessment, evaluation, reduction, communication, and review/monitoring) has yet to be published. To be useful, an evaluation approach should be objective, simple to use, not require a significant increase in workload, address the life cycle of a risk management exercise, and provide reliable feedback that could be used to improve the risk management process. Such an evaluation approach could be based on an evaluation method well known to those involved in training: the Kirkpatrick Model.
For those involved in organizational training, learning, and development, two important activities are assessment and evaluation. While assessment and evaluation can be done in various ways, a well-accepted schema was developed in the 1950s by a then PhD student, Donald Kirkpatrick (6). The “Kirkpatrick Model” as it is known today consists of four levels (Table I) that can provide information concerning what the participant(s) has learned, the instructors, the course itself, how participants are applying the learnings in their work activities, and the effect that the learning has had (or not) on the organization. The most recent version of the model is quite similar to the original but has been updated to clarify the terms and its application. Unifying these four levels is the concept of “return on expectations” that connects the learning event with the goals (i.e., expectations) that both learners and leadership have of the event.
The Kirkpatrick Model can be used in different ways, specifically in formative (making improvements) and summative (making decisions) applications. When used formatively, the results help improve the process of learning (e.g., obtaining feedback on how easily [or not] a user was able to move through an e-learning course and then making changes so the interface is more intuitive and easier to use). In a summative application, decision makers look at outcomes (e.g., determining whether the learner has met a defined standard and is qualified, or deciding whether the learning solution or training course should be released for wide-scale use). Table II presents the Kirkpatrick levels and the questions asked for formative and summative applications related to training.
As mentioned earlier, practitioners of QRM and leadership have an interest in determining if the outcomes of QRM are successful in their organizations, as well as in finding ways of improving the execution of a risk assessment and ensuring that subjectivity is minimized. Simple modifications to the Kirkpatrick Model, and their application to the risk management life cycle, are discussed as follows.
While other models and tools exist to support business process and software development and improvement such as the Capability Maturity Model Integration (CMMI) (7), the approach presented here is narrower in scope, simpler to use, can be adapted easily to the specific needs of an organization, and does not require external resources for support.
The initiation and risk assessment phases typically involve a team of subject matter experts that is often led by a trained facilitator. The results of the assessment are evaluated to determine which risks meet the criteria of “significant” risks requiring reduction through the application of controls and mitigations. Monitoring and review are incorporated to determine if the controls are effective and if any of the facts or assumptions used in the risk assessment have changed.
QRM practitioners and leaders who oversee the risk management process and the execution of specific risk assessments might ask a number of formative and summative questions concerning the QRM activities. These can be placed into four categories that are similar to the four levels of the Kirkpatrick Model and are shown in Table III.
The example questions above and elsewhere in this paper are based on what a risk management practitioner might ask; however, leadership at a particular organization might have other success criteria or indicators that they believe to be important. For example, they might expect each team member to participate in 90% of the risk team meetings or the report to be reviewed and approved within a given time period. These expectations can easily be included in the evaluation process to track and trend metrics.
When evaluating learning outcomes as well as risk management outcomes, one is looking for evidence or indirect measures, not necessarily “proof” or direct measures. Obtaining qualitative and quantitative data from a variety of methods and from different perspectives can provide a multi-dimensional understanding of how well the methods, processes, and activities are meeting the desired goals. Examples of methods that can be used for the four levels are described as follows. As will be shown, some of the activities are already included in the risk management life cycle but are not used in evaluation of a specific risk assessment to make improvements.
Level one, reaction. Surveys can be taken at multiple time points during an extended risk assessment (for large projects, risk assessments sometimes last for months), providing the facilitator with information to make team meetings more effective. Reaction surveys can also be done at the end of the project, with the results applied to the next risk assessment performed to aid lessons learned and knowledge management. Level one evaluations can be either paper-based or e-surveys using a Likert scale of 1 to 5, with 5 being the best, asking questions about the assessment process, such as:
The questionnaire could also ask for specific examples of good practice and suggestions or ideas of where improvements could be made. As mentioned above, other stakeholders may have additional expectations to include in the evaluation.
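As an illustration, the following is a minimal sketch (in Python) of how level-one reaction data might be aggregated by question and time point so the facilitator can see trends across an extended risk assessment. The survey items, time points, and scores shown are hypothetical placeholders; an organization would substitute its own questions and survey tooling.

```python
# Minimal sketch of aggregating level-one (reaction) survey results.
# The question wording, time points, and 1-5 Likert scores are assumptions
# for illustration only; substitute the items your organization actually asks.
from statistics import mean
from collections import defaultdict

# Each response: (time point, question, score on a 1-5 Likert scale, 5 = best)
responses = [
    ("month 1", "Meetings were well facilitated", 4),
    ("month 1", "My expertise was used effectively", 3),
    ("month 2", "Meetings were well facilitated", 5),
    ("month 2", "My expertise was used effectively", 4),
]

# Average each question per time point so the facilitator can spot trends
averages = defaultdict(list)
for time_point, question, score in responses:
    averages[(time_point, question)].append(score)

for (time_point, question), scores in sorted(averages.items()):
    print(f"{time_point} | {question}: {mean(scores):.1f}")
```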
Level two, goal completion. This evaluation would be completed at the end of the risk assessment and when risk reduction controls have been prescribed. When the risk assessment phase has been completed, an experienced risk assessor or expert could examine the spreadsheets or reports from the team to determine if the risk question was properly written and used, and if the team logically and consistently identified the hazard (the source of harm), what caused that hazard to be expressed (the hazardous situation), and the impact of the harm. The expert could also review the key-word scales that were used to determine if they were applied consistently. Subjectivity can be minimized through the use of well-defined scales, and reviewing their application is crucial to ensuring that heuristics and bias did not skew the outcome of the assessment.
After the risk evaluation and risk reduction phases, an expert could look at the causes of the harm and the controls to determine if they are properly aligned, that is, if the controls are appropriate and matched to the causes. This phase is also an important step to ensure that a robust set of controls have been identified to address the root cause of the hazard and that the team adequately brainstormed a combination of controls, where necessary, to aid in continuous process improvement.
In a summative mode, if the documentation met the requirements, it could be approved; in a formative mode, ways to improve the process, through a redesign of the forms used to collect knowledge obtained through the risk assessment or by incorporating additional guidance in a job aid, could be noted. In practice, the quality unit typically gives what amounts to a summative evaluation when it officially approves the risk documents.
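To illustrate the scale-consistency review described above, the following is a minimal sketch that assumes a hypothetical three-by-three severity/occurrence rating matrix and a small risk register. It simply flags any entry whose recorded rating does not match what the defined scales would produce, giving the expert reviewer objective candidates to discuss with the team; the identifiers, ratings, and matrix are illustrative only.

```python
# Minimal sketch of one objective check an expert reviewer might automate:
# confirming that recorded risk ratings match the defined key-word scales.
# The 3x3 severity/occurrence matrix and the register entries are hypothetical.
RATING_MATRIX = {
    ("high", "high"): "high",     ("high", "medium"): "high",     ("high", "low"): "medium",
    ("medium", "high"): "high",   ("medium", "medium"): "medium", ("medium", "low"): "low",
    ("low", "high"): "medium",    ("low", "medium"): "low",       ("low", "low"): "low",
}

risk_register = [
    {"id": "R-001", "severity": "high", "occurrence": "low", "recorded_rating": "medium"},
    {"id": "R-002", "severity": "medium", "occurrence": "medium", "recorded_rating": "high"},
]

for entry in risk_register:
    expected = RATING_MATRIX[(entry["severity"], entry["occurrence"])]
    if entry["recorded_rating"] != expected:
        # Inconsistent entries are flagged for the reviewer to discuss with the team
        print(f"{entry['id']}: recorded {entry['recorded_rating']}, scale gives {expected}")
```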
Level three, transfer/implementation. This level of evaluation considers if the results of the risk management exercise, specifically the risk controls that have been identified to reduce the risks, have been successfully implemented. This is often done through the corrective action/preventive action (CAPA) system which is integrated with a change control process.
This evaluation could consider the time between identification of the controls and their full implementation. For those controls that would take considerable time (e.g., designing, constructing, installing, and qualifying a piece of complicated equipment), interim controls may be appropriate to reduce risks to some degree and maintain a state of control. Qualification activities that show the method or equipment works as intended, that training was provided and applied by personnel, and that relevant procedures were revised and approved could also be indicators (i.e., evidence) of successful implementation.
In practice, these activities are often considered when confirming a CAPA has been successfully completed through what is known as an “effectiveness check.”
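One way to support the level-three timeliness review is a simple calculation against CAPA or change-control records, as sketched below. The CAPA identifiers, dates, and 90-day closure expectation are hypothetical; the actual data and thresholds would come from the organization’s own systems and procedures.

```python
# Minimal sketch of a level-three timeliness check: how long each risk control
# took from identification to implementation. The CAPA records and the 90-day
# threshold are hypothetical placeholders for an organization's own data.
from datetime import date

controls = [
    {"capa": "CAPA-101", "identified": date(2023, 1, 15), "implemented": date(2023, 3, 1)},
    {"capa": "CAPA-102", "identified": date(2023, 1, 15), "implemented": None},  # still open
]

THRESHOLD_DAYS = 90  # assumed internal expectation for closure

for c in controls:
    if c["implemented"] is None:
        age = (date.today() - c["identified"]).days
        print(f"{c['capa']}: open for {age} days - consider interim controls")
    else:
        elapsed = (c["implemented"] - c["identified"]).days
        status = "within" if elapsed <= THRESHOLD_DAYS else "beyond"
        print(f"{c['capa']}: implemented in {elapsed} days ({status} expectation)")
```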
Level four, results/impact. This is the most challenging level when considering a QRM exercise because, if the risk management activity was successful, one would not see a deviation, failure, or other unwanted event. Specifying the goal(s) (or stakeholder expectations) of the risk management exercise at the outset can make this step easier. In the ICH Q9 model, this is done during the risk review/risk monitoring phase, an activity that is often overlooked or given short shrift (8).
If there has been a historical rate of unwanted events, say a specific root cause resulting in a deviation or product recall, that rate could be compared to the rate after the new controls were put in place. Another indicator would be whether new hazards or failure modes were found after the risk assessment was completed, which could suggest that the assessment team did not have sufficient experience with the process under review at the onset of the assessment. If the results of the risk management exercise could be applied to other sites, products, or platforms, or if the risk assessment contributed to process understanding, these could be positive indicators of success and benefit.
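Such a before-and-after comparison can be reduced to a simple rate calculation, as sketched below. The deviation counts and batch totals are illustrative only; real figures would come from the deviation or recall records for the periods before and after the controls were implemented.

```python
# Minimal sketch of a level-four (results/impact) indicator: comparing the rate
# of a specific unwanted event before and after the new controls. Counts and
# batch totals are illustrative; real data would come from the deviation system.
deviations_before, batches_before = 12, 400   # historical period
deviations_after, batches_after = 3, 350      # period after controls implemented

rate_before = deviations_before / batches_before
rate_after = deviations_after / batches_after

print(f"Deviation rate before controls: {rate_before:.1%} per batch")
print(f"Deviation rate after controls:  {rate_after:.1%} per batch")
print(f"Relative reduction: {1 - rate_after / rate_before:.0%}")
```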
For learning professionals, it is rare to apply all four levels to all learning solutions or training courses. Level one is most widely used, and to a lesser extent, level two. Level three, transfer/implementation is sometimes determined not by learning personnel but by auditors and inspectors who discover if the training in a task or procedure is being properly used back in the workplace. In a QRM setting, the amount of evaluation could be based on the importance of the risk management activity being executed.
For decades, the Kirkpatrick Model has helped learning professionals make formative and summative decisions. This model can be adapted and applied to QRM programs and meets the criteria of being simple to use (i.e., answering basic questions); not requiring a significant increase in workload (i.e., some of the actions are already taking place during review/approval and monitoring); addressing the life cycle of a risk management exercise (i.e., done at multiple points); and providing information that could improve the risk management process (i.e., through the use of formative questions).
As regulators increase their requirements for the use of QRM in the pharma and biopharma industry and expect firms to be more proactive and less reactive, it is important that the industry execute these assessments as efficiently and effectively as possible. Using the four-phased method described here can contribute to continual improvement and to meeting the expectations of both the organization and the regulators.
Appreciation to Hal Baseman, Dr. Umit Kartoglu, and Dr. Kelly Waldron for their valuable comments on early drafts of this article.
James Vesper, PhD, MPH (corresponding author) is director, Learning Solutions at ValSource, Jvesper@valsource.com. Amanda McFarland, MSc is senior QRM and Microbiology consultant at ValSource.
Editor’s Note: This article was peer reviewed by a member of Pharmaceutical Technology’s Editorial Advisory Board.
Submitted: March 30, 2023.
Accepted: May 2, 2023.
Pharmaceutical Technology
Vol. 47, No. 6
June 2023
Pages: 26–29
When referring to this article, please cite it as Vesper, J. and McFarland, A. A Four-Phased Approach for Evaluating a Quality Risk Management Activity. Pharmaceutical Technology 2023, 47 (6), 26–29.