      The problem with Plan-Do-Study-Act cycles


          Abstract

Introduction

Quality improvement (QI) methods have been introduced to healthcare to support the delivery of care that is safe, timely, effective, efficient, equitable and cost effective. Of the many QI tools and methods, the Plan-Do-Study-Act (PDSA) cycle is one of the few that focuses on the crux of change: the translation of ideas and intentions into action. As such, the PDSA cycle and the concept of iterative tests of change are central to many QI approaches, including the model for improvement,1 lean,2 six sigma3 and total quality management.4 PDSA provides a structured experimental learning approach to testing changes.

Concerns have previously been raised regarding the fidelity of application of the PDSA method, which may undermine learning efforts,5 the complexity of its use in practice5 6 and the appropriateness of the PDSA method for addressing the significant challenges of healthcare improvement.7 This article presents our reflections on the full potential of using PDSA in healthcare, but in doing so we explore the inherent complexity and multiple challenges of executing PDSA well. Ultimately, we argue that the problem with PDSA is the oversimplification of the method as it has been translated into healthcare, and the failure to invest in a rigorous and tailored application of the approach.

The value of PDSA in healthcare improvement

The purpose of the PDSA method lies in learning as quickly as possible whether an intervention works in a particular setting, and in making adjustments accordingly to increase the chances of delivering and sustaining the desired improvement. In contrast to controlled trials, PDSAs allow new learning to be built into this experimental process. If problems are identified with the original plan, the theory can be revised to build on this learning, and a subsequent experiment conducted to see whether the revision has resolved the problem and whether any further problems need to be addressed. In the complex social systems of healthcare, this flexibility and adaptability are important features of PDSA that support the adaptation of interventions to work in local settings.

A successful PDSA process does not equal a successful QI project or programme. The intended output of PDSA is learning and informed action. Successful application of the PDSA methodology may enable users to achieve their QI goals more efficiently or to reach QI goals they would otherwise not have achieved. But it is also successful if it saves wasted effort by revealing QI goals that cannot be achieved under realistic constraints, or if it identifies new problems to tackle instead of the originally identified issue. A well-conducted PDSA promises learning. But it does not, and cannot, promise that users will achieve their desired outcomes.

As PDSA has been translated into healthcare from industrial settings, an emphasis has been placed on rapid small-scale tests of change, often on one, three and then five patients in ‘ramps’ of increasing scale, with responsibility delegated to frontline staff and improvement or quality managers. This pragmatic approach has been embraced and has been seen as providing a new freedom for healthcare staff to lead change and improvement in local care settings. However, the process of change rarely progresses in simple linear ramps.6 8 The conduct of PDSAs can reveal other related issues that need to be addressed in order to achieve the improvement goal. Such issues may relate to minor changes to current practices or processes of care, but they can also reveal larger cultural or organisational issues that need to be addressed and overcome.

Recent evaluations have reported on the failure of the PDSA method to help frontline staff address the multiple improvement challenges they faced as the scale of investigation and the range of issues they needed to address increased.7 9 A report evaluating the Safer Clinical Systems programme in the UK identified ‘the need for clarity about when improvement approaches based on PDSA cycles are appropriate and when they are not’, viewing some challenges as ‘too big and hairy’ for the PDSA method and beyond the scope of small-scale tests of change run by local clinical teams.7

We argue that any improvement situation, no matter how big and hairy, is conducive to application of the PDSA method. The four stages of PDSA mirror the scientific experimental method of formulating a hypothesis, collecting data to test this hypothesis, analysing and interpreting the results, and making inferences to iterate the hypothesis.5 10 Whether improvement initiatives have been planned at national level to support standardisation of care, or planned over a cup of coffee to solve a minor local problem, we believe there will always be a role for PDSA. In moving from planning to implementing a change in practice, PDSA provides a structure for experimental learning: for knowing whether a change has worked or not, and for learning and acting upon any new information as a result. But it is not a magic bullet. Increasingly complex problems require increasingly sophisticated application of the PDSA method, and this is where we believe the problem with the PDSA method lies.

Its simplicity belies its sophistication

One of the main narratives surrounding the use of PDSA in healthcare is that it is easy and can be applied in practice by anyone. At one level this is true, and the simplicity of the PDSA method and its applicability to many different situations can be viewed as one of its main strengths. However, this simplicity also creates some of the greatest challenges to using PDSA successfully. Users need to understand how to adapt the use of PDSA to address different problems and different stages in the lifecycle of each improvement project. This requires an extensive repertoire of skills and knowledge to be used in conjunction with the basic PDSA model.

One of the main problems encountered in using PDSA is the misperception that it can be used as a standalone method. PDSA needs to be used as part of a suite of QI methods, the exact nature of which may be influenced by the broader methodological approach that is being followed (eg, model for improvement, lean). An important role of the wider methodological approach is to conduct investigations prior to starting the use of PDSA, to ensure that the problem is correctly understood and framed. Investigations can include process mapping, failure mode and effects analysis, cause and effect analysis, stakeholder engagement and interviews, data analysis and review of existing evidence.

A second misperception is that PDSA is limited to small-scale tests of change on one, three and five patients. PDSA is an extremely flexible method that can be adapted to support the scale-up of interventions and used in conjunction with monitoring activities to support sustainability. But this flexibility gives rise to a number of key dimensions that require careful consideration. For instance, the scope and scale of change, the amount of preparation prior to use, the rigour of the evaluation, time, expertise, management support and funding must be carefully aligned, and these needs must often be rebalanced over the project's lifecycle. If managed well, these adjustments enable the use of PDSA to adapt to new learning and support the design and conduct of ‘tests of change’ as they increase in scale, and often complexity, to achieve the desired improvement goal.

Using PDSA as an iterative design framework to help solve ‘big hairy problems’ or ‘big hairy audacious goals’11 is, therefore, entirely appropriate. In fact, developing solutions to large-scale ‘wicked problems’12 may require ‘an iterative explorative and generative’13 approach of the sort PDSA provides, in which ‘knowledge is built through designing’.13 The key is to understand that this framework will need to be implemented (and resourced) very differently for large and complex problems than for smaller and more ‘tame’ problems. One size does not fit all. While frontline staff with little training or support may successfully address some quality problems, the complexity of many problems demands greater organisational support, with direct involvement of senior managers to facilitate adequate planning. Projects in which frontline staff must fend for themselves also run the risk of insufficient use of theory and existing evidence to develop the intervention, and of a suboptimal evaluation.

Quick (not dirty) tests of change

In healthcare, PDSA training often overemphasises the conceptual simplicity of the framework and underemphasises the different ways in which the method can be adapted to solve increasingly complex problems. This frequently leads people to leap into PDSA with insufficient prior investigation and framing of the problem, to delegate management of the process to frontline staff who have little influence over the broader systemic concerns that need to be addressed, and to provide these staff with little support to overcome the obstacles and barriers they face. The resources, skills and expertise required to apply PDSA in the real world are often significantly underestimated, leading to projects that are destined to fail. This has led to the impression that PDSA cycles involve ‘quick and dirty’ tests of change.

In the rush to empower healthcare staff, there is a danger that the scientific rigour of the PDSA method is compromised. A systematic review5 revealed that the core principles of PDSA are often not executed in practice, with ‘substantial variability with which they are designed, executed and reported in the healthcare literature’.6 A failure to properly execute PDSAs can undermine learning efforts: ‘if data collection does not occur frequently enough, if iterative cycles are few, and if system-level changes are not apparent as a result of these cycles, the improvement work is less likely to succeed’.6 While its scientific principles differ from those of controlled trials, rigour in the application of PDSA is still required to maximise the learning obtained from tests of change. Beyond fidelity to the guiding principles of PDSA, each stage of the cycle also needs to be conducted well. But the frenetic culture endemic in healthcare organisations can make it difficult to achieve sustained engagement in the deliberative processes of PDSA.
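The iterative logic described above can also be summarised schematically. The sketch below is a minimal, hypothetical illustration in Python: the class, function names, callback signatures and stopping rule are editorial assumptions for illustration only, not anything prescribed by the PDSA literature. It shows a ‘ramp’ of cycles in which each Study step compares the observed result with a predefined success criterion and each Act step decides whether to adopt, abandon or adapt the change idea.

```python
from dataclasses import dataclass, field


# Hypothetical sketch of a ramp of PDSA cycles; the names, data structure and
# stopping rule are illustrative assumptions, not a prescribed implementation.
@dataclass
class Plan:
    change_idea: str                   # the intervention to be tested
    prediction: str                    # the hypothesis: what we expect to happen
    success_criterion: float           # predefined definition of success (eg, a target rate)
    measures: list = field(default_factory=list)  # data to collect during 'do'


def run_pdsa_ramp(plan, do, study, act, max_cycles=5):
    """Run up to max_cycles iterative tests of change.

    do(plan)                     -- carry out the test; return observations (including the unexpected)
    study(plan, observations)    -- compare results with the prediction; return (result, learning)
    act(plan, result, learning)  -- return ('adopt' | 'abandon' | 'adapt', next_plan)
    """
    learning = None
    for cycle in range(1, max_cycles + 1):
        observations = do(plan)                        # Do
        result, learning = study(plan, observations)   # Study
        decision, plan = act(plan, result, learning)   # Act
        if decision in ("adopt", "abandon"):
            return decision, cycle, learning
    return "review goals and resourcing", max_cycles, learning
```

Nothing in such a loop guarantees improvement: as argued above, each of the do, study and act steps stands in for work that must be planned, resourced and executed well for the cycle to yield learning.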
Just get on with it

While ‘planning paralysis’ can be an issue in healthcare organisations, the more common problem is a serious underinvestment in the planning phase. The pervasive cultural compulsion to ‘just get on with it’14 leads many teams to move too quickly from ‘plan’ to ‘do’. The consequences of skipping this up-front work can include wasted PDSA cycles or projects that fail altogether. Table 1 describes some of the key failure modes for the planning and preplanning (ie, investigation and problem-framing) steps of the PDSA process.

Table 1 Key failure modes for the investigation/problem framing and plan steps

Investigation and problem framing: define the problem; determine its causes/contributing factors; identify stakeholders; set the criteria for success.
• Poor definition of the problem and its causes/contributing factors.1 5 21–25 Potential consequence: time, money and goodwill may be wasted trying to solve the wrong problem, or to solve it in the wrong way.
• Failure to clearly define the criteria for success and how performance will be measured.5 22 26 Potential consequence: a poor match between the design of the intervention and its intended impact; inability to assess success during the ‘study’ phase.
• Failure to identify key stakeholders.22 27 Potential consequence: important knowledge may be left out of the planning process.

Plan: design an intervention and data collection plan; specify how the intervention will be implemented (Do), evaluated (Study) and sustained (if successful).
• No theory of change/programme theory connecting the intervention to its intended outcomes.28–31 Potential consequence: poorly targeted interventions that may be inefficient or may fail altogether; poor buy-in due to a perceived lack of legitimacy.
• Planned intervention, implementation plan and study protocol that are not in proportion to one another and to the problem to be solved.22 32 33 Potential consequence: underinvestment, leading to projects that do not achieve their goals or that cannot be proven to have achieved their goals; overinvestment, leading to wasted resources.
• Designing a data collection and analysis plan that is incapable of providing the required answers.26 Potential consequence: impossible to know whether the intervention was effective; excessive PDSA cycles required; aggravation among frontline staff that the administrative burden of data collection was wasted.
• Not consulting key stakeholders during the planning stage.21 27 34 35 Potential consequence: proceeding with an intervention that is predictably doomed to fail; disengagement among frontline staff.
• Not planning for the ‘who, what, where, when, and how’ of implementation (the ‘do’ phase).5 22 36 Potential consequence: poor understanding of resource requirements and cost-effectiveness; poor execution of the ‘do’ and ‘study’ phases.
• Adopting weak interventions (eg, administrative controls, such as training and policies) without considering more robust options.37–41 Potential consequence: interventions that do not achieve their goals or do not sustain them.
• Not assessing cultural and structural barriers/facilitators related to the intervention.14 21 42–44 Potential consequence: ‘fish out of water’ interventions put in place without attention to the broader changes required to make them successful; systemic issues not tackled and only superficial change attempts made.
• Failure to plan for how the intervention will be sustained in practice, if successful.16 7 38 45 46 Potential consequence: performance reverts to previous standards; staff are frustrated with the unsuccessful change effort and disengage from future attempts.
• Failure to consider the intervention's failure modes and potential side effects (positive and negative).21 45 47 Potential consequence: interventions that are designed to fail or that create more problems than they solve; failure to select the most cost-effective solutions.

PDSA, Plan-Do-Study-Act.

Why do planning failures present such a challenge to the successful use of PDSA? It is much more difficult to correctly execute and learn from a plan that has not been well thought out. And even perfect execution cannot ensure success if the plan itself is wrong. The iterative nature of PDSA enables course corrections, but this feature of the approach is far more effective if there was a clear and reasoned course in the first place. Many of the barriers to success in the do, study and act phases can be predicted and mitigated through more effective planning.

Overcoming the prevailing culture of ‘Do, Do, Do’

The structured, reflective practice required for PDSA runs counter to the main mode of operation in healthcare organisations, ‘doing’, with the time required for planning and reflection regarded as a luxury rather than a necessity. As a result, teams often get ‘stuck’ in the ‘do’ phase, failing to progress to the ‘study’ phase. While these problems may reflect poor planning, they may also be caused by problems beyond the control of the project team, such as the challenge of creating time to conduct tests of change, staff turnover and changing or competing priorities. To stop at the ‘do’ phase is to throw away the core contribution of PDSA: its support for iterative design as a way of making improvement interventions more successful.15 Another important but frequently overlooked part of the ‘do’ phase is inductive learning: noticing the unexpected and feeding these observations into the study phase.

Poor planning or conduct of the ‘do’ phase can in turn significantly undermine the ‘study’ phase. In some cases, improvement teams appear to bypass the ‘study’ phase altogether, moving directly from ‘do’ to ‘act’.5 In other cases, the ‘study’ phase may collect insufficient data, or may not collect the right type of data, to answer questions about the intervention's effectiveness and acceptability. For instance, quantitative data can assess the impact of a given change, but without qualitative feedback the reasons for the results, and staff attitudes and ideas about what could be improved, will remain unknown. It is also possible that teams draw the wrong conclusions from the data they have collected, or fail to notice unanticipated consequences, which may lead to incorrect actions.

Failure to take appropriate action based on what was learned from the ‘study’ phase and previous PDSA cycles is another common concern.5 Inappropriate actions may include adopting or scaling up an intervention that has not proven effective and acceptable,16 or ending a project that has proved successful or is on track to do so. An important part of the act phase consists of reviewing and revising the theory of how the intervention is intended to achieve its desired impact. This iterative refinement of theory is a key component of the PDSA methodology that is often overlooked in practice.

Effectively managing the PDSA process is about more than individual PDSA steps or cycles. Connecting PDSA cycles together is a messier and far more complicated endeavour than most of the literature on the approach suggests.6 Progression across cycles is seldom linear, and double-loop learning17 may lead to revised goals as well as revised interventions; managing this emergent learning and coordinating PDSA activities over time requires significant oversight.
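The distinction between single-loop and double-loop learning can be made concrete with a small, hypothetical sketch in the same vein as before: single-loop learning keeps the goal fixed and revises the intervention, whereas double-loop learning also questions whether the goal, or the problem framing itself, is right. The function name, dictionary keys and numeric values below are illustrative assumptions only.

```python
# Hypothetical illustration of single-loop versus double-loop learning in the Act step;
# the keys, labels and threshold logic are editorial assumptions, not a prescribed method.
def act(result, success_criterion, study_findings, goal):
    """Decide the next step after the Study phase.

    result            -- measured performance from the Study phase
    success_criterion -- the predefined definition of success
    study_findings    -- dict of lessons learned (including unanticipated ones)
    goal              -- the current improvement goal
    """
    if result >= success_criterion:
        return "adopt, scale up or sustain", goal

    # Double loop: question the goal (or the problem framing) itself
    if study_findings.get("goal_unrealistic") or study_findings.get("wrong_problem"):
        return "reframe the problem and revise the goal, then re-plan", study_findings.get("revised_goal", goal)

    # Single loop: keep the goal, revise the intervention and run another cycle
    return "revise the intervention and begin a new PDSA cycle", goal


# Example: a study that found the original goal was set unrealistically high
decision, new_goal = act(
    result=0.62,
    success_criterion=0.95,
    study_findings={"goal_unrealistic": True, "revised_goal": 0.80},
    goal=0.95,
)
print(decision, new_goal)
```

In practice this decision is a deliberative team judgement rather than a threshold test; the sketch only illustrates where the two learning loops diverge.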
Table 2 describes some of the key failure modes for the execution of the do, study and act steps of the PDSA process.

Table 2 Key failure modes for executing the do, study and act steps

Do: implement the plan (including both the QI intervention and the data collection plan).
• Failure to implement the QI intervention as intended.27 36 Potential consequence: impossible to learn whether the planned QI intervention works as expected; wasted effort; disillusionment among staff involved with intervention design.
• Failure to collect the data as intended.27 36 Potential consequence: undercuts the Study phase; may be difficult or impossible to tell whether the intervention worked as expected; difficult or impossible to learn about the effectiveness of the original data collection plan.
• Failure to capture unanticipated learning.17 22 27 Potential consequence: missed learning opportunities (especially for qualitative learning about how and why the intervention did or did not work); project failure; unnecessary PDSA cycles.
• Failure to abandon the Do phase despite manifest failure or severe negative side effects.24 Potential consequence: wasted effort; excessive disruption; adverse outcomes from side effects.

Study: analyse data and compare results with the definition of success; distil and communicate what has been learned from the formal data analysis and from unanticipated learning.
• Failure to conduct a study,5 or inappropriate failure to follow the study plan. Potential consequence: no or limited opportunity to learn whether the intervention works as intended; potential for biased and misleading results.
• Failure to communicate what has been learned.27 46 Potential consequence: loss of stakeholder engagement; reinventing the same broken wheel in other QI projects; loss of institutional knowledge if there is turnover among project leaders.

Act: based on what has been learned, either revisit the investigation and problem framing phase, begin a new PDSA cycle at the Plan phase, fully implement and sustain the intervention, or end the project without investing further effort.
• Failure to engage in ‘double loop learning’17 that questions the goals of the project in light of what has been learned. Potential consequence: wasted effort continuing to work on the wrong problem, or on one that cannot realistically be solved; excessive PDSA cycles spent trying to achieve a goal that is set too high, when a more realistic goal would deliver real improvement.
• Moving too quickly from small-scale tests of change to full-scale implementation and sustainment.5 Potential consequence: failure to uncover barriers to broader use prior to implementation; project failure; disruption associated with deimplementation; wasted resources and goodwill.

PDSA, Plan-Do-Study-Act; QI, quality improvement.

The problem with PDSA: failure to invest in rigorous and tailored application

While the PDSA method is conceptually simple, simple does not mean easy. That said, PDSA is a powerful approach, and projects that make successful use of PDSA can solve specific quality problems and also help shape the culture of healthcare organisations for the better. The effort required to apply PDSA successfully therefore offers a substantial return on investment. But the resources and supportive context required for success (including funding, methodological expertise, buy-in and sustained effort)18 are often underestimated. Inadequate human resources and financial support doom many projects to fail, and also undermine organisational culture, contributing to change fatigue and disillusionment as yet another project produces no real improvement.

It is therefore crucial, at both the project level and the programmatic level, that the resource requirements for successful application of PDSA for a given project are well understood and that the process is well managed. The barriers to ensuring this type of practice in a healthcare culture of ‘just get on with it’ and ‘do, do, do’ are difficult to overcome. To be successful, the use of PDSA must be supported by a significant investment in leadership, expertise and resources for change. Academia and researchers have a potential role to play in supporting appropriately rigorous planning and study, and in understanding how to manage emergent learning while engaging diverse stakeholder groups. Working in partnership will be beneficial in supporting effective use of PDSA, and is essential to establishing genuine learning organisations.19 20


          Most cited references (13)


          Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation

          Objectives: To conduct an independent evaluation of the first phase of the Health Foundation’s Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals.
          Design: Mixed method evaluation involving five substudies, before and after design.
          Setting: NHS hospitals in the United Kingdom.
          Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals.
          Intervention: The SPI1 was a compound (multi-component) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change.
          Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration—monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items)—there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for “difference in difference” 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even when there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
          Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues nor on other measures of generic organisational strengthening.

            The promise of Lean in health care.

            An urgent need in American health care is improving quality and efficiency while controlling costs. One promising management approach implemented by some leading health care institutions is Lean, a quality improvement philosophy and set of principles originated by the Toyota Motor Company. Health care cases reveal that Lean is as applicable in complex knowledge work as it is in assembly-line manufacturing. When well executed, Lean transforms how an organization works and creates an insatiable quest for improvement. In this article, we define Lean and present 6 principles that constitute the essential dynamic of Lean management: attitude of continuous improvement, value creation, unity of purpose, respect for front-line workers, visual tracking, and flexible regimentation. Health care case studies illustrate each principle. The goal of this article is to provide a template for health care leaders to use in considering the implementation of the Lean management system or in assessing the current state of implementation in their organizations.

              Successful risk assessment may not always lead to successful risk control: A systematic literature review of risk control after root cause analysis.

              Root cause analysis is perhaps the most widely used tool in healthcare risk management, but does it actually lead to successful risk control? Are there categories of risk control that are more likely to be effective? And do healthcare risk managers have the tools they need to support the risk control process? This systematic review examines how the healthcare sector translates risk analysis to risk control action plans and examines how to do better. It suggests that the hierarchy of risk controls should inform risk control action planning and that new tools should be developed to improve the risk control process.

                Author and article information

                Journal: BMJ Quality & Safety (BMJ Qual Saf)
                Publisher: BMJ Publishing Group (BMA House, Tavistock Square, London, WC1H 9JR)
                ISSN: 2044-5415 (print); 2044-5423 (electronic)
                Published: March 2016 (first published online 23 December 2015)
                Volume 25, Issue 3, Pages 147-152
                Affiliations
                [1] NIHR CLAHRC NWL, Imperial College London, London, UK
                [2] Department of Management, University of Notre Dame, Notre Dame, Indiana, USA
                [3] Evidence-Based Health Solutions, LLC, Notre Dame, Indiana, USA
                Author notes
                [Correspondence to] Dr Julie E Reed, NIHR CLAHRC NWL, Imperial College London, Chelsea and Westminster Hospital, 369 Fulham Road, London SW10 9NH, UK; julie.reed02@imperial.ac.uk
                Author information
                ORCID: http://orcid.org/0000-0002-9974-2017
                Article
                Publisher ID: bmjqs-2015-005076
                DOI: 10.1136/bmjqs-2015-005076
                PMC: 4789701
                PMID: 26700542
                Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

                This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

                History: 23 November 2015
                Categories
                The Problem With…

                Public health
                Keywords: quality improvement, healthcare quality improvement, quality measurement
