Introduction
Quality improvement (QI) methods have been introduced to healthcare to support the
delivery of care that is safe, timely, effective, efficient, equitable and cost effective.
Of the many QI tools and methods, the Plan-Do-Study-Act (PDSA) cycle is one of the
few that focuses on the crux of change, the translation of ideas and intentions into
action. As such, the PDSA cycle and the concept of iterative tests of change are central
to many QI approaches, including the model for improvement,1 lean,2 six sigma3 and
total quality management.4
PDSA provides a structured experimental learning approach to testing changes. Concerns have previously been raised regarding the fidelity with which the PDSA method is applied, which may undermine learning efforts,5 the complexity of its use in practice5 6 and the appropriateness of the PDSA method for addressing the significant challenges of healthcare improvement.7
This article presents our reflections on the full potential of using PDSA in healthcare,
but in doing so we explore the inherent complexity and multiple challenges of executing
PDSA well. Ultimately, we argue that the problem with PDSA is the oversimplification
of the method as it has been translated into healthcare and the failure to invest
in a rigorous and tailored application of the approach.
The value of PDSA in healthcare improvement
The purpose of the PDSA method lies in learning as quickly as possible whether an intervention works in a particular setting and in making adjustments accordingly to increase the chances of delivering and sustaining the desired improvement. In contrast to controlled trials, PDSAs allow new learning to be built into this experimental process. If problems are identified with the original plan, the theory can be revised to build on this learning and a subsequent experiment conducted to see whether it has resolved the problem and to identify whether any further problems need to be addressed. In the complex social systems of healthcare, this flexibility and adaptability of PDSA are important features that support the adaptation of interventions to work in local settings.
A successful PDSA process does not equal a successful QI project or programme. The
intended output of PDSA is learning and informed action. Successful application of
the PDSA methodology may enable users to achieve their QI goals more efficiently or
to reach QI goals they would otherwise not have achieved. But it is also successful
if it saves wasted effort by revealing QI goals that cannot be achieved under realistic
constraints or if it identifies new problems to tackle instead of the originally identified
issue. A well-conducted PDSA promises learning. But it does not, and cannot, promise
that users will achieve their desired outcomes.
As PDSA has been translated into healthcare from industrial settings, an emphasis
has been placed on rapid small-scale tests of change, often on one, three and then
five patients in ‘ramps’ of increasing scale, with responsibility delegated to frontline staff and improvement or quality managers. This pragmatic approach has been embraced
and has been seen as providing a new freedom for healthcare staff to lead change and
improvement in local care settings.
However, the process of change rarely progresses in simple linear ramps.6 8 The conduct of PDSAs can reveal other related issues that need to be addressed in
order to achieve the improvement goal. Such issues may relate to minor changes to
current practices or processes of care, but can often reveal larger cultural or organisational
issues that need to be addressed and overcome.
Recent evaluations have reported on the failure of the PDSA method to help frontline
staff address the multiple improvement challenges they faced as the scale of investigation
and range of issues they needed to address increased.7 9 A report evaluating the Safer Clinical Systems programme in the UK identified ‘the
need for clarity about when improvement approaches based on PDSA cycles are appropriate
and when they are not’, viewing some challenges as ‘too big and hairy’ for the PDSA
method and beyond the scope of small-scale tests of change run by local clinical teams.7
We argue that any improvement situation, no matter how big and hairy, is conducive
to application of the PDSA method. The four stages of PDSA mirror the scientific experimental
method of formulating a hypothesis, collecting data to test this hypothesis, analysing
and interpreting the results and making inferences to iterate the hypothesis.5 10
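To make the analogy concrete, the loop below sketches PDSA as an iterated experiment in code. It is purely illustrative: the names (pdsa, measure, revise) are our own placeholders for the activities described above, not part of any QI library, and the adoption threshold is invented.

```python
import random

def pdsa(prediction: float, measure, revise, max_cycles: int = 5):
    """Illustrative sketch of PDSA as iterated hypothesis testing:
    Plan (a prediction), Do (a small test), Study (compare the result
    with the prediction) and Act (adopt the change or revise the theory)."""
    for cycle in range(1, max_cycles + 1):
        result = measure()                        # Do: run a small test of change
        print(f"cycle {cycle}: predicted {prediction:.2f}, observed {result:.2f}")
        if abs(result - prediction) < 0.05:       # Study: did the prediction hold?
            return "adopt and sustain the change" # Act: theory confirmed
        prediction = revise(prediction, result)   # Act: iterate the hypothesis
    return "reframe the problem"                  # learning, even without success

# Toy example: the theory predicts a checklist lifts compliance to 0.9.
random.seed(1)
print(pdsa(prediction=0.9,
           measure=lambda: random.uniform(0.6, 0.95),
           revise=lambda pred, obs: (pred + obs) / 2))
```

Note that even the ‘failure’ branch returns learning (a prompt to reframe the problem), mirroring the point above that a well-conducted PDSA promises learning rather than guaranteed outcomes.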
Whether improvement initiatives have been planned at national level to support standardisation
of care or planned over a cup of coffee to solve a minor local problem, we believe
there will always be a role for PDSA. In moving from planning to implementing a change
in practice, PDSA provides a structure for experimental learning to know whether a
change has worked or not, and to learn and act upon any new information as a result.
But it is not a magic bullet. Increasingly complex problems require increasingly sophisticated
application of the PDSA method, and this is where we believe the problem with the
PDSA method lies.
Its simplicity belies its sophistication
One of the main narratives surrounding the use of PDSA in healthcare is that it is
easy, and can be applied in practice by anyone. At one level this is true, and the
simplicity of the PDSA method and its applicability to many different situations can
be viewed as one of its main strengths. However, this simplicity also creates some
of the greatest challenges to using PDSA successfully. Users need to understand how
to adapt the use of PDSA to address different problems and different stages in the
lifecycle of each improvement project. This requires an extensive repertoire of skills
and knowledge to be used in conjunction with the basic PDSA model.
One of the main problems encountered in using PDSA is the misperception that it can
be used as a standalone method. PDSA needs to be used as part of a suite of QI methods,
the exact nature of which may be influenced by the broader methodological approach
that is being followed (eg, model for improvement, lean). An important role of the
wider methodological approach is to conduct investigations prior to starting the use
of PDSA to ensure that the problem is correctly understood and framed. Investigations
can include process mapping, failure mode and effects analysis, cause and effect analysis,
stakeholder engagement and interviews, data analysis and review of existing evidence.
A second misperception is that PDSA is limited to small-scale tests of change on one, three and five patients. PDSA is an extremely flexible method that can be adapted to support the scale-up of interventions and used in conjunction with monitoring activities to support sustainability. But this flexibility gives rise to a number
of key dimensions that require careful consideration. For instance, the scope and
scale of change, the amount of preparation prior to use, rigour of the evaluation,
time, expertise, management support and funding must be carefully aligned. Often these
needs must be rebalanced over the project's lifecycle. If managed well, these adjustments
enable the use of PDSA to adapt to new learning and support the design and conduct
of ‘tests of change’ as they increase in scale, and often complexity, to achieve the
desired improvement goal.
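As a rough illustration of how these dimensions might be recorded and rebalanced across a ramp of cycles, the sketch below uses a simple data structure of our own devising; the field names and values are hypothetical, not a standard template.

```python
from dataclasses import dataclass

@dataclass
class CyclePlan:
    """Hypothetical record of the dimensions to align for one test of change."""
    scale: int             # eg, number of patients in this test
    preparation_days: int  # investigation and planning effort beforehand
    evaluation_rigour: str # how formally the 'study' phase is conducted
    support: list          # expertise, management backing, funding

# An illustrative 'ramp': as scale grows, preparation, rigour and
# support are deliberately rebalanced rather than held constant.
ramp = [
    CyclePlan(1, 1, "informal observation", ["frontline team"]),
    CyclePlan(5, 5, "run chart of key measure", ["frontline team", "QI coach"]),
    CyclePlan(50, 20, "formal measurement plan",
              ["frontline team", "QI coach", "senior manager", "funding"]),
]
for c in ramp:
    print(f"scale {c.scale:>2}: {c.preparation_days} days' preparation, "
          f"{c.evaluation_rigour}, support from {', '.join(c.support)}")
```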
Using PDSA as an iterative design framework to help solve ‘big hairy problems’ or
‘big hairy audacious goals’11 is, therefore, entirely appropriate. In fact, developing
solutions to large-scale ‘wicked problems’12 may require ‘an iterative explorative
and generative’13 approach of the sort PDSA provides, in which ‘knowledge is built
through designing’.13 The key is to understand that this framework will need to be
implemented (and resourced) very differently for large and complex problems than for
smaller and more ‘tame’ problems. One size does not fit all.
While frontline staff with little training or support may successfully address some
quality problems, the complexity of many problems demands greater organisational support,
with direct involvement of senior managers to facilitate adequate planning. Projects
in which frontline staff must fend for themselves also run the risk of making insufficient use of theory and existing evidence to develop the intervention, and of producing a suboptimal evaluation.
Quick (not dirty) tests of change
In healthcare, PDSA training often overemphasises the conceptual simplicity of the
framework and underemphasises the different ways in which the method can be adapted
to solve increasingly complex problems. This frequently leads people to leap into
PDSA with insufficient prior investigation and framing of the problem, to delegate
management of the process to frontline staff who have little influence over broader
systemic concerns that need to be addressed, and to provide these staff with little
support to overcome the obstacles and barriers they face. The resources, skills and
expertise required to apply PDSA in the real world are often significantly underestimated,
leading to projects that are destined to fail.
This has led to the impression that PDSA cycles involve ‘quick and dirty’ tests of
change. In the rush to empower healthcare staff, there is a danger that the scientific
rigour of the PDSA method is frequently compromised. A systematic review5 revealed
that the core principles of PDSA are often not executed in practice, with ‘substantial
variability with which they are designed, executed and reported in the healthcare
literature’.6 A failure to properly execute PDSAs can undermine learning efforts…
‘if data collection does not occur frequently enough, if iterative cycles are few,
and if system-level changes are not apparent as a result of these cycles, the improvement
work is less likely to succeed’.6 While its scientific principles differ from those of controlled trials, rigorous application of PDSA is still required to maximise the learning obtained from tests of change.
Beyond fidelity to PDSA’s guiding principles, there is also the need to ensure that each stage of the cycle is conducted well. But the frenetic culture
endemic in healthcare organisations can make it difficult to achieve sustained engagement
in the deliberative processes of PDSA.
Just get on with it
While ‘planning paralysis’ can be an issue in healthcare organisations, the more common
problem is a serious underinvestment in the planning phase. The pervasive cultural
compulsion to ‘just get on with it’14 leads many teams to move too quickly from ‘plan’
to ‘do.’ The consequences of skipping this up-front work can include wasted PDSA cycles
or projects that fail altogether. Table 1 describes some of the key failure modes
for the planning and preplanning (ie, investigation and problem-framing) steps of
the PDSA process.
Table 1
Key failure modes for the investigation/problem framing and plan steps

Investigation and problem framing (define the problem; determine its causes/contributing factors; identify stakeholders; set the criteria for success):

- Failure mode: Poor definition of the problem and its causes/contributing factors.1 5 21–25
  Potential consequence: Time, money and goodwill may be wasted trying to solve the wrong problem or solve it in the wrong way.
- Failure mode: Failure to clearly define the criteria for success and how performance will be measured.5 22 26
  Potential consequence: A poor match between the design of the intervention and its intended impact; inability to assess success during the ‘study’ phase.
- Failure mode: Failure to identify key stakeholders.22 27
  Potential consequence: Important knowledge may be left out of the planning process.

Plan (design an intervention and data collection plan; specify how the intervention will be implemented (do), evaluated (study) and sustained (if successful)):

- Failure mode: No theory of change/programme theory connecting the intervention to its intended outcomes.28–31
  Potential consequence: Poorly targeted interventions that may be inefficient or may fail altogether; poor buy-in due to a perceived lack of legitimacy.
- Failure mode: Planned intervention, implementation plan and study protocol that are not in proportion to one another and to the problem to be solved.22 32 33
  Potential consequence: Underinvestment leading to projects that do not achieve their goals or that cannot be proven to have achieved their goals; overinvestment leading to wasted resources.
- Failure mode: Designing a data collection and analysis plan that is incapable of providing the required answers.26
  Potential consequence: Impossible to know if the intervention was effective; excessive PDSA cycles required; aggravation among frontline staff that the administrative burden of data collection was wasted.
- Failure mode: Not consulting key stakeholders during the planning stage.21 27 34 35
  Potential consequence: Proceeding with an intervention that is predictably doomed to fail; disengagement among frontline staff.
- Failure mode: Not planning for the ‘who, what, where, when, and how’ of implementation (the ‘do’ phase).5 22 36
  Potential consequence: Poor understanding of resource requirements and cost-effectiveness; poor execution of the ‘do’ and ‘study’ phases.
- Failure mode: Adopting weak interventions (eg, administrative controls, such as training and policies) without considering more robust options.37–41
  Potential consequence: Interventions that do not achieve their goals or do not sustain them.
- Failure mode: Not assessing cultural and structural barriers/facilitators related to the intervention.14 21 42–44
  Potential consequence: ‘Fish out of water’ interventions put in place without attention to the broader changes required to make them successful; systemic issues not tackled and only superficial change attempts made.
- Failure mode: Failure to plan for how the intervention will be sustained in practice, if successful.7 16 38 45 46
  Potential consequence: Performance reverts to previous standards; staff frustrated with the unsuccessful change effort disengage from future attempts.
- Failure mode: Failure to consider the intervention’s failure modes and potential side effects (positive and negative).21 45 47
  Potential consequence: Interventions that are designed to fail or that create more problems than they solve; failure to select the most cost-effective solutions.

PDSA, Plan-Do-Study-Act.
Why do planning failures present such a challenge to the successful use of PDSA? It
is much more difficult to correctly execute and learn from a plan that has not been
well thought out. And even perfect execution cannot ensure success if the plan itself is wrong.
The iterative nature of PDSA enables course corrections, but this feature of the approach is much more effective when there is a clear and reasoned course in the first place.
Many of the barriers to success in the do, study and act phases can be predicted and
mitigated through more effective planning.
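One way to guard against these failure modes is a simple pre-‘do’ review of the plan. The sketch below is our own construction, derived loosely from Table 1; the checklist keys and questions are illustrative, not a validated instrument.

```python
# Hypothetical pre-'do' checklist derived loosely from Table 1.
REQUIRED_PLAN_ELEMENTS = {
    "problem_definition": "Are the problem and its contributing causes defined?",
    "success_criteria": "Are success criteria and measures specified?",
    "stakeholders": "Have key stakeholders been identified and consulted?",
    "theory_of_change": "Does a theory connect the intervention to outcomes?",
    "data_collection_plan": "Can the planned data answer the study questions?",
    "implementation_detail": "Are the who/what/where/when/how of 'do' set out?",
    "sustainability_plan": "Is there a plan to sustain the change if it works?",
}

def review_plan(plan: dict) -> list:
    """Return the checklist questions a draft plan leaves unanswered."""
    return [q for key, q in REQUIRED_PLAN_ELEMENTS.items() if not plan.get(key)]

# Invented example of a draft plan that has skipped most of the up-front work.
draft = {"problem_definition": "High rate of missed doses on one ward",
         "success_criteria": "20% reduction in missed doses within 3 months"}
for gap in review_plan(draft):
    print("Unresolved:", gap)
```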
Overcoming the prevailing culture of ‘Do, Do, Do’
The structured, reflective practice required for PDSA runs counter to the main mode
of operation in healthcare organisations, ‘doing’, with the time required for planning
and reflection regarded as a luxury rather than a necessity. As a result, teams often
get ‘stuck’ in the ‘do’ phase, failing to progress to the ‘study’ phase. While these
problems may reflect poor planning, they may also be caused by problems beyond the
control of the project team, such as the challenges of creating time to conduct tests
of change, staff turnover and changing or competing priorities. To stop at the ‘do’
phase is to throw away the core contribution of PDSA: its support for iterative design
as a way of making improvement interventions more successful.15 Another important
but frequently overlooked part of the ‘do’ phase is inductive learning: noticing the unexpected and feeding these observations into the ‘study’ phase.
Poor planning or conduct of the ‘do’ phase in turn can significantly undermine the
‘study’ phase. In some cases, improvement teams appear to bypass the ‘study’ phase
altogether, moving directly from ‘do’ to ‘act’.5 In other cases, the ‘study’ phase
may collect insufficient data or may not collect the right type of data to answer
questions about the intervention's effectiveness and acceptability. For instance,
quantitative data can assess the impact of a given change, but without qualitative feedback, the reasons for the results, staff attitudes and ideas about what could be improved will remain unknown. It is also possible that teams draw the wrong conclusions from
the data they have collected or fail to notice unanticipated consequences, which may
lead to incorrect actions.
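By way of illustration, the ‘study’ phase of many QI projects uses run charts of a key measure over time. The sketch below implements one widely used run-chart rule (a run of six or more consecutive points on the same side of the median suggesting non-random change); it is a simplified sketch with invented data, not a complete run-chart analysis.

```python
import statistics

def shift_signal(values, run_length=6):
    """Flag a run of `run_length` or more consecutive points on one side of
    the median, a common run-chart signal of non-random change. Simplified:
    points exactly on the median are skipped rather than handled formally."""
    median = statistics.median(values)
    run, side = 0, 0
    for v in values:
        s = (v > median) - (v < median)   # +1 above, -1 below, 0 on median
        if s == 0:
            continue
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Invented weekly compliance data spanning a test of change.
weekly = [0.62, 0.58, 0.64, 0.60, 0.59, 0.63,
          0.71, 0.74, 0.72, 0.76, 0.78, 0.75]
print("Non-random shift detected:", shift_signal(weekly))
```

Even where such a quantitative signal is present, qualitative feedback is still needed to explain why the change occurred and whether staff find it acceptable.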
Failure to take appropriate action based on what was learned from the ‘study’ phase
and previous PDSA cycles is another common concern.5 Inappropriate actions may include
adopting or scaling up an intervention that has not proven effective and acceptable,16
or ending a project that has proved successful, or is on track to do so. An important
part of the act phase consists of reviewing and revising the theory of how the intervention
is intended to achieve its desired impact. This iterative refinement of theory is
a key component of PDSA methodology, which is often overlooked in practice.
Effectively managing the PDSA process is about more than individual PDSA steps or
cycles. Connecting PDSA cycles together is a messier and far more complicated endeavour
than most of the literature on the approach suggests.6 Progression across cycles is
seldom linear, and double-loop learning17 may lead to revised goals as well as revised interventions; significant oversight is therefore required to manage emergent learning and to coordinate PDSA activities over time.
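To illustrate the distinction in miniature: single-loop learning revises the intervention while holding the goal fixed, whereas double-loop learning may revise the goal itself. The toy sketch below is our own framing of that contrast, with invented names and thresholds, not a rendering of reference 17.

```python
def act(goal: float, observed: float, intervention: str):
    """Toy contrast between single-loop learning (revise the intervention)
    and double-loop learning (also question the goal itself)."""
    if observed >= goal:
        return goal, intervention, "adopt and sustain"
    # Single loop: keep the goal, change the means.
    revised = intervention + " + reminder prompts"   # invented refinement
    # Double loop: if far short after real effort, question the goal itself.
    if goal - observed > 0.3:                        # invented threshold
        return observed + 0.1, revised, "revise goal and retest"
    return goal, revised, "retest"

print(act(goal=0.95, observed=0.55, intervention="staff training"))
print(act(goal=0.95, observed=0.90, intervention="staff training"))
```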
Table 2 describes some of the key failure modes for the execution of the do, study
and act steps of the PDSA process.
Table 2
Key failure modes for executing the do, study and act steps

Do (implement the plan, including both the QI intervention and the data collection plan):

- Failure mode: Failure to implement the QI intervention as intended.27 36
  Potential consequence: Impossible to learn whether the planned QI intervention works as expected; wasted effort; disillusionment among staff involved with intervention design.
- Failure mode: Failure to collect the data as intended.27 36
  Potential consequence: Undercuts the ‘study’ phase; may be difficult or impossible to tell whether the intervention worked as expected; difficult or impossible to learn about the effectiveness of the original data collection plan.
- Failure mode: Failure to capture unanticipated learning.17 22 27
  Potential consequence: Missed learning opportunities (especially for qualitative learning about how and why the intervention did/did not work); project failure; unnecessary PDSA cycles.
- Failure mode: Failure to abandon the ‘do’ phase despite manifest failure or severe negative side effects.24
  Potential consequence: Wasted effort; excessive disruption; adverse outcomes from side effects.

Study (analyse data and compare results to the definition of success; distil and communicate what has been learned from the formal data analysis and unanticipated learning):

- Failure mode: Failure to conduct a study5 or inappropriate failure to follow the study plan.
  Potential consequence: No/limited opportunity to learn whether the intervention works as intended; potential for biased and misleading results.
- Failure mode: Failure to communicate what has been learned.27 46
  Potential consequence: Loss of stakeholder engagement; reinventing the same broken wheel in the service of other QI projects; loss of institutional knowledge if there is turnover among project leaders.

Act (based on what has been learned, either revisit the investigation and problem framing phase; begin a new PDSA cycle at the plan phase; fully implement and sustain the intervention; or end the project without investing further effort):

- Failure mode: Failure to engage in ‘double loop learning’17 that questions the goals of the project in light of what has been learned.
  Potential consequence: Wasted effort continuing to work on the wrong problem, or one that cannot realistically be solved; excessive PDSA cycles spent trying to achieve a goal that is set too high, when a more realistic goal would deliver real improvement.
- Failure mode: Moving too quickly from small-scale tests of change to full-scale implementation and sustainment.5
  Potential consequence: Failure to uncover barriers to broader use prior to implementation; project failure; disruption associated with deimplementation; wasted resources/goodwill.

PDSA, Plan-Do-Study-Act; QI, quality improvement.
The problem with PDSA: failure to invest in rigorous and tailored application
While the PDSA method is conceptually simple, simple does not mean easy. That said,
PDSA is a powerful approach, and projects that make successful use of PDSA can solve
specific quality problems and also help shape the culture of healthcare organisations
for the better. So, the effort required to apply PDSA successfully has a substantial
return on investment. But the resources and supportive context required for success
(including funding, methodological expertise, buy-in and sustained effort)18 are often
underestimated. Inadequate human resources and financial support doom many projects
to fail and also undermine organisational culture, contributing to change fatigue
and disillusionment as yet another project produces no real improvement. It is therefore
crucial, at both the project level and the programmatic level, that the resource requirements
for successful application of PDSA for a given project are well understood and that
the process is well managed.
The barriers to ensuring this type of practice in a healthcare culture of ‘just get
on with it’ and ‘do, do, do’ are difficult to overcome. To be successful, the use
of PDSA must be supported by a significant investment in leadership, expertise and
resources for change.
Academia and researchers have a potential role to play in supporting appropriate rigour in planning and studying, and in understanding how to manage emergent learning while engaging diverse stakeholder groups. Working in partnership will be beneficial in supporting effective use of PDSA and is essential to establishing genuine learning organisations.19 20