Avoiding the “Whack-a-Mole” Approach to Patient Safety Events: the Safety Assessment Code matrix

March 2021 MITE Quality Improvement Patient Safety Hot Topic


Erin Graydon Baker, MS, RRT, CPPS, CPHRM

Clinical Risk Manager, MaineHealth

Learning Objectives:

  1. Describe how and when to prioritize immediate safety threats
  2. Explain the Safety Assessment Code (SAC) matrix

In the December 2020 MITE Hot Topic, “Prioritization Methods: Which QI Project Solution Ideas Should We Tackle First?” author Lauren Atkinson describes the impact/effort matrix for quality improvement. The impact/effort matrix helps us prioritize the most impactful improvements and distinguish them from efforts whose cost may be too great for the anticipated impact. A prioritization process for safety described by the Institute for Healthcare Improvement (IHI)/National Patient Safety Foundation (NPSF) is similar in intent but more applicable to identifying and classifying adverse events and near misses.

All healthcare personnel are encouraged to report hazards, near misses, and adverse events that reach the patient, regardless of whether injury occurs to the patient or staff. Failure to report can negatively affect our ability to mitigate the risk of harm. Solutions for ensuring staff reporting include an easy-to-use online reporting system, visible actions as the result of reports, and feedback to staff when reports have led to improvements. For some personnel, though, reports seem to disappear into the “black hole” of reporting systems, where seemingly nothing is done with the information. To those receiving the safety reports, the daily work of reviewing and acting upon all the reports can feel like a poor game of “Whack-a-Mole”: when one issue seems resolved, another similar event pops up somewhere else. Reacting to each event in isolation is both exhausting and unproductive. How should we prioritize the most significant events while trending and tracking those that may be latent errors leading to something harmful?

The IHI/NPSF describes a process called the Safety Assessment Code (SAC) matrix. The SAC multiplies the probability that another event will happen if nothing is done by the actual and/or potential harm to patients or staff to assign a severity score. The highest scores deserve a deeper level of investigation, whereas lower scores indicate events that should be trended and tracked over time. This level of prioritization allows for targeted improvements where they will matter most, without losing sight of those latent errors that provide valuable information over time.

The matrix below describes how to score severity and probability in order to assign an overall safety score. To use this grid, estimate how frequently the same event might occur. For example, falls might occur frequently, but historically the actual or potential harm has been low because of the interventions we have in place to reduce serious harm. A frequent event with minor harm would score “1” and would signal us to trend these events. However, a 10-fold medication dosage error, although uncommon, could have a catastrophic impact on the patient, so it would score “3”. This would warrant a full Root Cause Analysis.
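To make the triage logic concrete, here is a minimal Python sketch of the scoring step. The category names come from the discussion above, but the numeric cut-points are illustrative only, chosen to reproduce this article’s two examples; consult the RCA2 guide (reference 1) for the published matrix.

```python
# Illustrative SAC scoring sketch -- NOT the published RCA2 matrix.
# The cut-points below are chosen only to match the article's examples.
SEVERITY = ["minor", "moderate", "major", "catastrophic"]       # low -> high
PROBABILITY = ["remote", "uncommon", "occasional", "frequent"]  # low -> high

def sac_score(probability: str, severity: str) -> int:
    """Multiply the two ranks and map the product onto a 1-3 score."""
    p = PROBABILITY.index(probability) + 1   # 1..4
    s = SEVERITY.index(severity) + 1         # 1..4
    product = p * s                          # the "multiply" step from the text
    if product >= 8:
        return 3    # warrants a full Root Cause Analysis
    if product >= 5:
        return 2    # deeper review
    return 1        # trend and track over time

def triage(probability: str, severity: str) -> str:
    """Translate the score into the triage action described above."""
    score = sac_score(probability, severity)
    return {3: "full RCA", 2: "focused review", 1: "trend and track"}[score]
```

With this sketch, a frequent fall with minor harm scores 1 (trend and track), while an uncommon 10-fold dosing error with catastrophic potential scores 3 (full RCA), matching the examples in the text.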

A trained safety team uses this method best with interrater reliability in scoring and prioritizing events. Understanding SAC helps those who file reports recognize that all reports are reviewed with triage in mind. Some will receive intensive review, while others will contribute to data aggregation and monitoring.

For more details on the probability and severity categories, use this link (1): http://www.ihi.org/resources/Pages/Tools/RCA2-Improving-Root-Cause-Analyses-and-Actions-to-Prevent-Harm.aspx

References

  1. National Patient Safety Foundation. RCA2: Improving Root Cause Analyses and Actions to Prevent Harm. Boston, MA: National Patient Safety Foundation; 2015.

Want to earn CME Credit? Go to CloudCME to review the materials, take a short quiz and evaluation!

 

 

 

 

Introduction to Simulation Modelling


Mohit Shukla, MS, LSSBB

Quality Management Engineer

MaineHealth Performance Improvement Team

Learning objectives:

  1. Describe Discrete Event Simulation with an example
  2. State when simulation might be more appropriate than other Lean improvement tools

Simulation modelling is the application of computational models built to replicate real-life phenomena and/or processes in order to make inferences of interest. It falls within the field of Operations Research and has been applied to complex problems in healthcare since the 1960s [1]. Based on implementation strategy, simulations may be categorized as Continuous, Monte Carlo, Discrete-Event, Agent-Based, or Hybrid simulations [2]. The most frequently applied in healthcare operations is Discrete Event Simulation (DES), which allows us to emulate real-life processes in a software environment, experiment, and assess the impact of changing the variables of a system on the outcomes of interest. For instance, suppose we wanted to optimize the number of check-out counters open at Hannaford in the last hour of the day, to ensure the store is ready to close as early as possible. We could try small tests of change (reduce/hire) until we find the right mix, but quite often that approach is too slow or too expensive. Simulation can help. Start by obtaining the number of check-outs in the last hour by day of week from the sales database, then use the trusty clipboard to understand the distribution of check-out time (e.g., 20% took about 2 minutes, 40% took about 5 minutes, and 20% took longer). From these two data sources, a simulation model can be used to create several scenarios – such as opening 2, 4, 6, or more stations, or redeploying an Associate to assist with packing up groceries instead of running another check-out – and to assess the impact of those choices on both the time taken to clear the queue and the utilization of the assigned resources, without actually changing anything in the store!

DES couples the principles of probability models and queuing theory with large-scale random sampling. While most of it is done in specialized software, small-scale simulation can be done in Excel as well. For instance, returning to the Hannaford example – if we assume that between 50 and 60 people show up to check out between 8pm and 9pm on a weekday, and that it takes approximately four minutes on average to check out each person – we can build a simple model in Excel to get started:

By changing the values in columns B, C, and D, you can compare what happens with 2, 4, 6, or more check-out staff. The key, as always with statistical inference, is to have enough values. That is, the more values we have in column B, the closer the data gets to being normally distributed and the more robust our estimate of the central tendency becomes. For fun, start with 1,000+ values!
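The same back-of-the-envelope model can be sketched outside Excel as well. The short Python program below simulates clearing the closing-hour queue with a given number of lanes. The exponential service times (four-minute mean) and the assumption that all customers are already in line when the hour starts are illustrative simplifications, not details from the article.

```python
import random

def time_to_clear(n_lanes, n_customers=55, mean_service=4.0, seed=0):
    """Minutes needed for n_lanes to serve a queue of n_customers.

    Simplifying assumptions: everyone is already in line, and each
    check-out time is drawn from an exponential distribution with a
    four-minute mean (roughly matching the clipboard data above).
    """
    rng = random.Random(seed)
    lane_free_at = [0.0] * n_lanes   # minutes until each lane frees up
    for _ in range(n_customers):
        service = rng.expovariate(1.0 / mean_service)
        next_free = min(range(n_lanes), key=lane_free_at.__getitem__)
        lane_free_at[next_free] += service   # customer takes the first free lane
    return max(lane_free_at)                 # when the last lane finishes

def average_clear_time(n_lanes, reps=500):
    """Average over many random replications -- the 'enough values' point."""
    return sum(time_to_clear(n_lanes, seed=s) for s in range(reps)) / reps
```

Comparing `average_clear_time(2)`, `average_clear_time(4)`, and `average_clear_time(6)` quantifies the trade-off between staffing and closing time without touching the real store.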

DES has found several areas of application in healthcare at Maine Medical Center – for instance, streamlining workflows at the Covid clinics, estimating the impact of the surgical schedule on Intermediate Care (IMC) bed needs, and forecasting the number of emergency department beds needed over the next 5, 10, and 15 years, among others.

 

Figure 1 A simulation model to assess workflows at one of the Covid Vaccine Clinics

Figure 2 A model to simulate ED capacity and project needs for the next decade

Given the effort involved in building a good simulation, it is always best to first ask, “What are you trying to achieve?” In process improvement, it never hurts to start by understanding the process (preferably with the right control charts!), conducting a root-cause analysis, and trying out a few PDCA (Plan-Do-Check-Act) cycles. If, however, we find ourselves dealing with a complex system composed of many interacting factors and expensive tests of change, simulation can help build and test different solutions or alternatives to recommend the best place to start.

References:

[1] Henderson, S.G., Biller, B., Hsieh, M.H., Shortle, J., Tew, J.D., Barton, R.R., Brailsford, S.; Tutorial: Advances and challenges in healthcare simulation modeling. In Proceedings of the 2007 Winter Simulation Conference.

[2] Preston White Jr., K; Ingalls, R.G.; The Basics of Simulation. In Proceedings of the 2020 Winter Simulation Conference.

 

Design Thinking

January MITE Article – Design Thinking

Stephen Tyzik

Director of Performance Improvement MMC & MMP

 

Learning Objectives

1) Define design thinking and its 5 phases

2) Articulate the need for design thinking in Healthcare

3) Outline a design thinking implementation plan

Over a decade ago Donald Berwick, MD (President Emeritus and Senior Fellow, Institute for Healthcare Improvement), suggested that healthcare “workers and leaders can often best find the gaps that matter by listening very carefully to the people they serve: patients and families.”1 One framework that aims to leverage the wants, needs and desires of patients is Design Thinking (DT).

DT is a systematic innovation process that prioritizes deep empathy for end-user experiences and challenges improvement team members to fully understand a problem, with the ultimate goal of developing more comprehensive and effective solutions. Five phases combine to make up the process of DT: Empathize, Define, Ideate, Prototype, and Test.2 One unique aspect of DT is that, unlike other sequential improvement frameworks, DT is an iterative process (Figure 1) based on new levels of understanding.

Figure 1. Design Thinking Stages

Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright terms and license: CC BY-NC-SA 3.0

Within the healthcare industry, it’s easy to find those who desire the best for the patients they serve. With that in mind, why is the industry still littered with opportunities to clarify confusion, improve experiences, and eliminate waste? One reason may lie in the fact that the healthcare profession is populated with highly educated professionals working in high-stress environments to solve the most complex of medical issues. When we embark on process improvement and systems redesign aimed at improving the efficiency of the end-user experience, it’s natural to believe that we know best. However, DT allows us to acknowledge and adapt to the evolving, complex nature of the healthcare landscape in a way that goes beyond our internal biases.

 

So where do we begin? The answer lies in stage 1 (Empathize) of the DT process: developing the deepest possible empathy for the problem you are trying to solve through the lens of the end-user’s experience. This is done by obtaining the voice of the customer, both through patient interviews and by observing the challenge at hand. These steps are crucial to setting aside our own biases and assumptions to gain the insight needed. Stage 2 (Define) is characterized by collating the information obtained through the voice of the customer into problem definitions. These definitions should be framed as core problem statements, written from the perspective of the end-user, which the team aims to improve. Stage 3 (Ideate) begins the process of challenging assumptions and generating ideas within a multidisciplinary team. The power of this stage lies in leveraging diverse perspectives, which sets the course for stage 4 (Prototype): an innovative solution design that embraces all possibilities. This stage is highlighted by prototyping the solutions we believe are most likely to address the problem statements we created. Next is stage 5 (Test), testing our solutions. From this point we will either have success or we will gain new knowledge. In turn, this knowledge fuels the iterative process of continually adjusting our assumptions and further improving. This process may sound very similar to another improvement methodology, Plan-Do-Study-Act (PDSA), which is the driver of continuous improvement. In DT, once we clear stage 5, PDSA is used to convert the learnings into new tests of change.

 

References:

  1. Berwick DM. Improvement, trust, and the healthcare workforce. BMJ Quality & Safety. 2003;12:i2-i6.
  2. Interaction Design Foundation, Design Thinking, viewed 3 January 2021, https://www.interaction-design.org/literature/topics/design-thinking.
  3. Healthcare Financial Management Association, How design thinking in healthcare can improve customer service 2019, viewed 3 January 2021, https://www.hfma.org/topics/finance-and-business-strategy/article/how-design-thinking-in-healthcare-can-improve-customer-service.html

Prioritization Methods: Which QI Project Solution Ideas Should We Tackle First?

December 2020 PSQI Hot Topic


Lauren Atkinson, MPH, CST

Improvement Specialist Supervisor

Maine Medical Partners

 

Learning Objectives:

  1. Describe when to use a project prioritization tool.
  2. Understand how to set up and facilitate use of an impact/effort matrix with a group.
  3. Differentiate which project ideas to prioritize first using an impact/effort matrix.

Healthcare teams often have many great ideas about how to make their processes better. So how do you decide which idea to tackle first? If your team has already engaged in a root cause analysis that yielded several solution ideas, a priority matrix can help make that decision objective and thorough. An impact/effort matrix is a prioritization tool used to decide which solution ideas to begin working on first, based on the resources required (time and cost) and the potential impact the change will have. It leverages stakeholder consensus to find the most efficient path to achieve meaningful goals for patients and staff.

The impact/effort matrix is very easy to use. The level of impact an idea would have is shown on the y-axis, and the level of effort the change would require is shown on the x-axis, as seen in the example below. The matrix is broken down into four quadrants. Quick wins, which have high impact and require minimal effort to complete, should be started first. If a team is working together for the first time, quick wins can be very important for keeping team members engaged in the improvement process and for building excitement around what they can accomplish as a team. The ideas that fall into the “major projects” quadrant should be considered when there are enough resources and leadership buy-in to achieve success. Fill-ins should be completed as time allows, and thankless tasks should be re-evaluated or discarded.

This matrix should be used as a consensus-building tool to help drive decision-making. When facilitating the use of this tool, it’s important to have as many project stakeholders present as possible in order to make the most informed decisions about where ideas fall on the impact/effort matrix. To begin, draw the matrix on a large flip chart, whiteboard, or electronic board. Then write each solution idea on a sticky note. Prior to assessing the impact of a potential idea, the group should revisit the goal statements and consider the patient, care team, financial, and safety outcomes the idea will produce. When considering the effort the idea will require to implement, the team should consider the time needed, number of staff, cost, and educational gaps that need to be closed. Next, based on group consensus, the team should place the sticky notes on the grid in their respective quadrants as described above. Once complete, the finalized matrix should guide action planning as the team moves into the next phase of idea implementation.
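If a team later wants to record the scored ideas electronically, the quadrant logic above is easy to express in code. Here is a minimal Python sketch, assuming each idea has been given 1-10 impact and effort scores; the scores, the example ideas, and the midpoint cutoff are all hypothetical, since in practice placement is by group consensus rather than formula.

```python
def quadrant(impact, effort, cutoff=5):
    """Map 1-10 impact/effort scores onto the four quadrants described above."""
    if impact > cutoff:
        return "quick win" if effort <= cutoff else "major project"
    return "fill-in" if effort <= cutoff else "thankless task"

# Hypothetical ideas scored (impact, effort) by a team:
ideas = {
    "laminated checklist at check-in": (8, 2),   # high impact, low effort
    "new scheduling software": (9, 9),           # high impact, high effort
    "tidy the supply closet": (3, 2),            # low impact, low effort
    "hand-transcribe weekly logs": (2, 9),       # low impact, high effort
}
for name, (impact, effort) in ideas.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

The sticky-note exercise remains the real tool; this merely shows that the four quadrants are a simple two-threshold classification.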

References:

Bens, I. Facilitating With Ease. Wiley, 2018.

Impact Effort Matrix. American Society for Quality.  Accessed November 2, 2020. https://asq.org/quality-resources/impact-effort-matrix

Impact Effort Matrix.  MaineHealth Performance Improvement. Accessed November 13, 2020. https://home.mainehealth.org/2/MMC/CenterforPerformanceImprovement/Tools%20and%20Templates/Impact%20Effort%20Matrix.pdf

Want to earn CME Credit? Go to CloudCME!

 

Ch-ch-ch-ch-changes: Understanding Variation


Mark Parker, MD
Vice President, Quality and Safety
Maine Medical Center
November, 2020

Learning Objectives:
1. Recognize the features of a stable system
2. Differentiate common cause from special cause variation

David Robert Jones, a.k.a. David Bowie (1947-2016), and William Edwards Deming (1900-1993) had overlapping lifespans, although it is likely that they did not know each other. The pop icon and the champion of statistical process control shared fame, though, for their association with “Changes” – one created a rock anthem to musical reinvention and the frequently changing world; the other dedicated a career to studying systems and understanding variation. The latter is our focus for this edition of QI/PS Hot Topic.
As described in the Model for Improvement (QI/PS Hot Topic, December 2019), measurement answers the question, “How will we know that a change is an improvement?” Yet measurement is not helpful if we do not interpret it correctly through the application of appropriate statistical rules. Frequently in quality improvement, we engage in a project and measure parameters of change over time during our PDSA cycles. It is not uncommon for teams to see early data trends and declare success or failure after a limited number of data points. Preconceived biases about the predicted effects of interventions may color the interpretation of results.
Every stable system, whether it is the production line at Toyota or the operating room at Maine Medical Center, has a central tendency (median or mean). And every stable system has data points that fluctuate around that central tendency. Such variation is predictable and is known as “common cause” variation (Figure 1) – the response to the normal variables that affect the system every day. For example, it is known that operating room case volume will increase in the middle of each week due to the elective surgery scheduling tendencies of the surgeons and their support staff. Conversely, the number of cases runs below the daily average on weekends due to a paucity of elective cases. On balance, though, the average number of cases is predictable from week to week. But what if a sudden event perturbs the stable system? Think of the early impacts of the Covid-19 pandemic – elective surgeries were cancelled for a prolonged period and the average number of surgical cases dropped precipitously. This is known as “special cause” variation (Figure 2).

Most clinicians and data scientists are facile with traditional descriptive statistics and the concept of statistical significance – the mathematical likelihood that an outcome did not occur by chance alone. The same type of rigor in data interpretation is required in quality improvement. In quality improvement, however, we apply techniques known as statistical process control (SPC). In both forms of statistics, mathematical rules govern the identification of meaningful process or outcome changes. In QI, we use these rules to discern common cause variation from special cause variation.
Special cause variation is neither good nor bad inherently. It depends on the context. The drop in surgical cases due to Covid-19 was not desirable and the special cause was an external and unanticipated factor. However, the return of case volume to previous averages was desirable and was the result of a specific intervention by hospital leadership and surgical services – a deliberate decision to reintroduce elective cases when circumstances were safe to do so. This special cause was attributable to a planned intervention.
In a later edition of Hot Topic, my colleague, Dr. Alan Picarillo, will discuss the traditional methodologies for graphing small time-dependent data sets (run charts: usually < 20-30 points) and larger data sets (statistical process control charts: ≥ 20-30 points). Statistical rules govern the discrimination of common cause variation from special cause variation on run charts and statistical process control charts (Figure 3). As data points accumulate, there is more confidence in the statistical result.
Figure 3. API Rules for Detecting Special Cause in statistical process control (ref.2)
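Two of the most common special-cause rules can be sketched in a few lines of Python. This is a simplified illustration of an individuals (XmR) chart, not the full rule set in Figure 3: sigma is estimated from the average moving range divided by 1.128 (the d2 constant for subgroups of two), and we check for single points beyond 3 sigma and for runs of 8 or more consecutive points on one side of the center line.

```python
from statistics import mean

def special_cause_signals(data, run_length=8):
    """Flag two common special-cause signals on an individuals (XmR) chart.

    Sigma is estimated from the average moving range / 1.128 (the standard
    XmR estimate), not the overall standard deviation, so that a single
    extreme point does not mask itself by inflating the limits.
    Returns (indices beyond the 3-sigma limits, shift_detected).
    """
    center = mean(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    # Rule 1: any single point outside the control limits.
    beyond_limits = [i for i, x in enumerate(data) if x > ucl or x < lcl]

    # Rule 2: a run of `run_length`+ consecutive points on one side of center.
    run, last_side, shift_detected = 0, 0, False
    for x in data:
        side = (x > center) - (x < center)   # +1 above, -1 below the center line
        run = run + 1 if (side != 0 and side == last_side) else (1 if side else 0)
        last_side = side
        if run >= run_length:
            shift_detected = True            # sustained shift: special cause
    return beyond_limits, shift_detected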

Practitioners of quality improvement must be familiar with the concepts of common cause and special cause variation, along with the statistical rules that help discriminate important variation. The risk for improvement teams is misinterpretation of data trends and of the effects of interventions over time – cardinal mistakes in the eyes of the eminent engineer and scholar who studied systems. W. Edwards Deming probably would have been puzzled by, and perhaps disagreed with, the lyric, “Time may change me, but I can’t trace time.” Time is, after all, the independent variable of every system process. Nevertheless, he might have appreciated a hit record whose title captured his life’s work.

References
1. Provost, Lloyd and Murray, Sandra. 2011. The Health Care Data Guide. San Francisco: Jossey-Bass Publishers. www.josseybass.com
2. Scoville Associates. QI-Charts for Microsoft Excel. Version 2.0.23. 2009.

 

Want to earn CME Credit? Click here to view this tip on CloudCME

Integration of Toyota Principles in Healthcare for Quality Improvement


Vijayakrishnan Poondi Srinivasan, MS, LSSBB

Quality Management Engineer

Maine Medical Center

Learning Objectives:

  1. State the Principles of Toyota Production System
  2. Describe the need for application of Toyota Principles in Healthcare
  3. Explain the integration of Toyota Principles with key elements of Healthcare

The Toyota Production System (TPS) is a manufacturing philosophy created by the leading automobile manufacturer Toyota in post-World War II Japan. TPS uses a process-oriented approach focusing on respect for people, teamwork, mutual trust and commitment, elimination of waste, and continuous quality improvement. The principles of TPS are statements of beliefs and values focused on Philosophy, Process, People, and Problem Solving. In contrast to traditional hierarchical management structures, TPS values partnerships between management and employees at all levels.

Like manufacturing organizations, healthcare faces challenges from rising labor and material costs, intense competition, scarce human resources, customer demand for impeccable quality, and stringent safety and performance standards. Integration of TPS in healthcare helps create an environment for doing the right things: improving flow, improving people’s quality of life, reducing waste, and focusing on continuous improvement. Virginia Mason was the first health system to integrate the Toyota management philosophy throughout its entire organization. It created the Virginia Mason Production System (VMPS) by combining TPS with elements of the philosophies of kaizen (see PSQI Hot Topic January 2020; S. Tyzik) and lean to improve quality and safety, reduce the burden of work for team members, and decrease the cost of providing care.

In general, application of TPS in healthcare has focused mainly on operational aspects using lean tools. A more integrative approach, focused on the task, structural, and cultural levels of the organization, is outlined below for successful implementation of TPS in healthcare:

  1. All work must be highly specified as to content, sequence, timing, and outcome – accurate documentation of Patient’s medical record, developing processes to streamline the workflow, and tracking patient-centered outcome measures.
  2. Every customer-supplier connection must be direct, and there must be an unambiguous yes-or-no way to send requests and receive responses – direct communication between the patient and the caregiver, improved communication between caregivers regarding the patient’s condition and plan of care, and secured access to patient information.
  3. The pathway for every product and service must be simple and direct – develop and implement “Clinical Pathway” for each treatment initiative based on the “best practice” methodology.
  4. Any improvement must be made in accordance with the scientific method, under the guidance of a teacher, at the lowest possible level of the organization – identify quality improvement projects that focus on improving the workflow of front-line staff and patient safety.

The principles listed above specify how the work is performed (focused on patient care), how knowledge is transferred between workers and within the system (improving the quality of life of caregivers), how production is coordinated between tasks and services (improved flow within the system), and how the process is controlled, measured, and sustained (reduce waste and focus on continuous improvement). Therefore, approaching improvement efforts in healthcare using the principles listed above will create an environment for achieving the organization’s strategic goals much like Toyota – think, develop processes, develop people, and solve problems.

References:

  1. Jeffrey K Liker, Michael Hoseus “Toyota Culture – The Heart and Soul of the Toyota Way”, edition 2008.
  2. Kevin F Collins, Senthil Kumar Muthusamy “Applying the Toyota Production System to a Healthcare Organization: Case Study on a Rural Community Healthcare Provider”, Quality Management Journal, 2007.
  3. Gabriela S Spagnol, Li Li Min, and David Newbold “Lean Principles in Healthcare: An Overview of Challenges and Improvements”, IFAC, 2013.
  4. Joanne Farley Serembus, Faye Meloy, and Bobbie Posmontier “Learning from Business: Incorporating the Toyota Production System into Nursing Curricula”, 2012.
  5. David M Clark, Kate Silvester, and Simon Knowles “Lean Management Systems: Creating a Culture of Continuous Quality Improvement”, 2013.
  6. Virginia Mason Production System – https://www.virginiamason.org/VMPS

Storyboard


Sonja Orff RN, MS, CNL, CSCT

Maine Medical Center, Department of Surgery, Operative and Perioperative Services

 

Learning Objectives:

  1. Describe the purpose of a storyboard
  2. Summarize the utilization of a storyboard as a quality improvement tool
  3. Synthesize the key elements of a storyboard

What is a Storyboard?  

A storyboard is a visual collage of a particular subject or subject matter. It is a page-limited document that tells a story. The board can consist of illustrations alone, or illustrations combined with text. Storyboards are an alternative way to communicate, disseminate, and share knowledge and information with a variety of audiences. The condensed platform portrays and organizes the author(s)’ thought processes and the assimilation of ideas as they integrate and converge in some ways and radiate and branch off in others.1

History of the Storyboard

Leonardo da Vinci is thought to have been the first person in history to use storyboards. His talents as an illustrator offer perfect examples of one-page documents telling a clear and concise story. Many of his sketches and works have been used as blueprints to replicate different types of working models2 (Figure 1).

Figure 1. Leonardo da Vinci, Machine Gears Engineer Sketch

In the 1930s Walt Disney Studios was known to utilize storyboards in the pre-production phase as a tool to plan and develop engaging and coherent stories3 (Figure 2).

Figure 2. Walt Disney and storyboard.

In the early 1960s, Hughes Aircraft Company adopted storyboards from the film industry as a business communication tool. The aircraft engineers used the storyboard as a way to organize subject matter in a page-limited document.1

Today, the use of storyboards varies depending on the audience and the innovation of the creators. Examples include their use in ad campaigns, commercials, movies, proposals, and project management.

Storyboards and Quality Improvement

As a quality improvement tool, storyboards can be used to communicate and/or showcase a team’s decisions and the steps undertaken to solve a particular problem. The storyboard pulls in compelling information to educate its audience on how the team came to work on a particular concern and how the team achieved its outcome(s). It is a document that provides information on the whole project. A storyboard keeps all team members on the same page, and creating one can be a team-building process that generates enthusiasm for the project. Quality efforts can be complex, and the storyboard provides a format to showcase difficult-to-understand work while bringing it to life. This visual method of displaying work provides a context for discussion and feedback, in which there is opportunity to acquire important insight. For quality improvement, the storyboard serves as a consistent document that highlights a team’s efforts in an organized, high-level fashion, from the beginning to the end of a project, with the intent to convince or persuade toward an action.4

“When you help people understand what you do, you’ll be more successful in attracting support for your work.” 5

Key Elements of a Quality Improvement Storyboard: Format, Attract, & Share

When you create a storyboard, consider the following essentials6,7:

  • Choose a format, e.g., Plan-Do-Study-Act (PDSA), DMAIC (Define, Measure, Analyze, Improve, Control), or other
  • Define the concern
  • State a clear purpose and problem statement
  • Provide clear example(s) of why the topic is relevant
  • Provide background information and metrics when necessary
  • Showcase the team and the steps taken to reach a goal
  • Show connectivity of work process
  • Ensure it is a page-limited document
  • Attract readers with the use of picture aids
  • Visually exhibit the analysis: fishbone diagrams, control charts, histograms, Pareto charts, surveys
  • Share relevant information
  • Use data to back up recommendations
  • Share barriers and new ideas
  • Share best practice
  • Highlight progress
  • Show innovation

 

Below is an example of a quality improvement storyboard8 (Figure 3).

 

Figure 3. Safe Surgery Debriefing.

References:

  1. Barkman, P. (1985), The Storyboard Method: A Neglected Aspect of Organizational Communication. The Bulletin, September 1985, P 21-23.
  2. StudyMode, 2020, retrieved, August 2020, https://www.studymode.com/essays/a-History-Of-Storyboarding-110969.html#:~:text=Walt%20Disney%20and%20his%20artists%20%22invented%22%20the%20storyboard,became%20the%20planning%20process%20for%20Disney%27s%20entire%20organization.
  3. Carnahan, A. (2013), retrieved, August 2020, https://www.waltdisney.org/blog/open-studio-storyboards
  4. Fraser, S. (2003) Project storyboards: catalysts for collaborative improvement. International Journal of Health Care Quality Assurance, 16, 6/7 p. 300-305.
  5. Harvard Business Review, 2014, retrieved, August 2020, https://hbr.org/2014/07/how-to-tell-a-great-story
  6. Institute for Health Care Improvement (IHI), 2020, retrieved, August 2020, http://www.ihi.org/resources/Pages/Tools/Storyboards.aspx
  7. Lean Six Sigma, 2005, retrieved, August 2020, https://www.slideshare.net/goleansixsigma/lean-six-sigma-storyboard-go-leansixsigmacom?next_slideshow=1
  8. Orff, S. 2019, Safe Surgery Debriefing. Greenbelt Walk, Portland, Maine.

Leveraging Small Incremental Improvement to Deliver Value

August Hot Topic provided by: Suneela Nayak, MS, RN, Senior Director of Operational Excellence

Purpose:

Improvement methods have long been available to leaders and teams. This Hot Topic explores ways in which small incremental improvements can be leveraged to achieve big goals.

Learning Objectives:

  1. Describe lessons learned from how successful healthcare ‘transformers’ have achieved and sustained performance improvement with small incremental improvement.
  2. Discuss ways in which small incremental improvement can yield aggregate change resulting in high value and sustained improvement.

Following traditional thinking, healthcare leaders have believed that transformational change can come only from big, well-resourced projects that can be rapidly implemented. In a recent publication, Richard Bohmer, a physician and former Professor in Business Administration at the Harvard Business School, debunks this thinking based on an examination of organizations that have achieved and sustained substantial performance improvements, such as Seattle’s Virginia Mason Medical Center1. Bohmer notes that these “successful transformers constantly make small-scale changes to their structures and processes over long periods…everything from communicating with patients to cleaning gastroscopes has been redesigned”2.  Major change then emerges from incremental, sustained wins.

In recent years, Operational Excellence has methodically built a platform for incremental workflow improvement.  Five years later, it is well positioned to complement larger projects by yielding dividends such as engaging the care team, delivering high value-low risk return on investment (ROI), and providing a platform for strategy deployment to achieve larger organizational goals.

So, how can small incremental workflow improvements at the local level deliver all this?

First, small incremental improvements work because teams empowered with easy-to-use improvement tools lend their considerable talent to solving problems they care about. This early engagement opens the door to unleashing the full power of the front line to get behind performance improvement.  Bringing deep contextual knowledge, well-coached teams take ownership for finding solutions.  Engagement builds as care teams re-connect to their own call to service, feel valued for their contribution by the daily and repeated signals of trust executives send as they attentively listen to KPI presentations, and witness executive ownership for resolving barriers to care. Intellectually rewarded teams soon develop improvement proficiency and focus their valuable capacity on sequential incremental innovations leading to outcomes that matter, such as key performance metrics and benchmarks.

Second, incremental workflow improvements are usually simpler to implement because they don’t dramatically change current process.  Locally ‘owned and operated’ PDSA cycles focusing on small gains don’t usually need approval, funding, or sponsorship to get started, and importantly, do not usually encounter challenges such as resistance to change.  At least in part, this is because of rediscovered ‘joy in work’ experienced when exhausting and frustrating redundancy and rework are alleviated.

This brings me to how incremental improvement can yield high value-low risk ROI, a concept of particular value in this time of economic constraint. While each small incremental change seldom produces dramatic ROI, it also generally has little or no cost, is low risk, and is rapid in cadence. Additionally, because each improvement is coupled with engaged ownership, improvements realized from such investments can be long lasting.  The accumulation of numerous small incremental improvements can soon yield sustained, aggregate change adding up to meaningful ROI.

Lastly, how can incremental improvement be leveraged for strategy deployment?   The Operational Excellence Platform provides a stable daily management system for executives and care teams to connect and discuss barriers to safe, reliable and effective patient care.   Recent Op Ex pivots to support operations during our Covid-19 experience provide testimony for this assertion.  Carefully designed improvement tools empower teams to systematically align their improvement work with key strategic priorities by focusing on Patients, People, Populations, or Value.  Fluid, bi-directional flow of information is channeled every day during Gemba Walks when teams present their KPIs to rounding executives.  In turn, executives gain rich insights about how high-level decisions made at the policy and budget tables impact the real work of care delivery.  In these ways, each small but successful KPI, along with the richly talented, hardworking and empowered team behind it, moves us closer to achieving the goals of the quadruple aim.

  1. Bohmer RMJ. The Hard Work of Health Care Transformation. N Engl J Med 2016;375(8). August 25, 2016.
  2. Bohmer RMJ. Designing care: aligning the nature and management of health care. Boston: Harvard Business Press, June 2009.

Pareto Chart

Alan P. Picarillo, MD

Maine Neonatology Associates & the Maine Medical Center Department of Pediatrics.

Objectives:

  1. Describe the Pareto principle
  2. State the graphical representation of the Pareto chart

As physicians continue to look for ways to deliver high quality care to their patients and families, newer tools and methods are being developed. Initially developed in manufacturing, quality improvement methods have been slowly adopted by healthcare and have now become standard curriculum taught in medical schools. The IHI Model for Improvement allows teams to create a model for change, test proposed changes in clinical situations, measure the results, and then accept or modify the proposed changes. Additional tools, such as process maps, Pareto charts, Ishikawa diagrams and key driver diagrams, lend further structure to the team during the creation and implementation of a quality improvement initiative. These tools are important in providing structure and visual representation for ongoing quality projects. As participation in quality improvement is becoming an expectation for health care providers, familiarity with these tools will assist teams with implementing improved processes in their local systems of care.

 

The Pareto principle was initially described by management consultant Joseph Juran, who observed that for many events, approximately 80% of the effects come from 20% of the causes1.  The principle was named for Italian economist Vilfredo Pareto, who determined that 85% of the overall wealth in Milan was concentrated in only 15% of the population.  It was later adopted by accident prevention practitioners, who recognized that hazards could be addressed in systematic order: targeted interventions to eliminate the more common causes of injury would be more successful than randomly targeted interventions.

 

Juran adapted this principle of separating the vital few causes of an event from the trivial many, as a majority of organizational effects result from just a few causes.   This became the basis for the Pareto chart (Figure 1), a bar graph in which causative factors for defects in a process are ordered from most frequent to least frequent, allowing the team to concentrate their efforts on the factors that have the greatest impact2.   The horizontal axis of the chart contains the categories of the problem identified and the vertical axis contains the frequency of the measurement.  Simply put, a large proportion of quality problems are created by a small number of causes, allowing for a more focused approach that prioritizes the most frequent problems in a given process.   By focusing on the largest and most frequent issues, which can be graphically represented, the team can focus their efforts to achieve the greatest improvement3.

The Pareto Chart shows the relative frequency of defects in rank-order, and thus provides a prioritization tool so that process improvement activities can be organized to “get the most bang for the buck”, or “pick the low-hanging fruit”.  There are many computerized programs that can construct Pareto charts, from statistical programs or even Microsoft Excel©, although basic charts can even be constructed by hand.  The Pareto chart is a valuable quality improvement tool that allows team members to separate out the “vital few” from the “trivial many” when assessing potential defects in a given process.
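The rank-and-accumulate calculation behind a Pareto chart is simple enough to sketch directly. The short Python example below, using made-up medication-error tallies purely for illustration, sorts categories from most to least frequent and computes the cumulative percentage each successive category accounts for — the same numbers a spreadsheet would plot as bars and a cumulative line:

```python
def pareto_analysis(defect_counts):
    """Rank defect categories by frequency and compute cumulative percentages,
    the ordering used to draw a Pareto chart."""
    total = sum(defect_counts.values())
    ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    result, running = [], 0
    for category, count in ranked:
        running += count
        result.append((category, count, round(100 * running / total, 1)))
    return result

# Hypothetical medication-error tallies (illustrative only)
errors = {"wrong dose": 42, "wrong time": 28, "omitted dose": 15,
          "wrong patient": 8, "wrong route": 4, "other": 3}

for category, count, cum_pct in pareto_analysis(errors):
    print(f"{category:14s} {count:3d} {cum_pct:5.1f}%")
```

In this invented data set the top three categories account for 85% of all errors — the “vital few” a team would target first.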

 

Figure 1. Pareto chart: Example of a Pareto chart of types of medication errors, with the high-frequency errors to the left, representing the “80%,” and the lower-frequency errors to the right, representing the remaining “20%”4.

References

  1. Juran JM, Godfrey AB. Juran’s Quality Handbook (5th edition). New York City: McGraw-Hill; 1998
  2. Wilkinson L. Revising the Pareto Chart. The American Statistician. 2006;60(4).
  3. American Society for Quality. Cause analysis tools: Pareto chart. 2009 [accessed 8/9/2017]; http://www.asq.org/learn-about-quality/cause-analysis-tools/overview/pareto.html
  4. http://www.cec.health.nsw.gov.au/quality-improvement/improvement-academy/quality-improvement-tools/pareto-charts

Finding Balance Between Improvement Discipline & Tool Fatigue

 

Jordan S. Peck, PhD

Vice President of Physician Practice Operations

Southern Maine Health Care

 

Learning Objectives:

  • Describe common challenges associated with performance improvement work
  • Recognize applications of a few improvement tools in appropriate context

When you visit the MaineHealth Center for Performance Improvement (CPI) website, one of the links is “Tools and Trainings.”[1] Whenever I see this link, a montage of eye rolls goes through my head. Generally I have received positive responses to a disciplined quality/performance improvement (QI/PI) project approach and to the corresponding tools, but I have also heard:

  • “Why can’t we ‘just do it’?”
  • “I don’t like all of this data collection; my team prefers to just ‘PDSA’…”
  • “Whatever, I am sure the charter is fine”… 6 months later… “why are we working on this?”
  • “We don’t need a Lean person, I can throw a bunch of post-it notes on the wall!”
  • “Lean Black Belt… are you going to kick our patients?”
  • “There are too many templates to fill out, I don’t have time for this!”
  • “Why do we need a whole process just to get people to do their jobs?”

Figure 1 is a list of Lean tools created by the Lean EdNet.[2]

Modern Lean literature talks about Lean Daily Management Systems (coined “Operational Excellence” at MaineHealth) and culture change as opposed to tools.[3] However, even these high-level initiatives offer a healthy dose of tools such as Letter Charts, Run Charts, Pareto Charts, Action Plan Documents, Strategic Goals driver documents, etc.

Why do all of these tools exist? Why can’t we “just do it”? When leading QI/PI projects, it is difficult to find the balance between being disciplined and getting overwhelmed with tools. When faced with this problem, I keep the following principles in mind:

  1. We don’t know the real problem: I have always loved the phrase (used by TV doctors), “What SEEMS to be the problem?” The phrase implies that the patient will describe a symptom and let the expert really understand the underlying problem. Similarly, if we initiate a project based on a symptom and without a disciplined approach, we treat the symptoms and not the problem. The benefits of avoiding this are obvious, but we shouldn’t throw the whole QI/PI toolbox at it. Often a 5 Whys exercise is enough and you don’t need to make a fishbone diagram; or a process map is enough and you don’t need to make a spaghetti diagram.
  2. Symptoms are experienced in different ways by each stakeholder: Often you think you know the problem, but the person next to you has experienced it in a totally different way. A fundamental to project success is getting everyone on the same page. Some people use an “A3” document for this; others use actual project management-style charters. Remain aware of many charter tool options and pick a tool that ensures the conversation has happened without overwhelming and unnecessary details.
  3. Humans struggle with just ‘getting it done,’ even when they are committed: It is well understood that people struggle with weight, with reading that book on the night stand, and with any other New Year’s resolution. But when someone is struggling to achieve a task at work, we ask why they can’t just do their job. The QI/PI tools are designed to acknowledge that even the most committed person has very real barriers to completing seemingly simple tasks. Like the myriad tools and plans to help people meet their weight loss goals, QI/PI tools, processes, and plans are needed to make it easy to do the right thing. Working in healthcare, I have seen a lot of teams that have identified “just do its” through short conversations, avoiding a deeper project. The problem is that these solutions are often really “just remember its” and rely on a human to figure it out for themselves moving forward. Without a process or method to ensure that an action happens, no real solution has been generated.
  4. The Solution isn’t obvious or it would have been fixed already: Even a published best practice is sometimes unsuccessful in a new context. For this reason, many tools such as PDSA and control charts are designed to ensure that we do not move forward without proving our implemented solution. If it doesn’t work, then it is time to think of a new solution. If you are unwilling to spend the time proving that your solution is successful then you should re-consider working on the project at all.
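The control-chart idea in point 4 — deciding whether a change (or a problem) is a real signal rather than routine variation — can be sketched in a few lines. The example below is a simplified individuals (XmR) chart calculation on hypothetical daily defect counts; the categories and numbers are invented, and it uses the standard moving-range estimate of sigma (average moving range divided by the d2 constant 1.128):

```python
def xmr_limits(samples):
    """Individuals (XmR) chart: center line and 3-sigma control limits,
    with sigma estimated from the average moving range (d2 = 1.128)."""
    center = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return center, center - 3 * sigma, center + 3 * sigma

def special_cause_points(samples):
    """Flag points outside the control limits -- candidate signals that
    something real changed, rather than routine day-to-day variation."""
    center, lcl, ucl = xmr_limits(samples)
    return [x for x in samples if x < lcl or x > ucl]

# Hypothetical daily defect counts (illustrative only)
daily_defects = [10, 12, 11, 9, 10, 11, 30]
print(special_cause_points(daily_defects))  # prints [30]
```

Only the final day falls outside the limits; the rest is common-cause noise that no “just do it” fix should chase.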

When it comes to tool fatigue, we can often be our own worst enemies. If we rush into projects without having the discipline to ensure that our efforts are targeted correctly and that our solutions were truly successful, we create unnecessary, unsustainable work for our colleagues. This has led QI/PI professionals to adopt the phrase “go slow to go fast.”[4] To avoid spinning your wheels, address the four key elements above with some level of discipline. Yet that doesn’t necessitate using the full tool box in every project. Finding the balance requires practice and patience.

 

[1] https://home.mainehealth.org/2/MMC/CenterforPerformanceImprovement/SitePages/Home.aspx

[2] https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-660j-introduction-to-lean-six-sigma-methods-january-iap-2012/lecture-notes-1/

[3] Mann, D., 2014 “Creating a Lean Culture,” 3rd Edition, Routledge, 2014

[4] https://medium.com/@reganbach/go-slow-to-go-fast-8c3055e723ed