Innovations in Improvement: Virtual Improvement Coaching during the Covid-19 Pandemic

QIPS (Quality Improvement Patient Safety) Hot Topic – Suneela Nyack, MS, RN, August 2021


Learning Objectives

  1. Examine pros and cons of a virtual improvement coaching model necessitated by the Covid-19 pandemic
  2. Discuss the value of virtual coaching methods for healthcare improvement teams
  3. Determine opportunities to refine a Virtual Improvement Coaching Model

While the Covid-19 pandemic caused much disruption to established performance improvement workflows, it also created opportunities for innovation and rapid transformational change. To stay true to our MaineHealth Quality and Safety Mission, “Create a culture of continuous improvement that promotes quality and value in our healthcare system with an alignment of purpose…,” we resolved to innovate and adapt our traditional coaching methods.

To their credit, many MaineHealth care teams continued Key Performance Indicator (KPI) practice throughout the Covid-19 pandemic. Inspired by their engagement, we turned to contemporary articles, blogs, and webinars to learn more about remote and virtual coaching methods as a complement to traditional techniques. Many authors underscored the value of establishing teamwork and camaraderie as key ingredients for the success of remote teams, measured in terms of achieving strategic goals. Writing for Forbes, Laura Spawn (2020)1 notes, “Overall, achieving goals as a team creates a culture of clear communication, job satisfaction, and motivation.” She asserts this outcome is best achieved when all employees pitch in to meet strategic goals. In a similar vein, Graham Kenny (Harvard Business Review, 2020)2 discusses the importance of aligned top-down and bottom-up relationships for high-value KPIs and strategy deployment, causality, and rapid responsiveness to change. In an earlier HBR publication, Dhawan and Chamorro-Premuzic3 suggest best practices to minimize the challenges of working with remote teams: clarity of written communications, avoiding digital overload, establishing communication norms, customizing approaches for individuals, and lastly, creating intentional space for celebrations.

Six months after we launched virtual improvement coaching, we conducted a survey of department leaders and teams active with KPIs and Operational Excellence to learn what we could do better. All respondents agreed that their Op Ex Coach adequately addressed their questions and empowered them to advance their improvement work. More than 90% reported they had a strong relationship with their Op Ex Coach and were comfortable reaching out for help. This feedback suggests early success of our emerging Virtual Coaching Model and, importantly, offers direction for improvement. In combination with wisdom gleaned from the literature, we identified the following opportunities for refining virtual coaching techniques:

 

  1. Establish strong relationships early in the process to foster camaraderie and teamwork as precursors for successful virtual coaching.
  2. Invest time in pre-work, such as agenda planning, to improve productivity during coaching sessions.
  3. When possible, concurrently engage key stakeholders located in diverse settings to optimize value for the improvement team.
  4. Develop proficiency to fully leverage screen-sharing capabilities on video conferencing platforms.
  5. Deploy specific facilitation techniques to empower problem articulation, explain complex processes, and generate improvement ideas in a virtual forum.

In summary, in keeping with our core values and mission, we have developed an effective Virtual Improvement Coaching Model to support care teams and leaders. Guidance from a range of authors and PDSA (Plan-Do-Study-Act) thinking has led to proven virtual coaching workflows, optimized by proficiency in videoconferencing applications. Unexpected wins of the Virtual Coaching Model are expanded capacity and the opportunity to engage key stakeholders from different locations at the same time. Lastly, emerging expertise with virtual coaching techniques adds to our improvement toolkit and opens the door to further innovations.

 

References

  1. Spawn, L. (2020). Four Strategies for Setting Measurable Goals in a Remote Work Environment. Forbes. https://www.forbes.com/sites/forbeshumanresourcescouncil/2020/01/29/four-strategies-for-setting-measurable-goals-in-a-remote-work-environment/?sh=7f9441603014.
  2. Kenny, G. (2020). What Are Your KPIs Really Measuring? Harvard Business Review. https://hbr.org/2020/09/what-are-your-kpis-really-measuring.
  3. Dhawan, E. and Chamorro-Premuzic, T. (2018). How to Collaborate Effectively If Your Team Is Remote. Harvard Business Review. https://hbr.org/2018/02/how-to-collaborate-effectively-if-your-team-is-remote.

Want to earn CE credits? Click here!

Driver Diagrams: Connecting Your Aim to Your Actions


July 2021

Olivia Morejon, MS

Improvement Specialist II

Maine Medical Center/ MaineHealth

 

Learning Objectives

  1. Describe when to use a driver diagram
  2. Differentiate between the aim, primary drivers, secondary drivers, and change ideas/interventions
  3. Construct and facilitate the use of a driver diagram with a group

Sometimes the aim of your Quality Improvement work can seem overwhelmingly broad, and much can feel out of your control. As the old adage “How do you eat an elephant?” reminds us (one bite at a time), in those times it can be very helpful to break a large goal down into smaller, more manageable pieces. In Improvement Science, this leads us to Driver Diagrams. A driver diagram visually breaks down and summarizes the larger aim and the smaller goals and steps that will ultimately help achieve that aim.

A driver diagram consists of four interconnected parts: the aim (or the goal), the primary drivers, the secondary drivers, and change ideas (or interventions). Your aim is the overarching goal of the project, or in the healthcare setting, what is ultimately meaningful to patients. It should be measurable and achievable, and it should be summarized in one or two sentences. The primary drivers are the large areas you will need to work on in order to achieve your aim. The secondary drivers are what need to be in place to achieve your primary drivers. It can be difficult to differentiate between primary and secondary drivers; reviewing a process map can help identify larger areas to use as primary drivers and smaller steps that make up secondary drivers. Finally, the change ideas are the specific changes you or your team would like to test in order to move toward the aim. All of the change ideas together can also form your project plan: the actions and changes to make in order to achieve your goal. The example below illustrates all four components of a driver diagram, and the sketch that follows shows the same structure in outline form.

Example: reproduced from Reference 1
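For readers who think in outlines, the same hierarchy can be written as a nested structure. Below is a minimal sketch in Python; the aim, drivers, and change ideas are hypothetical illustrations, not taken from the referenced IHI example.

    # A driver diagram as a nested structure (hypothetical content for illustration).
    driver_diagram = {
        "aim": "Reduce inpatient falls by 20% within 12 months",
        "primary_drivers": [
            {
                "driver": "Reliable fall-risk assessment",
                "secondary_drivers": [
                    {
                        "driver": "Risk screening completed on admission",
                        "change_ideas": [
                            "Add a screening prompt to the admission order set",
                            "Audit screening completion weekly",
                        ],
                    },
                ],
            },
            {
                "driver": "Safe patient environment",
                "secondary_drivers": [
                    {
                        "driver": "High-risk patients visibly identified",
                        "change_ideas": ["Standardize fall-risk wristbands"],
                    },
                ],
            },
        ],
    }

Reading from the top level down (aim, primary driver, secondary driver, change idea) mirrors the left-to-right columns of the diagram itself.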

 

A driver diagram is most often helpful early in planning a project. It can help transform a larger, global goal into actionable goals and interventions. But putting together a driver diagram by yourself can be difficult, and the final product will be improved by including others involved in the process. The steps below outline how to build one with a group:

  1. Pull together your team. Gathering the Subject Matter Experts for your process is key to the success of your project. Choose stakeholders from all areas of the process.
  2. Develop your aim. This should be the overall goal of the project, and guided by data to focus on big quality issues and determine achievable targets for improvement.
  3. Brainstorm Drivers. Work as a team to generate drivers: areas or items that would contribute toward achieving your aim. Team members can either call out drivers for a facilitator to record or write their ideas on sticky notes to share later with the whole group.
  4. Group similar ideas to form Primary Drivers. Similar areas of interest can be used as primary drivers, while the more specific details will form the Secondary Drivers.
  5. Expand upon the grouped ideas to form Secondary Drivers. This can be done both by reviewing some of the specific ideas within the grouped primary drivers and by brainstorming interventions specific to each primary driver.
  6. Ask the group to identify change ideas for each Secondary Driver. These will be your tests of change to implement within your process.
  7. Revisit the Driver Diagram as a part of the PDSA cycle. Update the document with new change ideas as they emerge.

 

References:

  1. Driver Diagram | IHI – Institute for Healthcare Improvement. (2016). IHI- Institute for Healthcare Improvement. http://www.ihi.org/resources/Pages/Tools/Driver-Diagram.aspx
  2. Understanding Driver Diagrams. (n.d.). Life QI System. https://help.lifeqisystem.com/driver-diagrams/understanding-driver-diagrams (accessed July 19, 2021)

Want to earn CE credits? Click here!

 

Run Charts


Alan Picarillo, MD, FAAP

Medical Director of NICU/CCN

The Barbara Bush Children’s Hospital

Learning objectives:

  1. Describe why dynamic data is preferable to static data for quality improvement
  2. Explain the basic components of a run chart
  3. Identify the basic rules of a run chart

Measurement of data is a core concept of quality improvement. Analysis of those collected data requires a distinct approach compared to other areas of research. The usual research model for data is pre- versus post-intervention, and that model is an incredibly important tenet of scientific inquiry. Many statistical tools are based upon those comparisons of data and outcomes. The Model for Improvement, which is familiar to readers of this series, requires that one identify measures to evaluate the impact of planned changes before considering the change ideas themselves. The central role of measurement dates back to the work of Walter Shewhart and W. Edwards Deming and their efforts to understand and measure data variation.

Traditional healthcare data are primarily static. Basic statistics are oriented towards cause-and-effect relationships in order to determine the significance of differential outcomes. However, data for quality improvement are inherently time-oriented and therefore should be examined over time. Aggregated data before and after an intervention can fail to show important trends that may be visible with more frequent or granular data; performance measured annually may not reflect the impact of an intervention or reveal improvement opportunities the way monthly data might. It is this display of dynamic data over time that forms the foundation for statistical data analysis in quality improvement.

A run chart is the most basic and commonly used graph of time-series data, but it allows for more rigorous interpretation than simple linear graphs of data. The x-axis represents time and is often plotted based on a specific measure of time (e.g., day, week, month), and the y-axis is the measure of interest. Also included on a run chart is the centerline, or measure of central tendency, which is typically the median (Figure 1). It is this centerline that allows for analysis of data variation, and there are rules to distinguish significant data variations (signals) from normal data variations (noise). Also, annotations to the run chart and specific goal lines allow for a complete pictorial representation to illustrate the QI project for others.

Figure 1 Annotated run chart

Commonly used rules for detecting data signals in a run chart are the following (Figure 2); a minimal sketch implementing the first two rules appears after the figure:

  1. Shift: Six or more consecutive points either above or below the median (centerline). Those values that fall on the median are skipped and do not add to or break a shift.
  2. Trend: Five or more consecutive points all going up or going down
  3. Too few or too many runs: A significant data variation can be signaled by either too many or too few runs, or crossings of the centerline. Critical value tables exist in the literature to determine significant variation (5% risk of failing the run test for random patterns of data)
  4. Astronomical data point: A data point that is unusually different from the rest of the data points.

Figure 2 Run chart rules
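To make the shift and trend rules concrete, here is a minimal Python sketch; it is an illustration of the rules as stated above, not code from the referenced literature.

    from statistics import median

    def has_shift(values, run_length=6):
        # Rule 1 (shift): six or more consecutive points all above or all below
        # the median; points exactly on the median are skipped and do not
        # add to or break the run.
        center = median(values)
        sides = [1 if v > center else -1 for v in values if v != center]
        run, prev = 0, 0
        for side in sides:
            run = run + 1 if side == prev else 1
            prev = side
            if run >= run_length:
                return True
        return False

    def has_trend(values, min_points=5):
        # Rule 2 (trend): five or more consecutive points all going up or all
        # going down; this sketch treats a repeated value as breaking the trend.
        up = down = 1
        for a, b in zip(values, values[1:]):
            up = up + 1 if b > a else 1
            down = down + 1 if b < a else 1
            if up >= min_points or down >= min_points:
                return True
        return False

    # Example: monthly values with an upward trend near the end.
    data = [12, 14, 11, 15, 13, 12, 14, 15, 16, 18, 19]
    print(has_shift(data), has_trend(data))  # False True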

References

Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance. San Francisco, CA: John Wiley & Sons; 2009

Institute for Healthcare Improvement. Science of Improvement: Establishing Measures. http://www.ihi.org/resources/Pages/HowtoImprove/ScienceofImprovementEstablishingMeasures.aspx. Accessed 21 Apr 2021

Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf. 2011;20:46–51.

Want to earn Continuing Education Credits? Click here

PDSA Cycles for Continuous Improvement

April MITE Hot Topic – PDSA Cycles for Continuous Improvement

Author: Hilary Perrey, MHA, LSSBB

Improvement Specialist II

Maine Medical Center Performance Improvement

Learning Objectives:

  1. Describe what PDSA cycles are and how we use them in continuous improvement.
  2. Explain how to conduct PDSA cycles.
  3. Illustrate a helpful PDSA cycle template.

Have you ever heard a beautiful symphony orchestra and wondered how they achieved such musical perfection? How did a particular surgeon gain advanced technical skill in his or her profession? Plan – Do – Study – Act (PDSA) cycles are a four-step model for change and can be used in achieving continuous improvement – from a symphony to the operating room.

Walter Shewhart, a statistician at Bell Telephone Laboratories, developed the Shewhart Cycle in the 1920s.1 The Shewhart Cycle is best known as the Plan – Do – Check – Act (PDCA) cycle. In 1993, W. Edwards Deming adapted the PDCA cycle into the Plan – Do – Study – Act (PDSA) cycle.2 Just as a circle has no end, the PDSA cycle can be repeated for continuous improvement. The PDSA cycle is also known as the Deming Cycle or the Deming Wheel.1

The MaineHealth branded PDSA template (figure 1) can be used in continuous improvement and quality improvement efforts. The following example illustrates a PDSA in action from the Oncology Service Line’s Multidisciplinary Thoracic Review (MATRIx) Program. The MATRIx Program was the FY19 Oncology Service Line’s Clinical Transformation Project and subsequently its FY20 Maine Medical Partners quality improvement initiative. The goal of the MATRIx project was to decrease time to surgery and time to various clinical endpoints for patients on different referral and treatment pathways.

The MATRIx team sought to increase efficiencies in OR scheduling as a means to decrease time to surgery for patients with suspicious lung nodules (figure 2). In the Plan phase, the baseline average time to surgery in FY19 Q4 was 34 days. Our team shadowed MATRIx scheduling, MATRIx clinic flow, and OR scheduling to understand if there were opportunities to improve processes and reduce time to surgery. In the Do phase, the clinic’s Office Manager collected data on how often creative scheduling occurred for surgery dates scheduled between 1/7/20 and 1/30/20, to understand how frequently scheduling workarounds were needed. In the Study phase, the creative scheduling data were reviewed. In the Act phase, the Senior Director of Oncology Services met with the Director of Perioperative Services and requested improved OR scheduling for the MATRIx Program. Subsequently, the OR and MATRIx team collaborated to prioritize thoracic surgeries for cancer patients. The Medical Director of Thoracic Surgery advocated for an increase from one to two dedicated surgical teams so each thoracic surgeon can operate with their own team, cycling back into Plan. The thoracic surgeons trained an additional surgical nurse and technician and now have two dedicated thoracic surgery teams, which may improve scheduling, reduce time to surgery, and increase surgical volume (more Do, Study, and Act). This PDSA cycle demonstrates how incremental improvement can be achieved and how one PDSA cycle leads to another. This is the path of continuous improvement.

Figure 1. MaineHealth PDSA template

The PDSA cycle template and other MaineHealth Performance Improvement tools can be accessed via SharePoint at https://home.mainehealth.org/2/MMC/CenterforPerformanceImprovement/SitePages/AllTools.aspx.

References:

  1. Pelletier, L.R. and Beaudin, C.L. HQ Solutions: Resource for the Healthcare Quality Professional, Fourth Edition. National Association for Healthcare Quality; 2018.
  2. Moen, R. Foundation and History of the PDSA Cycle. 16th Annual Deming Research Seminar, February 2010.

Want to earn CE Credits? Click here!

Avoiding the “Whack-a-Mole” Approach to Patient Safety Events: the Safety Assessment Code matrix

March 2021 MITE Quality Improvement Patient Safety Hot Topic


Erin Graydon Baker, MS, RRT, CPPS, CPHRM

Clinical Risk Manager, MaineHealth

Learning Objectives:

  1. Describe how and when to prioritize immediate safety threats
  2. Explain the Safety Assessment Code (SAC) matrix

In the December 2020 MITE Hot Topic, “Prioritization Methods: Which QI Project Solution Ideas Should We Tackle First?” author Lauren Atkinson describes the impact/effort matrix for quality improvement. The impact/effort matrix helps us prioritize the most impactful improvements and distinguish them from efforts that may be too great for the anticipated impact. A prioritization process described by the Institute for Healthcare Improvement (IHI)/National Patient Safety Foundation (NPSF) is similar in intent but more applicable to identifying and classifying adverse events and near misses.

All healthcare personnel are encouraged to report hazards, near misses, and adverse events that reach the patient, regardless of whether injury occurs to the patient or staff. Failure to report can negatively affect our ability to mitigate the risk of harm. Solutions for ensuring staff reporting include an easy-to-use online reporting system, visible actions as the result of reports, and feedback to staff when the reports have led to improvements. For some personnel, though, the reports seem to disappear into the “black hole” of reporting systems, where seemingly nothing is done with the information. To those receiving the safety reports, the daily work of reviewing and acting upon all the reports can seem like a poor game of “Whack-a-Mole”: when one issue seems resolved, a similar event pops up somewhere else. It can be both exhausting and unproductive to react to each event. How should we prioritize the most significant events while trending and tracking those events that may be latent errors leading to something harmful?

The IHI/NPSF describes a process called the Safety Assessment Code (SAC) matrix. The SAC multiplies the probability that another event will happen if nothing is done by the actual and/or potential harm to patients or staff to assign a score. The highest scores deserve a deeper level of investigation, whereas lower scores indicate events that should be trended and tracked over time. This level of prioritization allows for targeted improvements where they will matter most, without losing sight of those latent errors that provide valuable information over time.

The matrix below describes how to score severity and probability in order to assign an overall safety score. To use this grid, estimate how frequently the same event might occur. For example, falls might occur frequently, but historically the actual or potential harm has been low because of the interventions we have in place to reduce serious harm. A frequent event with minor harm would score “1” and would signal us to trend these events. However, a 10-fold medication dosage error, although uncommon, could have a catastrophic impact on the patient; that event would score “3” and would warrant a full Root Cause Analysis.
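In code, the grid is just a two-way lookup. The sketch below is illustrative only: the cell values are transcribed from the published RCA2 SAC matrix and should be verified against reference 1, while the two scored examples come directly from the paragraph above.

    # Safety Assessment Code lookup (cell values per the RCA2 matrix;
    # verify against reference 1 before relying on them).
    SAC = {
        "catastrophic": {"frequent": 3, "occasional": 3, "uncommon": 3, "remote": 3},
        "major":        {"frequent": 3, "occasional": 2, "uncommon": 2, "remote": 2},
        "moderate":     {"frequent": 2, "occasional": 1, "uncommon": 1, "remote": 1},
        "minor":        {"frequent": 1, "occasional": 1, "uncommon": 1, "remote": 1},
    }

    def sac_score(severity, probability):
        # Returns 1 (trend and track) through 3 (deeper investigation, e.g., full RCA).
        return SAC[severity.lower()][probability.lower()]

    # The two examples from the text:
    assert sac_score("minor", "frequent") == 1         # frequent falls, minor harm: trend
    assert sac_score("catastrophic", "uncommon") == 3  # 10-fold dosing error: full RCA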

This method works best with a trained safety team that has established interrater reliability in scoring and prioritizing events. Understanding the SAC helps those who file reports recognize that all reports are reviewed with triage in mind: some will receive intensive review, while others will contribute to data aggregation and monitoring.

For more details on the probability and severity categories, see the link in reference 1: http://www.ihi.org/resources/Pages/Tools/RCA2-Improving-Root-Cause-Analyses-and-Actions-to-Prevent-Harm.aspx

References

  1. National Patient Safety Foundation. RCA2: Improving Root Cause Analyses and Actions to Prevent Harm. Boston, MA: National Patient Safety Foundation; 2015.

Want to earn CE credits? Go to CloudCME to review the materials, take a short quiz and evaluation!


Introduction to Simulation Modelling


Mohit Shukla, MS, LSSBB

Quality Management Engineer

MaineHealth Performance Improvement Team

Learning objectives:

  1. Describe Discrete Event Simulation with an example
  2. State when simulation might be more appropriate than other Lean improvement tools

Simulation modelling is the application of computational models built to replicate real-life phenomena and/or processes in order to make inferences of interest. It falls within the field of Operations Research and has been applied to complex problems in healthcare since the 1960s [1]. Based on implementation strategy, simulations may be categorized as Continuous, Monte Carlo, Discrete-Event, Agent-Based, or Hybrid simulations [2]. The most frequently applied in healthcare operations is Discrete Event Simulation (DES), which allows us to emulate real-life processes in a software environment, experiment, and assess the impact of changing the variables of a system on the outcomes of interest. For instance, say we wanted to optimize the number of check-out counters open at Hannaford in the last hour of the day, to ensure the store is ready to close as early as possible. We could try small tests of change (reduce staff/hire) until we find the right mix, but quite often that approach is too slow or too expensive. Simulation can help. Start by obtaining the number of check-outs in the last hour by day of week from your sales database, then use the trusty clipboard to understand the distribution of check-out times (e.g., 20% took about 2 minutes, 40% took about 5 minutes, and 20% took longer). From these two inputs, a simulation model can be used to create several scenarios – such as opening 2, 4, 6 or more stations, or redeploying an associate to assist with packing up groceries instead of running another check-out – and to assess the impact of those choices on both the time taken to clear the queue and the utilization of the assigned resources, without actually changing anything on the floor!

DES couples the principles of probability models and queuing theory with large-scale random sampling. While most of it is done in specialized software, small-scale simulation can be done in Excel as well. For instance, going back to the Hannaford example: if we assume that between 50 and 60 people show up to check out between 8pm and 9pm on a weekday, and that it takes approximately four minutes on average to check out each person, we can build a simple model in Excel to get started:

By changing the values in columns B, C and D, you can compare what happens with 2, 4, 6, or more check-out staff. The key, as always with statistical inference, is to have enough values: the more values we have in column B, the closer the distribution of our simulated averages gets to normal and the more robust our estimate of the central tendency becomes. For fun, start with 1,000+ values!
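The same toy model can be scripted outside of Excel. Below is a minimal Python sketch of the Hannaford example; the exponential service-time distribution (mean four minutes) and the uniform 50-60 customer count are illustrative assumptions, not data.

    import random

    def time_to_clear(n_counters, n_trials=1000):
        # Monte Carlo estimate of the minutes needed to check out everyone
        # waiting at closing time, given n_counters open registers.
        results = []
        for _ in range(n_trials):
            customers = random.randint(50, 60)       # assumed 50-60 people in line
            counters = [0.0] * n_counters            # running finish time per register
            for _ in range(customers):
                service = random.expovariate(1 / 4)  # assumed mean of 4 minutes each
                i = counters.index(min(counters))    # next register to free up
                counters[i] += service
            results.append(max(counters))            # time when the last register clears
        return sum(results) / n_trials

    for n in (2, 4, 6):
        print(f"{n} counters: ~{time_to_clear(n):.0f} minutes to clear the queue")

As with the Excel version, the value of the model is in comparing scenarios, not in any single number.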

DES has found several areas of application in healthcare at Maine Medical Center – for instance, streamlining workflows at the Covid clinics, estimating the impact of the surgical schedule on Intermediate Care (IMC) bed needs, and forecasting the number of emergency department beds needed over the next 5, 10, and 15 years, among many others.

 

Figure 1 A simulation model to assess workflows at one of the Covid Vaccine Clinics

Figure 2 A model to simulate ED capacity and project needs for the next decade

Given the effort involved in building a good simulation, it is always best to first ask, “What are you trying to achieve?” In process improvement, it never hurts to start by understanding the process (preferably with the right control charts!), conducting a root-cause analysis, and trying out a few PDCA (Plan-Do-Check-Act) cycles. If, however, we find ourselves dealing with a complex system composed of many interacting factors and expensive tests of change, simulation can help build and test different solutions or alternatives to recommend the best place to start.

References:

[1] Henderson, S.G., Biller, B., Hsieh, M.H., Shortle, J., Tew, J.D., Barton, R.R., Brailsford, S.; Tutorial: Advances and challenges in healthcare simulation modeling. In Proceedings of the 2007 Winter Simulation Conference.

[2] Preston White Jr., K; Ingalls, R.G.; The Basics of Simulation. In Proceedings of the 2020 Winter Simulation Conference.

Want to earn CE credits? Click here!

Design Thinking

January MITE Article – Design Thinking

Stephen Tyzik

Director of Performance Improvement MMC & MMP

 

Learning Objectives

1) Define design thinking and its 5 phases

2) Articulate the need for design thinking in Healthcare

3) Outline a design thinking implementation plan

Over a decade ago, Donald Berwick, MD (President Emeritus and Senior Fellow, Institute for Healthcare Improvement), suggested that healthcare “workers and leaders can often best find the gaps that matter by listening very carefully to the people they serve: patients and families.”1 One framework that aims to leverage the wants, needs, and desires of patients is Design Thinking (DT).

DT is a systematic innovation process that prioritizes deep empathy for end-user experiences and challenges improvement team members to fully understand a problem, with the ultimate goal of developing more comprehensive and effective solutions. Five phases combine to make up the process of DT: Empathize, Define, Ideate, Prototype, and Test.2 One of the unique aspects of DT is that, unlike other sequential improvement frameworks, DT is an iterative process (figure 1) based on new levels of understanding.

Figure 1. Design Thinking Stages

Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright terms and license: CC BY-NC-SA 3.0

Within the healthcare industry, it’s easy to find those who desire the best for the patients they serve. With that in mind, why is the industry still littered with opportunities to clarify confusion, improve experiences, and eliminate waste? One reason may lie in the fact that the healthcare profession is populated with highly educated professionals working in high-stress environments to solve the most complex of medical issues. When we embark on process improvement and systems redesign aimed at improving the efficiency of the end-user experience, it’s natural to believe that we know best. However, DT allows us to acknowledge and adapt to the evolving, complex nature of the healthcare landscape in a way that goes beyond our internal biases.

 

So where do we begin? The answer lies in stage 1 (Empathize) of the DT process: developing deep empathy for the problem you are trying to solve through the lens of the end-user’s experience. This is done by obtaining the voice of the customer, both through patient interviews and by observing the challenge at hand. These steps are crucial to setting aside our own biases and assumptions to gain the insight needed. Stage 2 (Define) is characterized by collating the information obtained through the voice of the customer into problem definitions. These definitions should be framed as core problem statements, written from the perspective of the end-user, which the team aims to improve. Stage 3 (Ideate) begins the process of challenging assumptions and generating ideas within a multidisciplinary team. The power of this stage is in leveraging diversity of perspectives, which provides a broad framework that sets the course for stage 4 (Prototype): an innovative solution design that embraces all possibilities. This stage is highlighted by prototyping the solutions we believe are most likely to address the problem statements we created. Next is stage 5 (Test), testing our solutions. From this point we will either have success or we will gain new knowledge; in turn, this knowledge fuels the iterative process to continually adjust our assumptions and further improve. This process may sound very familiar to another improvement methodology, Plan-Do-Study-Act (PDSA), which is the driver of continuous improvement. In DT, once we clear stage 5, PDSA is utilized to convert the learnings into new tests of change.

 

References:

  1. Berwick DM. Improvement, trust, and the healthcare workforce. BMJ Quality & Safety. 2003;12:i2-i6.
  2. Interaction Design Foundation, Design Thinking, viewed 3 January 2021, https://www.interaction-design.org/literature/topics/design-thinking.
  3. Healthcare Financial Management Association, How design thinking in healthcare can improve customer service 2019, viewed 3 January 2021, https://www.hfma.org/topics/finance-and-business-strategy/article/how-design-thinking-in-healthcare-can-improve-customer-service.html

Want to earn CE credits? Click here!

Prioritization Methods: Which QI Project Solution Ideas Should We Tackle First?

December 2020 PSQI Hot Topic


Lauren Atkinson, MPH, CST

Improvement Specialist Supervisor

Maine Medical Partners

 

Learning Objectives:

  1. Describe when to use a project prioritization tool.
  2. Understand how to set up and facilitate use of an impact/effort matrix with a group.
  3. Differentiate which project ideas to prioritize first using an impact/effort matrix.

Healthcare teams often have many great ideas about how to make their processes better. So how do you decide which idea to tackle first? If your team has already engaged in a root cause analysis that yielded several solution ideas, a priority matrix can help make that decision objective and thorough. An impact/effort matrix is a prioritization tool used to decide which solution ideas to begin working on first, depending on resources (time and cost) and the potential impact the change will have. It leverages stakeholder consensus to find the most efficient path to achieve meaningful goals for patients and staff.

The impact/effort matrix is very easy to use. The level of impact an idea would have is shown on the y-axis, and the level of effort the change would require is shown on the x-axis, as seen in the example below. The matrix is broken down into four quadrants. Quick wins, which have a high impact and require minimal effort to complete, should be started first. If a team is working together for the first time, quick wins can be very important for keeping team members engaged in the improvement process and building excitement around what they can accomplish as a team. The ideas that fall into the ‘major projects’ quadrant should be considered when there are enough resources and leadership buy-in to achieve success. Fill-ins should be completed as time allows, and thankless tasks should be re-evaluated or discarded.

This matrix should be used as a consensus-building tool to help drive decision-making. When facilitating the use of this tool, it’s important to have as many project stakeholders present as possible in order to make the most informed decisions about where ideas fall on the impact/effort matrix. To begin, draw the matrix on a large flip chart, whiteboard, or electronic board. Then, write all of the solution ideas on sticky notes. Prior to assessing the impact of a potential idea, the group should revisit the goal statements and consider the patient, care team, financial, and safety outcomes that idea would produce. When considering the effort the idea would require to implement, the team should consider the time needed, number of staff, cost, and educational gaps that need to be closed. Next, based on the consensus of the group, the team should place the sticky notes on the grid in their respective quadrants as described above. Once complete, the finalized matrix should guide action planning as the team moves into the next phase of idea implementation.
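If the group scores each idea numerically rather than by sticky-note placement, the quadrant logic reduces to two comparisons. The 1-10 scale in the sketch below is a hypothetical convention for illustration, not part of the referenced tools.

    def quadrant(impact, effort, midpoint=5):
        # Hypothetical 1-10 scales; scores above the midpoint count as "high".
        if impact > midpoint and effort <= midpoint:
            return "Quick win: start first"
        if impact > midpoint:
            return "Major project: needs resources and leadership buy-in"
        if effort <= midpoint:
            return "Fill-in: complete as time allows"
        return "Thankless task: re-evaluate or discard"

    print(quadrant(impact=8, effort=3))  # Quick win: start first
    print(quadrant(impact=4, effort=9))  # Thankless task: re-evaluate or discard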

References:

Bens, I. Facilitating With Ease. Wiley, 2018.

Impact Effort Matrix. American Society for Quality.  Accessed November 2, 2020. https://asq.org/quality-resources/impact-effort-matrix

Impact Effort Matrix.  MaineHealth Performance Improvement. Accessed November 13, 2020. https://home.mainehealth.org/2/MMC/CenterforPerformanceImprovement/Tools%20and%20Templates/Impact%20Effort%20Matrix.pdf

 

 

Ch-ch-ch-ch-changes: Understanding Variation


Mark Parker, MD
Vice President, Quality and Safety
Maine Medical Center
November, 2020

Learning Objectives:
1. Recognize the features of a stable system
2. Differentiate common cause from special cause variation

David Robert Jones, a.k.a. David Bowie (1947-2016), and William Edwards Deming (1900-1993) had overlapping lifespans, although it is likely that they did not know each other. The pop icon and the champion of statistical process control shared fame, though, for their association with “Changes” – one created a rock anthem to musical reinvention and the frequently changing world; the other dedicated a career to studying systems and understanding variation. The latter is our focus for this edition of QI/PS Hot Topic.
As described in the Model for Improvement (QI/PS Hot Topic, December 2019), measurement is the answer to the question, “How will we know that a change is an improvement?” Yet measurement is not helpful if we do not interpret it correctly through the application of appropriate statistical rules. Frequently in Quality Improvement, we engage in a project and measure parameters of change over time during our PDSA cycles. It is not uncommon for teams to see early data trends and declare success or failure after a limited number of data points. Pre-conceived biases about the predicted effects of interventions may color the interpretation of results.
Every stable system, whether it is the production line at Toyota or the operating room at Maine Medical Center, has a central tendency (median or mean). And every stable system has data points that fluctuate around the central tendency. Such variation is predictable and is known as “Common Cause” variation (Figure 1) – the response to the normal variables that affect the system every day. For example, it is known that the operating room case volume will increase in the middle of each week due to the elective surgery scheduling tendencies of the surgeons and their support staff. Conversely, the number of cases runs below the daily average on weekends due to a paucity of elective cases. On balance, though, the average number of cases is predictable from week to week. But what if a sudden event perturbs the stable system? Think of the early impacts of the Covid-19 pandemic – elective surgeries were cancelled for a prolonged period and the average number of surgical cases dropped precipitously. This is known as “Special Cause” variation (Figure 2).

Most clinicians and data scientists are facile with traditional descriptive statistics and the concept of statistical significance – the mathematical likelihood that an outcome did not occur by chance alone. The same type of rigor in data interpretation is required in quality improvement. In quality improvement, however, we apply techniques known as statistical process control (SPC). In both forms of statistics, mathematical rules govern the identification of meaningful process or outcome changes. In QI, we use these rules to discern common cause variation from special cause variation.
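As one concrete instance of such a rule, the sketch below builds the control limits of an individuals (XmR) chart, a standard SPC construction, and flags points outside them. It is an illustration only and does not reproduce the full rule set shown in Figure 3.

    def xmr_limits(values):
        # Individuals (XmR) chart: limits are mean +/- 2.66 times the average
        # moving range (2.66 is the standard XmR constant).
        mean = sum(values) / len(values)
        moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
        avg_mr = sum(moving_ranges) / len(moving_ranges)
        return mean, mean + 2.66 * avg_mr, mean - 2.66 * avg_mr

    def special_cause_points(values):
        # One classic special-cause signal: a point outside the control limits.
        mean, ucl, lcl = xmr_limits(values)
        return [(i, v) for i, v in enumerate(values) if v > ucl or v < lcl]

    # Example: a stable daily case count with one large spike.
    daily_cases = [22, 24, 21, 25, 23, 22, 50, 24, 23]
    print(special_cause_points(daily_cases))  # [(6, 50)]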
Special cause variation is neither good nor bad inherently. It depends on the context. The drop in surgical cases due to Covid-19 was not desirable and the special cause was an external and unanticipated factor. However, the return of case volume to previous averages was desirable and was the result of a specific intervention by hospital leadership and surgical services – a deliberate decision to reintroduce elective cases when circumstances were safe to do so. This special cause was attributable to a planned intervention.
In a later edition of Hot Topic, my colleague, Dr. Alan Picarillo, will discuss the traditional methodologies for graphing time-dependent data: run charts for smaller data sets (usually < 20-30 points) and statistical process control charts for larger data sets (≥ 20-30 points). Statistical rules govern the discrimination of common cause variation from special cause variation on run charts and statistical process control charts (Figure 3). As data points accumulate, there is more confidence in the statistical result.
Figure 3. API Rules for Detecting Special Cause in statistical process control (ref.2)

Practitioners of quality improvement must be familiar with the concept of common cause and special cause variation, along with the statistical rules that help discriminate important variation. The risk for improvement teams is misinterpretation of data trends and the effects of interventions over time – cardinal mistakes to the eminent engineer and scholar who studied systems. W. Edwards Deming probably would have been puzzled by, and perhaps disagreed with the lyric, “Time may change me, but I can’t trace time.” Time is, after all, the independent variable of every system process. Nevertheless, he might have appreciated a hit record that titled his life’s work.

References
1. Provost, Lloyd and Murray, Sandra. 2011. The Health Care Data Guide. San Francisco: Jossey-Bass Publishers. www.josseybass.com
2. Scoville Associates. QI-Charts for Microsoft Excel. Version 2.0.23. 2009.

 

 

Integration of Toyota Principles in Healthcare for Quality Improvement


Vijayakrishnan Poondi Srinivasan, MS, LSSBB

Quality Management Engineer

Maine Medical Center

Learning Objectives:

  1. State the principles of the Toyota Production System
  2. Describe the need for application of Toyota Principles in Healthcare
  3. Explain the integration of Toyota Principles with key elements of Healthcare

The Toyota Production System (TPS) is a manufacturing philosophy created by the leading automobile manufacturer Toyota in post-World War II Japan. TPS uses a process-oriented approach focusing on respect for people, teamwork, mutual trust and commitment, elimination of waste, and continuous quality improvement. The principles of TPS are statements of beliefs and values focused on Philosophy, Process, People, and Problem Solving. In contrast to traditional hierarchical management structures, TPS values the importance of partnerships between management and employees at all levels.

Similar to manufacturing organizations, healthcare is facing challenges from rising labor and material costs, intense competition, scarce human resources, customer demand for impeccable quality, and stringent safety and performance standards. Integration of TPS in healthcare helps to create an environment to do the right things: improve flow, improve the quality of life of people, reduce waste, and focus on continuous improvement. Virginia Mason was the first health system to integrate the Toyota management philosophy throughout its entire system. It created the Virginia Mason Production System (VMPS) by combining TPS with elements from the philosophies of kaizen (see PSQI Hot Topic January 2020; S. Tyzik) and lean to improve quality and safety, reduce the burden of work for team members, and decrease the cost of providing care.

In general, application of TPS in healthcare is mainly focused on operational aspects using lean tools. A more integrative approach, focused on the task, structural, and cultural levels of the organization, is outlined below for successful implementation of TPS in healthcare:

  1. All work must be highly specified as to content, sequence, timing, and outcome – accurate documentation of Patient’s medical record, developing processes to streamline the workflow, and tracking patient-centered outcome measures.
  2. Every customer-supplier connection must be direct, and there must be an unambiguous yes-or-no way to send requests and receive responses – direct communication between the patient and the caregiver, improved communication between caregivers regarding the patient’s condition and plan of care, and secured access to patient information.
  3. The pathway for every product and service must be simple and direct – develop and implement “Clinical Pathway” for each treatment initiative based on the “best practice” methodology.
  4. Any improvement must be made in accordance with the scientific method, under the guidance of a teacher, at the lowest possible level of the organization – identify quality improvement projects that focus on improving the workflow of front-line staff and patient safety.

The principles listed above specify how the work is performed (focused on patient care), how knowledge is transferred between workers and within the system (improving the quality of life of caregivers), how production is coordinated between tasks and services (improved flow within the system), and how the process is controlled, measured, and sustained (reduce waste and focus on continuous improvement). Therefore, approaching improvement efforts in healthcare using the principles listed above will create an environment for achieving the organization’s strategic goals much like Toyota – think, develop processes, develop people, and solve problems.

References:

  1. Jeffrey K Liker, Michael Hoseus “Toyota Culture – The Heart and Soul of the Toyota Way”, edition 2008.
  2. Kevin F Collins, Senthil Kumar Muthusamy “Applying the Toyota Production System to a Healthcare Organization: Case Study on a Rural Community Healthcare Provider”, Quality Management Journal, 2007.
  3. Gabriela S Spagnol, Li Li Min, and David Newbold “Lean Principles in Healthcare: An Overview of Challenges and Improvements”, IFAC, 2013.
  4. Joanne Farley Serembus, Faye Meloy, and Bobbie Posmontier “Learning from Business: Incorporating the Toyota Production System into Nursing Curricula”, 2012.
  5. David M Clark, Kate Silvester, and Simon Knowles “Lean Management Systems: Creating a Culture of Continuous Quality Improvement”, 2013.
  6. Virginia Mason Production System – https://www.virginiamason.org/VMPS