Cultivating Loyal Relationships with High Reliability Organizing

Melissa Retter, MA, CPHQ, CPHRM, CPSO, CPXP

Patient Experience Director

Maine Medical Center

 

Learning Objectives:

  1. Recognize the distinction between patient experience and human experience
  2. Describe how harm can erode human experiences and negatively impact trust and loyalty
  3. Identify principles of high reliability organizing that facilitate positively memorable human experiences

When we ponder the essence of the relationships established in healthcare settings, we cannot do so without arriving at the humble and simple realization that we are human beings caring for human beings. Our shared human condition results in a mutual vulnerability to the myriad circumstances we encounter in life. Our patients experience a wide range of influences that affect their well-being, just as our care team members do. The Beryl Institute defines patient experience as ‘the sum of all interactions shaped by an organization’s culture that influence patient perceptions across the continuum of care’ (2021). Beyond the sum of all interactions, patients’ experiences are shaped by the individual interactions they have with other human beings. Together, we are empowered to co-create the consistently safe, compassionate, and positively memorable culture that arises when trusting relationships are formed. It behooves us to expand our mindset to think about relationship-centered care. Our engagement has a profound impact on the caliber of the relationships we cultivate with our patients and their families.

Trusting and loyal relationships with our patients, our colleagues, and ultimately our organization are eroded when we experience harm. While acknowledging the importance of physical harm, we focus our attention here on a few examples of the effects of psychological and emotional harm (Figure 1). Our patients may experience harm if they feel that we are discriminating against them and/or they perceive a lack of compassion, courtesy, or respect. As a result, they may not receive indicated care, or they may direct anger toward caregivers. They may seek care elsewhere in the future and share their negative experience with others, effectively harming our organizational reputation. Our colleagues may experience harm if they perceive a leader in the organization as controlling or condescending. They may feel unimportant and be less likely to speak up in the future when they identify safety concerns or have ideas about how to improve care. They may decide to seek employment elsewhere. Even worse, they may become disengaged and reckless during the delivery of patient care. It is essential to note that nurturing relationships requires more than the mere absence of harm. We must prevent harm and consistently deploy best practices that are grounded in the science of organizing for high reliability.

When it comes to preventing harm and cultivating healthy care and work environments, there are practices within the realm of high reliability organizing that promote the establishment of compassionate and positively memorable human experiences. Operating in a highly reliable manner does not imply that we are error free! Weick and Sutcliffe (2001) aptly reflect on how errors occur in complex systems. They highlight the processes and structures, coupled with increased workload, distractions, over-confidence, and time pressures, that shape the behavior of individuals and groups at the front line. Key tenets of high reliability include a commitment to the right culture, deferent and present leadership, continuous learning, organizational and individual resiliency, prevention, and zero tolerance for human harm (Figure 2). Reducing power distances between patients, families, and colleagues by making eye contact, smiling, and greeting others by their preferred name creates a welcoming and inclusive culture. Commitment to the right culture also involves holding ourselves and our team members accountable for compassionate treatment. Noteworthy hallmarks of high reliability include a focus on preventing harm, a reluctance to simplify, a preoccupation with learning, and deference to expertise. We seek to mitigate damage that can fracture trusting relationships with others by sharing human experience stories. We must go to the sites of care delivery to host listening labs, attend team huddles, and humbly defer to the knowledge of the workers at the front line. Lastly, our well-being as care team members is of paramount importance. When we take care of ourselves and feel appreciated, we become resilient. When we are resilient, we can be mindful and present in each fleeting moment that we are blessed to share with others. We are empowered to tap into our passion as caregivers and co-author consistent and mutually delightful human experiences. The result is devoutly loyal patients and care team members.

Figure 1. Harmful Communication and Behavior Outcomes*

Discriminating
  • Feelings: Inequality, Anger, Resentment
  • Potential Patient Outcomes: Diminished Access to Indicated Care; Long-Term Impact on Prognosis; Future Care Sought Elsewhere; Share Negative Experience with Others
  • Potential Care Team Member Outcomes: Restricted Access to Job Opportunities; Reduced Desire to Speak Up for Safety; Disengagement; Reduced Well-Being; Departure from Employer

Condescending
  • Feelings: Belittlement, Unimportance, Indignation
  • Potential Patient Outcomes: Reluctance to Speak Up with Concerns; Diminished Partnership with Care; Future Care Sought Elsewhere; Share Negative Experience with Others
  • Potential Care Team Member Outcomes: Reduced Desire to Speak Up for Safety; Disengagement; Reduced Well-Being; Departure from Employer

Controlling
  • Feelings: Frustration, Rebellion, Complacency, Apathy
  • Potential Patient Outcomes: Non-Compliance with Care Plan; Adversarial Relationships; Future Care Sought Elsewhere; Share Negative Experience with Others
  • Potential Care Team Member Outcomes: Reckless and Unsafe Challenging Behavior; Disengagement; Reduced Well-Being; Departure from Employer

Dismissing and/or Ignoring
  • Feelings: Isolation, Desperation, Unimportance, Frustration
  • Potential Patient Outcomes: Reluctance to Speak Up with Concerns; Diminished Partnership with Care; Future Care Sought Elsewhere; Share Negative Experience with Others
  • Potential Care Team Member Outcomes: Reduced Desire to Speak Up for Safety; Disengagement; Reduced Well-Being; Departure from Employer

Lacking Compassion, Courtesy and/or Respect
  • Feelings: Anger, Disappointment, Sadness, Unimportance
  • Potential Patient Outcomes: Diminished Partnership with Care; Future Care Sought Elsewhere; Share Negative Experience with Others
  • Potential Care Team Member Outcomes: Reckless and Unsafe Challenging Behavior; Disengagement; Reduced Well-Being; Departure from Employer

Retter, M. 2021

*The examples above are not all-inclusive; harm can affect patients and team members in additional ways.

 

References

  1. Cook, R. and Woods, D. 1994. Operating at the Sharp End: The Complexity of Human Error. In Bogner, M.S. (ed.), Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates.
  2. Schultz, E. A. and Lavenda, R. H. 2005. Cultural Anthropology: A Perspective on the Human Condition. 7th ed. New York: Oxford University Press.
  3. Weick, K.E. and Sutcliffe, K.M. 2001. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey-Bass Publishers.

Want to earn CE Credits? Click Here

Affinity Diagrams

Ghassan Saleh, DMD, DS

Director, MaineHealth Performance Improvement

 Learning Objectives:

  1. Define the affinity diagram and clarify when best to use it.
  2. Illustrate the steps in generating affinity diagrams.
  3. Describe best practices when developing affinity diagrams.

 

Let’s imagine that you are in a grocery store. You are taking care of your weekly shopping and trying to get through your list. You start by grabbing some oranges in the fruit and vegetable department, a gallon of Greek yogurt from the dairy department and some Cheetos from the chip aisle. As you go further down the list you realize that you also need some peaches – back to the fruit and vegetable area. And guess what? You also need some milk and cheese. You cross the store again to the dairy section. Before you know it, you have gone back and forth several times. Wouldn’t it be a lot easier and more effective if you had grouped everything you needed by category? That’s the idea behind the affinity diagram.

An affinity diagram is a visual tool that helps improvement specialists organize the information generated during a brainstorming session. This organization takes place by grouping ideas according to their affinity, or similarity. This way, the generated ideas become easier to act upon. An affinity diagram can also help stimulate new patterns of thinking, particularly when a group of people from diverse backgrounds comes together to find creative solutions to difficult problems.
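In software terms, the grouping step simply maps each idea to a theme. Below is a minimal sketch in Python; the ideas and theme labels are illustrative assumptions, and in a real session the team does this sorting together on a physical or virtual board.

```python
from collections import defaultdict

# Illustrative brainstormed ideas, each tagged with the theme the team chose.
ideas = [
    ("No time set aside for improvement work", "Time"),
    ("Staff unsure how to run a PDSA cycle", "Training"),
    ("Leaders rarely attend huddles", "Leadership"),
    ("Competing priorities crowd out projects", "Time"),
    ("No coaching available after kickoff", "Training"),
]

affinity = defaultdict(list)
for idea, theme in ideas:
    affinity[theme].append(idea)          # group each sticky note under its theme

for header, group in affinity.items():    # header cards with their grouped ideas
    print(header)
    for idea in group:
        print("  -", idea)
```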

When to Use the Affinity Diagram?

Simply put, when your brainstorming session is over. Affinity diagrams aren’t a brainstorming tool themselves but rather a way to organize, consolidate, and act on ideas. They are especially helpful when:

  • Brainstorming with a large group of people
  • You end up with a large number of ideas (or lots of data points)
  • Dealing with complex problems
  • Group consensus is needed

Tips: Good Practice when Developing Affinity Diagrams:

  • Always start your brainstorming session with a clear objective – What is the problem at hand? What are you trying to solve?
  • Assign a facilitator to help keep the conversation focused.
  • Don’t come up with predetermined categories for your affinity diagram. You decide on the categories after all ideas are out on the white board (or virtual board).
  • Lastly, affinity diagrams help you address the three killers of a Kaizen meeting: 1) meet, don’t discuss; 2) discuss, don’t decide; and 3) decide, don’t do. Affinity diagrams can certainly help you decide, but you should always follow that decision with a clear action plan: who is going to do what, when, and how, to make sure that the “do” is happening.

Steps Taken in Generating Affinity Diagrams

There are four steps in developing affinity diagrams. They are:

Step 1:  Display ideas you generated during a brainstorming session.

MaineHealth OpEx program’s preferred way of brainstorming is “brain-writing,” where each idea is written down on one sticky note (or virtual sticky note). Figure #1 is an example of displayed ideas for issues in implementing Continuous Process Improvement in healthcare.

Figure 1: Issues in Implementing Continuous Process Improvement in Healthcare

Step 2: Sort ideas into similar groups.

Figure #2 shows the same ideas as figure #1 organized into similar themes.

Figure 2: Issues Organized into Similar Groups

 Step 3: Create header cards.

Header cards are created for each group by choosing a title that best describes the theme of that group. See Figure #3.

Figure 3: Header Cards for Each Group of Ideas Categorized in Step #2.

*TQL in the far right box = Total Quality Logistics.

Step 4:  Draw finished diagram.

The finished diagram displays each group of ideas with their respective header card at the top of the group. Figure #4 shows the final product of the affinity diagram for issues in implementing continuous improvement in healthcare.

Figure 4: Finished Diagram

Resources:

  1. Project Management Institute. A Guide to the Project Management Body of Knowledge (PMBOK Guide), 6th edition.
  2. Minnesota Department of Health. https://www.health.state.mn.us/communities/practice/resources/phqitoolbox/affinitydiagram.html
  3. The example figures are adapted from the 2013 American College of Cardiology CQI Knowledge Assessment Quiz: Answer Key. https://cvquality.acc.org/docs/default-source/qi-toolkit/03_knowledgeassessmentquiz_answerkey_12-10-13new.pdf?sfvrsn=55478fbf_2

Want to earn CE credits? Click here

Integration of Leading and Lagging Indicators in Healthcare for Quality Improvement

Vijayakrishnan Poondi Srinivasan, MS, LSSBB

Quality Management Engineer

Maine Medical Center

Learning Objectives:

  1. Describe the Performance Measurement System (PMS) and its hierarchical levels
  2. Explain the alignment of leading and lagging indicators in the PMS
  3. Illustrate the integration of leading and lagging indicators in healthcare for quality improvement

An effective Performance Measurement System (PMS) enables an organization to assess whether goals are being achieved and facilitates improvement by clarifying goals, highlighting gaps, and enabling reliable forecasts. Therefore, a strong PMS enables an organization to align its process-level performance with management-level goals.

A comprehensive PMS is constructed at three levels based on the hierarchy of the organization:

  • Strategic Level: The main objective of this level is to translate the needs of the customer and stakeholder into defined goals and objectives.
  • Tactical Level: This level supports the strategic goals and objectives developed in the strategic level and defines the drivers for achieving those goals.
  • Operational Level: This level regulates the day-to-day output relative to schedules, specifications, and other aspects. The main scope at this level is streamlining processes to work as quickly and efficiently as possible.

The PMS is characterized by the mixture of two types of performance measures. They are leading (cause) and lagging (effect) indicators respectively. The leading indicators are performance drivers in the operational level. Also, leading indicators are the operational inputs to the process. The lagging indicators are core outcomes that have a serious impact on the strategic level of the PMS. A balanced PMS should have a mix of outcome measures (lagging indicators) and performance drivers (leading indicators) which yield a cause-and-effect relationship.

Leading Indicators are also termed Performance Indicators (PI) and are present at the operational level and tactical level of the system. PI are the fundamental set of indicators defined for a process. These indicators include the input provided for each process. Strategies formulated at the management level are applied to PI because they serve as the input to the system.

Lagging indicators are termed Key Performance Indicators (KPI) and are present at the strategic and tactical levels of the system. KPI are derived from the fundamental performance measures of a process and are very useful for tracking the day-to-day activities and progress of a process. These indicators guide strategies for achieving the objectives of an organization. The frequency of measurement differs based on the nature of the indicator, but they are often collected on a daily basis. It is important to establish a cause-and-effect relationship by linking the performance measures (PI and KPI) within and between the different levels of the performance measurement system.

Figure 1: Alignment of Leading & Lagging Indicators

The application of this concept in healthcare provides clarity in tracking the performance of the system at various levels of the organization. The example provided below, from the Orthopedics Service Line Clinical Transformation Project, demonstrates how efforts made at the operational level flow into the tactical level and ultimately impact the strategic goals of the organization; a brief code sketch after the measures lists shows how the three levels can be linked.

Problem Statement: Until January of 2018, Centers for Medicare & Medicaid Services (CMS) had Total Knee Replacement (TKR) on the inpatient procedure only list.  This requirement changed and CMS expected institutions to classify TKR patients as inpatient or outpatient and support this with appropriate documentation.

Goal: To have same day or one (1) overnight length of stay (LOS), making the procedure truly outpatient with respect to level of care.

Strategic Level Measures:

  • Reduce average LOS ≤ 1.3 days
  • Reduce reoperation rate (90 days)
  • Reduce readmission rate (90 days)
  • Reduce post-operative ED Visit rate (90 days)
  • Maintain or improve patient satisfaction scores
  • Reduce the variable direct cost per case for the episode of care by “X%”

Tactical Level Measures:

  • Increase % of patients receiving Tranexamic Acid (TXA)
  • Increase % of patients receiving Spinal Anesthesia 

Operational Level Measures:

  • Implement patient inclusion and exclusion criteria for one-day knee replacement procedure
    • Develop and implement a clinical pathway for patients eligible for one-day knee replacement surgery
  • Educate patients on the entire episode of care to prepare them for surgery and avoid any delays with on-time discharge
    • Increase the % of patients attending the educational session prior to surgery
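To make the cause-and-effect linkage concrete, here is a minimal sketch of a PMS as a nested structure. The field names and the specific driver relationships (e.g., which tactical measures drive average LOS) are illustrative assumptions for the TKR example above, not an actual MaineHealth system.

```python
# Illustrative PMS: lagging KPI at the strategic level, drivers at the
# tactical level, leading PI at the operational level.
pms = {
    "strategic": {   # lagging indicators (KPI)
        "avg_los_days": {"target": 1.3, "driven_by": ["pct_txa", "pct_spinal"]},
    },
    "tactical": {    # drivers linking the levels
        "pct_txa": {"driven_by": ["pathway_implemented"]},
        "pct_spinal": {"driven_by": ["pathway_implemented", "pct_preop_education"]},
    },
    "operational": {  # leading indicators (PI)
        "pathway_implemented": True,
        "pct_preop_education": 0.85,
    },
}

def drivers_of(kpi):
    """Walk the cause-and-effect chain from a strategic KPI down to its PI."""
    chain, frontier, seen = [], list(pms["strategic"][kpi]["driven_by"]), set()
    while frontier:
        measure = frontier.pop()
        if measure in seen:
            continue
        seen.add(measure)
        chain.append(measure)
        frontier += pms["tactical"].get(measure, {}).get("driven_by", [])
    return chain

print(drivers_of("avg_los_days"))   # tactical drivers, then operational inputs
```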

References:

  1. Daniels RC, Burns N. A framework for proactive performance measurement system introduction. International Journal of Operations & Production Management. 1997.
  2. Bourne M, Mills J, Wilcox M, Neely A, Platts K. Designing, implementing and updating performance measurement systems. International Journal of Operations & Production Management. 2000.
  3. Kueng P. Process performance measurement system: a tool to support process-based organizations. Total Quality Management. 2000.
  4. Rodriguez RR, Saiz JJA, Bas AO. Quantitative relationships between key performance indicators for supporting decision-making processes. Computers in Industry. 2009.
  5. Tangen S. Performance measurement: from philosophy to practice. International Journal of Productivity and Performance Management. 2004.

Want to earn CE Credits? Click here

 

Frequency Plots: How to tell a quality improvement story utilizing “plots”?

Sonja C. Orff, RN, MS, CNL, CSCT

Quality and Safety Coordinator

Operative and Perioperative Services

Maine Medical Center

September 2021

Learning Objectives:

  1. Describe how to utilize a frequency plot/graphic to display quality improvement data
  2. Identify the benefits of the different types of frequency plots

The adage “a picture is worth a thousand words” carries significant weight when embarking on a quality improvement opportunity. Graphic displays of data offer insights that lists of numbers alone cannot. Visual tools for analyzing trends and patterns in quality are powerful aids for achieving continuous improvement. Frequency plots provide graphical displays of data sets that reveal associations and relationships. There are at least six frequency plot methods from which one can choose.

The frequency graph one chooses depends on the type of data to be analyzed. There are graphics for continuous data, which is data that can take any value (e.g. height, weight, temperature, length) and graphics for attribute data, which is data that can be counted and given a whole numerical value (e.g. surgical case volumes).  The goal is to compare the differences between groups, and/or to study the relationships between variables and values. Below are examples and applications of frequency plot graphics.

Histogram

Histograms showcase the frequency of continuous data values (y axis) by displaying the distribution, or “shape,” of a data set. A histogram also shows the spread of the data set (x axis) while capturing the presence of outliers or gaps in the data points. The histogram utilizes rectangular vertical bars to depict where most of the data occur. These graphics should be constructed using a sample size of at least 30 data points. If the data set is too small, the histogram may not accurately display the distribution.
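As a minimal sketch, the following Python/matplotlib snippet plots a histogram of 30 made-up OR turnover times; the data and labels are illustrative assumptions.

```python
import matplotlib.pyplot as plt

# Illustrative sample of 30 OR turnover times in minutes (n >= 30).
turnover_minutes = [28, 31, 35, 35, 38, 40, 41, 41, 42, 44, 45, 45,
                    46, 48, 50, 51, 52, 54, 55, 57, 58, 60, 62, 63,
                    65, 66, 70, 74, 79, 88]

plt.hist(turnover_minutes, bins=7, edgecolor="black")  # bars touch: continuous data
plt.xlabel("OR turnover time (minutes)")               # spread of the data set
plt.ylabel("Frequency")                                # how often values occur
plt.title("Histogram of OR turnover times")
plt.show()
```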

Dot Plot

If the sample size is less than 30, a dot plot is preferred. A dot plot is a graphical representation of data utilizing dots plotted on a simple scale. Applied to small data sets, dot plots can be used for both continuous and discrete data. Above is a simple example of how a dot plot can be applied.
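A dot plot can be built by stacking one marker per occurrence of each value. A minimal sketch, with made-up counts of delayed first-case starts:

```python
from collections import Counter
import matplotlib.pyplot as plt

# Illustrative small sample (n < 30): delayed first-case starts per day.
delays = [1, 2, 2, 3, 3, 3, 4, 4, 5, 7]

counts = Counter(delays)
for value, n in counts.items():
    plt.scatter([value] * n, range(1, n + 1))   # stack one dot per occurrence

plt.yticks(range(1, max(counts.values()) + 1))
plt.xlabel("Delayed first-case starts per day")
plt.ylabel("Count")
plt.title("Dot plot (n = 10)")
plt.show()
```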

Histogram vs. Bar Chart

Each column or bar of the histogram represents the frequency of occurrence of quantitative continuous data (y axis). The columns or bars in a histogram and bar chart can vary in height and shape. However, as depicted above, the histogram has no spaces between the bars. What’s more, a bar chart shows the comparison of categorical discrete variables as opposed to number ranges.

Pareto Chart

When a bar chart presents the categories of data in descending order of frequency and the cumulative total is represented by a line, it is known as a Pareto chart. Above is an example of a Pareto chart showing the descending order of category data and the cumulative total trend line. Pareto charts are discussed in more detail by Alan Picarillo in the June 2020 edition of the MITE QI/PS Hot Topic.
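A minimal sketch of a Pareto chart: sort category counts in descending order, then overlay the cumulative percentage on a second axis. The delay categories and counts are illustrative assumptions.

```python
import matplotlib.pyplot as plt

# Illustrative causes of surgical case delays, already in descending order.
categories = ["Late arrival", "Missing consent", "Equipment", "Staffing", "Other"]
counts = [42, 25, 14, 9, 5]

# Cumulative percentage for the trend line.
cumulative = [sum(counts[: i + 1]) / sum(counts) * 100 for i in range(len(counts))]

fig, ax = plt.subplots()
ax.bar(categories, counts)                       # descending-frequency bars
ax2 = ax.twinx()                                 # second axis for the line
ax2.plot(categories, cumulative, marker="o", color="tab:red")
ax2.set_ylim(0, 110)
ax.set_ylabel("Frequency")
ax2.set_ylabel("Cumulative %")
plt.title("Pareto chart of case delay causes")
plt.show()
```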

Stem and Leaf Plot

The stem and leaf plot is a frequency graphic that is less utilized but worth considering as a rapid approach to analyzing and displaying data. The key difference between this plot and a histogram is that a stem and leaf plot can be constructed manually, without analytical software. Furthermore, a stem and leaf plot shows individual data points, resembling a table, whereas a histogram does not. The stem on the left displays the first digit(s) and the leaf on the right displays the last digit. In the example, one can see that individual data points in the 20-29 range occur most often (four times).
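Because the stem is just the leading digit(s), a stem and leaf display is easy to generate by hand or with a few lines of code. A minimal sketch with illustrative data:

```python
from itertools import groupby

# Illustrative data; stems are the tens digits, leaves the ones digits.
data = sorted([12, 18, 21, 23, 25, 27, 34, 36, 41])

for stem, group in groupby(data, key=lambda x: x // 10):
    leaves = " ".join(str(x % 10) for x in group)
    print(f"{stem} | {leaves}")

# Output:
# 1 | 2 8
# 2 | 1 3 5 7     <- the 20-29 range occurs most often (four values)
# 3 | 4 6
# 4 | 1
```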

Box and Whisker Plot

The box and whisker plot shows the following noteworthy statistics of a data set: the median, the maximum and minimum values, and the upper and lower quartiles. The data are plotted so that the top 25% and the bottom 25% of the data points are represented by the two whiskers. The box in the middle represents the remaining 50% of the data. Box and whisker plots are especially useful when performing a comparison analysis between several data sets. This frequency plot allows for visual comparison of central tendency, the variability of multiple data sets, and the presence of outliers. Above is an example of a horizontal box and whisker plot.
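A minimal sketch comparing two made-up data sets side by side; the unit names and lengths of stay are illustrative assumptions.

```python
import matplotlib.pyplot as plt

# Illustrative lengths of stay (days) on two units.
unit_a = [2, 3, 3, 4, 4, 5, 5, 6, 7, 12]
unit_b = [1, 2, 2, 3, 3, 3, 4, 4, 5, 6]

plt.boxplot([unit_a, unit_b], vert=False)        # horizontal box plots
plt.yticks([1, 2], ["Unit A", "Unit B"])
plt.xlabel("Length of stay (days)")
plt.title("Box and whisker comparison of two units")
plt.show()
```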

Selecting a frequency plot that best tells the quality improvement story has many benefits. Visual presentation of data can motivate and provide an opportunity to elicit contribution and buy in from stakeholders. These graphics also provide a method and means in which to monitor for success, change, and opportunities for improvement.

References

CIToolKit. (2020). Graphical Analysis. Retrieved September 12, 2021, from https://citoolkit.com/articles/graphical-analysis/

CIToolKit. (2020). Histograms and Boxplots. Retrieved September 12, 2021, from https://citoolkit.com/articles/histograms-and-boxplots/

Hessing, T. (n.d.). Frequency Plots. Retrieved September 10, 2021, from https://sixsigmastudyguide.com/frequency-plots/

Model Systems Knowledge Translation Center (MSKTC). (n.d.). Effective Use of Histograms. Retrieved August 19, 2021, from https://msktc.org/lib/docs/KT_Toolkit/Charts_and_Graphs/Charts_Tool_Histograms_508c.pdf

Stem-and-leaf display. (2021, August 14). In Wikipedia. https://en.wikipedia.org/wiki/Stem-and-leaf_display

Want to earn CE credits? Click here!

Innovations in Improvement: Virtual Improvement Coaching during the Covid-19 Pandemic

Suneela Nyack, MS, RN

August 2021

Learning Objectives

  1. Examine pros and cons of a virtual improvement coaching model necessitated by the Covid-19 pandemic
  2. Discuss the value of virtual coaching methods for healthcare improvement teams
  3. Determine opportunities to refine a Virtual Improvement Coaching Model

While the Covid-19 pandemic caused much disruption to established performance improvement workflows, it has also created opportunity for innovation and rapid transformational change.  To stay true to our MaineHealth Quality and Safety Mission, “Create a culture of continuous improvement that promotes quality and value in our healthcare system with an alignment of purpose…,” we resolved to innovate and adapt our traditional coaching methods.

To their credit, many MaineHealth care teams continued Key Performance Indicator (KPI) practice throughout the Covid-19 pandemic. Inspired by their engagement, we turned to contemporary articles, blogs, and webinars to learn more about remote and virtual coaching methods as a complement to traditional techniques. Many authors underscored the value of establishing teamwork and camaraderie as key ingredients for the success of remote teams, measured in terms of achieving strategic goals. Laura Spawn, writing for Forbes (2020)1, notes, “Overall, achieving goals as a team creates a culture of clear communication, job satisfaction, and motivation.” She asserts this outcome is best achieved when all employees pitch in to meet strategic goals. In a similar vein, Graham Kenny (Harvard Business Review, 2020)2 discusses the importance of aligned top-down and bottom-up relationships for high-value KPIs and strategy deployment, causality, and rapid responsiveness to change. In an earlier HBR publication, Dhawan and Chamorro-Premuzic3 suggest best practices to minimize the challenges of working with remote teams: clarity of written communications, avoiding digital overload, establishing communication norms, customizing approaches for individuals, and lastly, creating intentional space for celebrations.

Six months after we launched virtual improvement coaching, we surveyed department leaders and teams active with KPIs and Operational Excellence to learn what we could do better. All respondents agreed that their Op Ex Coach adequately addressed their questions and empowered them to advance their improvement work. More than 90% reported they had a strong relationship with their Op Ex Coach and were comfortable reaching out for help. This feedback suggests early success of our emerging Virtual Coaching Model and, importantly, offers direction for improvement. In combination with wisdom gleaned from the literature, we identified opportunities for refining virtual coaching techniques:

 

  1. Establish strong relationships early in the process to foster camaraderie and teamwork as precursors for successful virtual coaching.
  2. Invest time in pre-work, such as agenda planning, to improve productivity during coaching sessions.
  3. When possible, concurrently engage key stakeholders located in diverse settings to optimize value for the improvement team.
  4. Develop proficiency to fully leverage screen-sharing capabilities on video conferencing platforms.
  5. Deploy specific facilitation techniques to empower problem articulation, explain complex processes, and generate improvement ideas in a virtual forum.

In summary, in keeping with our core values and mission, we have innovated an effective Virtual Improvement Coaching Model to support care teams and leaders. Guidance from a range of authors and PDSA (Plan-Do-Study-Act) thinking has led to proven virtual coaching workflows, optimized by proficiency in videoconferencing applications. Unexpected wins of a Virtual Coaching Model are expanded capacity, and the opportunity to engage key stakeholders from different locations at the same time.  Lastly, emerging expertise with Virtual Coaching techniques adds to our improvement toolkit, and opens the door to further innovations.

 

References

  1. Spawn, L. (2020). Four Strategies for Setting Measurable Goals in a Remote Work Environment. Forbes. https://www.forbes.com/sites/forbeshumanresourcescouncil/2020/01/29/four-strategies-for-setting-measurable-goals-in-a-remote-work-environment/?sh=7f9441603014
  2. Kenny, G. (2020). What Are Your KPIs Really Measuring? Harvard Business Review. https://hbr.org/2020/09/what-are-your-kpis-really-measuring
  3. Dhawan, E. and Chamorro-Premuzic, T. (2018). How to Collaborate Effectively If Your Team Is Remote. Harvard Business Review. https://hbr.org/2018/02/how-to-collaborate-effectively-if-your-team-is-remote

Want to earn CE credits? Click here!

Driver Diagrams: Connecting Your Aim to Your Actions

July 2021

Olivia Morejon, MS

Improvement Specialist II

Maine Medical Center/ MaineHealth

 

Learning Objectives

  1. Describe when to use a driver diagram
  2. Differentiate between the aim, primary drivers, secondary drivers, and change ideas/interventions
  3. Construct and facilitate the use of a driver diagram with a group

Sometimes the aim of your Quality Improvement work can seem overwhelmingly broad, and much can feel out of your control. As the old adage “How do you eat an elephant?” reminds us, in those times it can be very helpful to break down a large goal into smaller, more manageable pieces. In Improvement Science, this leads us to driver diagrams. A driver diagram visually breaks down and summarizes a larger aim into the smaller goals and steps that will ultimately help achieve that aim.

A driver diagram consists of four interconnected parts: the aim (or goal), the primary drivers, the secondary drivers, and change ideas (or interventions). Your aim is the overarching goal of the project, or in the healthcare setting, what is ultimately meaningful to patients. It should be measurable and achievable, and it should be summarized in one or two sentences. The primary drivers are the large areas you will need to work on in order to achieve your aim. The secondary drivers are what need to be in place to achieve your primary drivers. It can be difficult to differentiate between primary and secondary drivers; reviewing a process map can help identify larger areas to use as primary drivers and smaller steps that make up secondary drivers. Finally, the change ideas are what you or your team would like to test in order to move toward the aim. Together, the change ideas can also form your project plan: the actions and changes to make in order to achieve your goal. The example below illustrates all four components of a driver diagram.

Example: reproduced from Reference 1
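As a minimal sketch, a driver diagram can also be represented as a nested structure in which every change idea traces back to the aim. The aim, drivers, and ideas below are illustrative assumptions, not the IHI example.

```python
# Illustrative driver diagram: aim -> primary -> secondary -> change ideas.
driver_diagram = {
    "aim": "Reduce 30-day readmissions on Unit X by 20% by June",
    "primary_drivers": {
        "Discharge planning": {
            "Medication reconciliation": ["Pharmacist review before discharge"],
            "Follow-up appointments": ["Schedule PCP visit before patient leaves"],
        },
        "Patient education": {
            "Teach-back method": ["Train nurses in teach-back"],
        },
    },
}

# Walk the diagram: every change idea connects back to the aim.
for primary, secondaries in driver_diagram["primary_drivers"].items():
    for secondary, ideas in secondaries.items():
        for idea in ideas:
            print(f"{driver_diagram['aim']} <- {primary} <- {secondary} <- {idea}")
```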

 

A driver diagram is most often helpful early in planning a project. It can help transform a larger, global goal into actionable goals and interventions. But putting together a driver diagram by yourself can be difficult, and the final product will be improved by including others involved in the process.

  1. Pull together your team. Gathering the Subject Matter Experts for your process is key to the success of your project. Choose stakeholders from all areas of the process.
  2. Develop your aim. This should be the overall goal of the project, guided by data to focus on big quality issues and determine achievable targets for improvement.
  3. Brainstorm drivers. Work as a team to generate drivers: areas or items that would contribute toward achieving your aim. You can either shout out drivers for a facilitator to record or have team members record their ideas on sticky notes to later share with the whole group.
  4. Group similar ideas to form primary drivers. Similar areas of interest can be used as primary drivers, while the more specific details will form the secondary drivers.
  5. Expand upon the grouped ideas to form secondary drivers. This can be done both by reviewing some of the specific ideas within the grouped primary drivers and by brainstorming interventions specific to each primary driver.
  6. Ask the group to identify change ideas for each secondary driver. These will be your tests of change to implement within your process.
  7. Revisit the driver diagram as part of the PDSA cycle. Update the document with new change ideas as they emerge.

 

References:

  1. Driver Diagram | IHI – Institute for Healthcare Improvement. (2016). IHI- Institute for Healthcare Improvement. http://www.ihi.org/resources/Pages/Tools/Driver-Diagram.aspx
  2. Understanding Driver Diagrams. (n.d.). Life QI System. https://help.lifeqisystem.com/driver-diagrams/understanding-driver-diagrams (accessed July 19, 2021)

Want to earn CE credits? Click here!

 

Run Charts

Alan Picarillo, MD, FAAP

Medical Director of NICU/CCN

The Barbara Bush Children’s Hospital

Learning objectives:

  1. Describe why dynamic data is preferable to static data for quality improvement
  2. Explain the basic components of a run chart
  3. Identify the basic rules of a run chart

Measurement of data is a core concept of quality improvement. Analysis of those collected data requires a distinct approach as compared to other areas of research. The usual research model for data is pre- versus post- intervention and that model is an incredibly important tenet of scientific inquiry. Many statistical tools are based upon those comparisons of data and outcomes.  The Model for Improvement, which is familiar to readers of this series, requires that one identifies measures to evaluate the impact of planned changes before considering the change ideas themselves.  The central role of measurement dates back to the work of Walter Shewhart and W. Edwards Deming and their effort to understand and measure data variation.

Traditional healthcare data are primarily static. Basic statistics are oriented toward cause-and-effect relationships in order to determine the significance of differential outcomes. However, data for quality improvement are inherently time-oriented and therefore should be examined over time. Aggregated data before and after an intervention can fail to show important trends that may be visible with more frequent or granular data. Performance measured annually may not show changes that accurately reflect the impact of an intervention or reveal improvement opportunities, as monthly data might. It is this display of dynamic data over time that forms the foundation for statistical data analysis in quality improvement.

A run chart is the most basic and commonly used graph of time-series data, yet it allows for more rigorous interpretation than simple linear graphs of data. The x-axis represents time and is often plotted in a specific unit of time (e.g., day, week, month), and the y-axis is the measure of interest. Also included on a run chart is the centerline, or measure of central tendency, which is typically the median (Figure 1). It is this centerline that allows for analysis of data variation, and there are rules to distinguish significant data variations (signals) from normal data variations (noise). Also, annotations to the run chart and specific goal lines allow for a complete pictorial representation to illustrate the QI project for others.

Figure 1 Annotated run chart

Commonly used rules for detecting data signals in a run chart are the following (Figure 2); a small code sketch after the list shows how the first two rules can be checked:

  1. Shift: Six or more consecutive points either above or below the median (centerline). Those values that fall on the median are skipped and do not add to or break a shift.
  2. Trend: Five or more consecutive points all going up or all going down.
  3. Too few or too many runs: A significant data variation can be signaled by either too many or too few runs, or crossings of the centerline. Critical value tables exist in the literature to determine significant variation (5% risk of failing the run test for random patterns of data)
  4. Astronomical data point: A data point that is unusually different from the rest of the data points.

Figure 2 Run chart rules
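Below is a minimal sketch that checks the shift and trend rules against a list of monthly values. The data and thresholds are illustrative; in practice, published critical value tables and QI software should govern interpretation.

```python
import statistics

def run_chart_signals(values, shift_len=6, trend_len=5):
    """Flag shifts and trends per the run chart rules above (a sketch)."""
    median = statistics.median(values)
    signals = []

    # Shift: six or more consecutive points on one side of the median.
    # Points that fall on the median are skipped (neither add nor break).
    run, last_side = 0, 0
    for v in values:
        if v == median:
            continue
        side = 1 if v > median else -1
        run = run + 1 if side == last_side else 1
        last_side = side
        if run == shift_len:
            signals.append("shift")

    # Trend: five or more consecutive points all going up or all going down.
    run, last_dir = 1, 0
    for a, b in zip(values, values[1:]):
        d = (b > a) - (b < a)                        # +1 up, -1 down, 0 tie
        run = run + 1 if d != 0 and d == last_dir else (2 if d != 0 else 1)
        last_dir = d
        if run == trend_len:
            signals.append("trend")

    return median, signals

monthly = [12, 14, 11, 15, 16, 17, 18, 19, 20, 13]
print(run_chart_signals(monthly))                    # -> (15.5, ['trend'])
```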

References

Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance. San Francisco, CA: John Wiley & Sons; 2009

Institute for Healthcare Improvement. Institute for Healthcare Improvement: science of improvement: establishing measures. http://www.ihi.org:80/resources/Pages/HowtoImprove/ScienceofImprovementEstablishingMeasures.aspx.  Accessed 21 Apr 2021

Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf. 2011;20:46-51.

Want to earn Continuing Education Credits? Click here

PDSA Cycles for Continuous Improvement

April 2021

Hilary Perrey, MHA, LSSBB

Improvement Specialist II

Maine Medical Center Performance Improvement

Learning Objectives:

  1. Describe what PDSA cycles are and how we use them in continuous improvement.
  2. Explain how to conduct PDSA cycles.
  3. Illustrate a helpful PDSA cycle template.

Have you ever heard a beautiful symphony orchestra and wondered how they achieved such musical perfection? How did a particular surgeon gain advanced technical skill in his or her profession? Plan – Do – Study – Act (PDSA) cycles are a four-step model for change and can be used in achieving continuous improvement – from a symphony to the operating room.

Walter Shewhart, a statistician at Bell Telephone Laboratories, developed the Shewhart Cycle in the 1920’s.1 The Shewhart Cycle is best known as the Plan – Do – Check – Act (PDCA) cycle. In 1993, W. Edwards Deming adapted the PDCA cycle as the Plan – Do – Study – Act (PDSA) cycle.2 Just as a circle has no end, the PDSA cycle can be repeated for continuous improvement. The PDSA cycle is also known as the Deming Cycle or the Deming Wheel.1

The MaineHealth branded PDSA template (Figure 1) can be used in continuous improvement and quality improvement efforts. The following example illustrates a PDSA in action from the Oncology Service Line’s Multidisciplinary Thoracic Review (MATRIx) Program. The MATRIx Program was the Oncology Service Line’s FY19 Clinical Transformation Project and subsequently its FY20 Maine Medical Partners quality improvement initiative. The goal of the MATRIx project was to decrease time to surgery and time to various clinical endpoints for patients on different referral and treatment pathways.

The MATRIx team sought to increase efficiencies in OR scheduling as a means to decrease time to surgery for patients with suspicious lung nodules (Figure 2). In the Plan phase, the baseline average time to surgery in FY19 Q4 was 34 days. Our team shadowed MATRIx scheduling, MATRIx clinic flow, and OR scheduling to understand whether there were opportunities to improve processes and reduce time to surgery. In the Do phase, the clinic’s Office Manager collected data on how often creative scheduling occurred for surgery dates scheduled between 1/7/20 and 1/30/20, to understand how frequently scheduling workarounds were needed. In the Study phase, the creative scheduling data were reviewed. In the Act phase, the Senior Director of Oncology Services met with the Director of Perioperative Services and requested improved OR scheduling for the MATRIx Program. Subsequently, the OR and MATRIx teams collaborated to prioritize thoracic surgeries for cancer patients. The Medical Director of Thoracic Surgery advocated for an increase from one to two dedicated surgical teams so that each thoracic surgeon can operate with their own team, cycling back into Plan. The thoracic surgeons trained an additional surgical nurse and technician and now have two dedicated thoracic surgery teams, which may improve scheduling and time to surgery and increase surgical volume (more Do, Study, and Act). This PDSA cycle demonstrates how incremental improvement can be achieved and how one PDSA cycle leads to another. This is the path of continuous improvement.

Figure 1. MaineHealth PDSA template

The PDSA cycle template and other MaineHealth Performance Improvement tools can be accessed via SharePoint at https://home.mainehealth.org/2/MMC/CenterforPerformanceImprovement/SitePages/AllTools.aspx.

References:

  1. Pelletier, L.R. and Beaudin, C.L. HQ Solutions: Resource for the Healthcare Quality Professional, Fourth Edition, 2018. National Association for Healthcare Quality.
  2. Moen, R. Foundation and History of the PDSA Cycle. Presented at the 16th Annual Deming Research Seminar, February 2010.

Want to earn CE Credits? Click here!

Avoiding the “Whack-a Mole” Approach to Patient Safety Events: the Safety Assessment Code matrix

March 2021

Erin Graydon Baker, MS, RRT, CPPS, CPHRM

Clinical Risk Manager, MaineHealth

Learning Objectives:

  1. Describe how and when to prioritize immediate safety threats
  2. Explain the Safety Assessment Code (SAC) matrix

In the December 2020 MITE Hot Topic, “Prioritization Methods: Which QI Project Solution Ideas Should We Tackle First?,” author Lauren Atkinson describes the impact-to-effort matrix for quality improvement. The impact-to-effort matrix helps us prioritize the most impactful improvements and separate them from efforts that may be too great for the anticipated impact. A prioritization process for safety described by the Institute for Healthcare Improvement (IHI)/National Patient Safety Foundation (NPSF) is similar in intent but more applicable to identifying and classifying adverse events and near misses.

All healthcare personnel are encouraged to report hazards, near misses, and adverse events that reach the patient, regardless of whether injury occurs to the patient or staff. Failure to report can negatively affect our ability to mitigate the risk of harm. Solutions for ensuring staff reporting include an easy-to-use online reporting system, visible actions as the result of reports, and feedback to staff when reports have led to improvements. For some personnel, though, the reports seem to disappear into the “black hole” of reporting systems, where seemingly nothing is done with the information. To those receiving the safety reports, the daily work of reviewing and acting upon all the reports can seem like a poor game of “Whack-a-Mole”: when one issue seems resolved, another similar event pops up somewhere else. It can be both exhausting and non-productive to react to each event. How should we prioritize the most significant events while trending and tracking those events that may be latent errors leading to something harmful?

The IHI/NPSF describes a process called the Safety Assessment Code (SAC) matrix. The SAC multiplies the probability that another event will happen if nothing is done by the actual and/or potential harm to patients or staff to assign a severity score. The highest scores deserve a deeper level of investigation, whereas lower scores indicate events that should be trended and tracked over time. This level of prioritization allows for targeted improvements where they will matter most, without losing sight of those latent errors that provide valuable information over time.

The matrix below describes how to score severity and probability in order to assign an overall safety score.  To use this grid, estimate how frequently this same event might occur. For example, falls might occur frequently, but historically, the actual or potential harm has been low because of the interventions we have in place to reduce serious harm.  A frequent event with minor harm would score “1” and would signal us to trend these.  However, if we had a 10-fold medication error in dosage which, although uncommon, could have a catastrophic impact on the patient, then the score would be “3”. This would warrant a full Root Cause Analysis.
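A minimal sketch of the scoring logic follows. The category names follow the RCA2 tool, but the cell values here are illustrative assumptions chosen to be consistent with the two worked examples above; the published IHI/NPSF matrix should be used in practice.

```python
# Illustrative SAC matrix: severity x probability -> score 1 (lowest) to 3.
SAC = {
    "catastrophic": {"frequent": 3, "occasional": 3, "uncommon": 3, "remote": 3},
    "major":        {"frequent": 3, "occasional": 2, "uncommon": 2, "remote": 2},
    "moderate":     {"frequent": 2, "occasional": 1, "uncommon": 1, "remote": 1},
    "minor":        {"frequent": 1, "occasional": 1, "uncommon": 1, "remote": 1},
}

def triage(severity: str, probability: str) -> str:
    """Map an event's severity and probability to a triage action."""
    score = SAC[severity][probability]
    if score == 3:
        return "score 3: full Root Cause Analysis"
    return f"score {score}: trend and track over time"

print(triage("minor", "frequent"))         # falls example -> trend and track
print(triage("catastrophic", "uncommon"))  # 10-fold dosing error -> full RCA
```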

A trained safety team uses this method best with interrater reliability in scoring and prioritizing events. Understanding SAC helps those who file reports recognize that all reports are reviewed with triage in mind. Some will receive intensive review, while others will contribute to data aggregation and monitoring.

For more details on the probability and severity categories, use this link (1): http://www.ihi.org/resources/Pages/Tools/RCA2-Improving-Root-Cause-Analyses-and-Actions-to-Prevent-Harm.aspx

References

  1. National Patient Safety Foundation. RCA2: Improving Root Cause Analyses and Actions to Prevent Harm. Boston, MA: National Patient Safety Foundation; 2015.

Want to earn CE credits? Go to CloudCME to review the materials, take a short quiz and evaluation!


Introduction to Simulation Modelling

Mohit Shukla, MS, LSSBB

Quality Management Engineer

MaineHealth Performance Improvement Team

Learning objectives:

  1. Describe Discrete Event Simulation with an example
  2. State when simulation might be more appropriate than other Lean improvement tools

Simulation modelling is the application of computational models built to replicate real-life phenomena and/or processes in order to make inferences of interest. It falls within the field of Operations Research and has been applied to complex problems in healthcare since the 1960s [1]. Based on implementation strategy, simulations may be categorized as Continuous, Monte Carlo, Discrete-Event, Agent-Based, or Hybrid simulations [2]. The most frequently applied in healthcare operations is Discrete Event Simulation (DES), which allows us to emulate real-life processes in a software environment, experiment, and assess the impact of changing the variables of a system on the outcomes of interest. For instance, let’s say we wanted to optimize the number of check-out counters open in Hannaford in the last hour of the day to ensure the store is ready to close as early as possible. We could try small tests of change (reduce/hire) until we find the right mix, but quite often that approach is too slow or too expensive. Simulation can help. Start by obtaining the number of check-outs in the last hour by day of week from your sales database, then use the trusty clipboard to understand the distribution of check-out times (e.g., 20% took about 2 minutes, 40% took about 5 minutes, and the rest took longer). From these two inputs, a simulation model can be used to create several scenarios – such as opening 2, 4, 6, or more stations, or redeploying an Associate to assist with packing up groceries instead of running another check-out – and to assess the impact of those choices on both the time taken to clear the queue and the utilization of the assigned resources, without actually changing anything in the store!

DES couples the principles of probability models and queuing theory with large-scale random sampling. While most of it is done in specialized software, small-scale simulation can be done in Excel as well. For instance, going back to the Hannaford example: if we assume that between 50 and 60 people show up to check out between 8pm and 9pm on a weekday, and it takes approximately four minutes on average to check out each person, we can build a simple model in Excel to get started:

By changing the values in columns B, C, and D, you can compare what happens with 2, 4, 6, or more check-out staff. The key, as always with statistical inference, is to have enough values. That is, the more values we have in column B, the closer the data get to being normally distributed and the more robust our estimate of the central tendency becomes. For fun, start with 1,000+ values!
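The same idea can be sketched outside Excel. Below is a minimal Monte Carlo sketch of the Hannaford example in Python; the arrival counts and service-time mix are illustrative assumptions, and the model simplifies by treating all customers as queued at once rather than arriving over the hour.

```python
import random
import statistics

def minutes_to_clear(n_lanes, trials=1000):
    """Average time to clear the closing-hour queue (a sketch)."""
    results = []
    for _ in range(trials):
        arrivals = random.randint(50, 60)            # customers in the last hour
        # Illustrative service-time mix: 20% ~2 min, 40% ~5 min, rest longer
        services = random.choices([2, 5, 8], weights=[20, 40, 40], k=arrivals)
        lanes = [0] * n_lanes                        # queued minutes per lane
        for s in services:
            lanes[lanes.index(min(lanes))] += s      # join the shortest lane
        results.append(max(lanes))                   # last lane to finish
    return statistics.mean(results)

for n in (2, 4, 6):
    print(f"{n} lanes -> ~{minutes_to_clear(n):.0f} minutes to clear")
```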

DES has found several areas of application in healthcare at Maine Medical Center – for instance, streamlining workflows at the Covid clinics, estimating the impact of the surgical schedule on Intermediate Care (IMC) bed needs, forecasting the number of emergency department beds needed over the next 5, 10, and 15 years, and many others.

 

Figure 1 A simulation model to assess workflows at one of the Covid Vaccine Clinics

Figure 2 A model to simulate ED capacity and project needs for the next decade

Given the effort involved in building a good simulation, it is always best to first ask, “What are you trying to achieve?” In process improvement, it never hurts to start with understanding the process (preferably with the right control charts!), conducting a root-cause analysis, and trying out a few PDCA (Plan-Do-Check-Act) cycles. If, however, we find ourselves dealing with a complex system composed of many interacting factors and expensive tests of change, simulation can help build and test different solutions or alternatives to recommend the best place to start.

References:

[1] Henderson, S.G., Biller, B., Hsieh, M.H., Shortle, J., Tew, J.D., Barton, R.R., Brailsford, S.; Tutorial: Advances and challenges in healthcare simulation modeling. In Proceedings of the 2007 Winter Simulation Conference.

[2] Preston White Jr., K; Ingalls, R.G.; The Basics of Simulation. In Proceedings of the 2020 Winter Simulation Conference.

Want to earn CE credits? Click here!