Six Sigma Project Reduces Analytical Errors in an Automated Lab
As part of the laboratory’s ongoing performance-improvement process, changed results had been measured for years. Although the average percentage of changed results was consistently below 1% in the laboratory’s three main areas (hematology, coagulation, and chemistry), the administration noted that eight analytical process failures had occurred in the first half of 2003, each resulting in the correction of reported values that affected multiple patients at one time.
The problem was sporadic; there was no clear solution; and correcting the issue would help achieve the core lab’s goal of improving patient care, increasing customer satisfaction, and boosting staff morale. The core lab’s administration believed that reducing the number of failed analytical processes was a worthy goal for a Six Sigma project. A multidisciplinary team of technical management, information systems staff, and physicians assembled to tackle the problem using the Six Sigma define, measure, analyze, improve, and control (DMAIC) approach.
Six Sigma methodology
Six Sigma, a focused, high-impact methodology, uses proven quality principles and techniques to reduce process variance, seeking to confine errors to 3.4 defects per million opportunities (DPMO). It relies on rigorous statistical methods and control mechanisms to tie together quality, cost, process, people, and accountability, and it begins with an understanding of customer requirements and values (referred to as the voice of the customer). Once these are defined, the Six Sigma process enables the identification of factors critical to customer satisfaction. The processes behind these critical factors are then measured and analyzed, and improvement strategies focus on the vital few causes, the “Xs,” that drive defects. The Six Sigma goal is both to reduce variance and to control processes in order to assure compliance with the critical specifications.
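The 3.4 DPMO figure corresponds to a six-sigma process under the conventional assumption of a 1.5-sigma long-term shift. As a sketch of that conversion (the function names here are illustrative, not from the project), the relation can be computed with the standard normal distribution:

```python
from statistics import NormalDist

def sigma_level(dpmo: float, shift: float = 1.5) -> float:
    """Convert defects per million opportunities to a sigma level,
    applying the conventional 1.5-sigma long-term shift."""
    yield_rate = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_rate) + shift

def dpmo_at(sigma: float, shift: float = 1.5) -> float:
    """Inverse: expected DPMO at a given sigma level."""
    return (1 - NormalDist().cdf(sigma - shift)) * 1_000_000

print(round(dpmo_at(6.0), 1))       # 3.4 defects per million at six sigma
print(round(sigma_level(3.4), 2))   # 6.0
```

Without the 1.5-sigma shift convention, six sigma would instead imply roughly two defects per billion, so the shift assumption matters when comparing published sigma scores.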
Defining and measuring the process
During the define phase, the Six Sigma team developed a high-level process map (see Figure 1), with the initial step being preparation of the analyzers for use and the final step being release of the result. The project’s scope covered the process from sample placement on the analyzer to the point at which the result was released in the laboratory information system (LIS). A defect was defined as the need to change a result for any reason after verification.
Also during the define phase, the Six Sigma team needed to convince lab employees that further reduction of changed results was necessary, even though the average changed-result rate was already less than 1%. To accomplish this, the team used change acceleration process tools, such as the threat/opportunity matrix, to demonstrate the benefits of reducing changed results and the disadvantages of maintaining the current changed-result rate. For instance, reducing changed results would increase lab efficiency, improve staff performance and morale, and boost market share. Maintaining the current rate of changed results would ultimately diminish the core lab’s reputation, leading to a loss of revenue and decreased staff morale.
[FIGURE 1 OMITTED]
In the measure phase, the Six Sigma team worked with the lab supervisory staff to perform a measurement-system analysis built on operational definitions. Because the lab already operated at a high sigma level, the measurement system had to be 100% repeatable and reproducible; the team had to ensure that any variation was due to the process, not the measurement system. To achieve this accuracy, the team developed operational definitions to classify errors as procedural, autoverification, sample, clerical, mechanical, or unknown. With the aid of logic trees, the team refined these definitions five times, until all errors were classified consistently and repeatability and reproducibility reached 100%. Statistical analysis using the Six Sigma methodology showed that the lab operated at a 4.8 sigma level.
For the period of May 2003 through July 2003, the laboratory corrected 585 test results out of 1,645,975 results reported. The DPMO was 355. One of the Six Sigma tools, the stakeholder analysis, aided in developing a strategy to gain support for the project from moderately opposed individuals and helped identify those individuals likely to be involved in the process who could serve as resources for the team.
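The baseline metric can be reproduced directly from these counts. A minimal sketch, assuming the conventional 1.5-sigma shift (small rounding or convention differences against the reported 4.8 figure are expected):

```python
from statistics import NormalDist

defects = 585               # corrected results, May through July 2003
opportunities = 1_645_975   # total results reported in the same period

dpmo = defects / opportunities * 1_000_000
sigma = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5  # 1.5-sigma shift

print(round(dpmo))      # 355
print(round(sigma, 1))  # close to the reported 4.8 sigma level
```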
[FIGURE 2 OMITTED]
Analyzing and improving procedures
In the analyze phase, the Six Sigma team set an aggressive goal: reduce analytical errors by 35%, to a DPMO of 230 and a sigma score of 5.0. As a process moves toward a sigma level between 5 and 6, eliminating defects without eliminating the human factor becomes increasingly difficult.
Graphical analysis using Pareto charts (see Figure 2) indicated that 86% of the defects could be attributed to two types of errors: procedural errors committed by employees while reviewing results (52% of defects) and autoverification errors by the LIS (34%). This discovery was enlightening; the standard operating procedures (SOPs) were not accomplishing their intended goals.
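The Pareto reasoning amounts to sorting error categories by count and accumulating their share of the total. A sketch with illustrative per-category counts (the article reports only the percentages, so these tallies are hypothetical, chosen to match the 52% and 34% shares of the 585 baseline defects):

```python
# Hypothetical counts per error category, consistent with the reported
# shares of 585 defects (52% procedural, 34% autoverification).
counts = {
    "procedural": 304,
    "autoverification": 199,
    "sample": 35,
    "clerical": 25,
    "mechanical": 15,
    "unknown": 7,
}

total = sum(counts.values())
cumulative = 0.0
# A Pareto chart lists categories from largest to smallest with a
# running cumulative percentage.
for category, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    cumulative += 100 * n / total
    print(f"{category:17s} {n:4d}  cumulative {cumulative:5.1f}%")
```

With these numbers the top two categories accumulate to roughly 86%, reproducing the chart’s headline finding.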
Six Sigma focuses on process, not people. Before the analysis, the team members had been convinced the culprit was something beyond the core lab’s control, such as unacceptable specimens received from the rapid response labs or from outreach physicians’ offices. The lab had established SOPs for all operations, yet the staff was having difficulty making key decisions when it came to releasing analytical results. Analysis of variance (ANOVA) proved this vital “X” to be statistically significant: the null hypothesis that all types of analytical errors occur at the same rate was rejected because the p-value (0.001) was less than 0.05, so the team could conclude that the number of defects differed significantly among the error categories.
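The structure of that ANOVA can be sketched as a one-way test across error categories. The article does not publish the raw data, so the weekly counts below are hypothetical and purely illustrative; they merely show the shape of the test the team describes:

```python
from scipy.stats import f_oneway

# Hypothetical weekly defect counts per error category (illustrative
# only; the project's raw data are not given in the article).
procedural       = [26, 23, 25, 22, 27, 24, 25, 26, 24, 25, 28, 29]
autoverification = [17, 15, 16, 18, 16, 17, 15, 17, 16, 18, 17, 17]
other            = [7, 6, 8, 7, 6, 7, 8, 6, 7, 7, 6, 7]

stat, p = f_oneway(procedural, autoverification, other)
print(f"F = {stat:.1f}, p = {p:.3g}")

# With clearly separated group means, p falls below 0.05 and the null
# hypothesis (all error categories occur at the same rate) is rejected.
assert p < 0.05
```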
The team utilized tools such as failure mode and effects analysis (FMEA) to break the complicated process into individual steps and evaluate each for potential failure modes, effects, severity, cause, occurrence, controls, and detection (Figure 3), so its members could look at the key drivers, or “Xs,” in the process. Data for each step were analyzed graphically and tested mathematically for statistical significance.
One vital “X” was that the majority of errors occurred on two analyzers: general chemistry and hematology. The team drilled down using the five whys tool and the voice of the customer from the technical staff. By developing an assessment tool, the team identified deficiencies in the staff training program: staff trained by the vendor or by lab supervisors scored 40 points higher on competency tests than peer-trained staff. The conclusion was clear. Ongoing basic training was needed to stress analyzer maintenance, troubleshooting, and recognition of analyzer “flags.”
[FIGURE 5 OMITTED]
To reduce the number of procedural errors in the improve phase, a simplified result-review guideline was provided to technologists as an aid in the critical decision-making process used to validate test results (Figure 4). The autoverification process was modified to capture real-time suspect flags for complete blood count (CBC) orders; results that required review were held in the LIS. The LIS team designed software that enabled real-time monitoring of chemistry-analyzer results, complete with an audio alert for notification of potential problems.
In the control phase, the Six Sigma team implemented a plan that incorporated individual and moving range charts for monitoring corrected results. The control plan enabled the team to determine the method for monitoring frequency, alert flag, action, and specific accountability for each of the key variables in the process. The DPMO for corrected results is now monitored on a monthly basis (see Figure 5). The Six Sigma metric has become part of the lab’s quality-management program. Real-life examples of analyzer printouts and flag results are used to assess staff competency on an ongoing basis.
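Individual and moving-range (I-MR) control limits of the kind described above are conventionally set at the mean plus or minus 2.66 times the average moving range. A sketch with hypothetical monthly DPMO readings (the project’s actual monthly figures appear only in Figure 5):

```python
# Hypothetical monthly DPMO readings for corrected results.
dpmo_by_month = [238, 221, 245, 230, 226, 241, 233, 229]

mean = sum(dpmo_by_month) / len(dpmo_by_month)
# Moving range: absolute difference between consecutive readings.
moving_ranges = [abs(b - a) for a, b in zip(dpmo_by_month, dpmo_by_month[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard individuals-chart limits: mean +/- 2.66 * average moving range.
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

for month, value in enumerate(dpmo_by_month, start=1):
    flag = "ok" if lcl <= value <= ucl else "ALERT"
    print(f"month {month}: DPMO {value} ({flag})")
print(f"center {mean:.0f}, UCL {ucl:.0f}, LCL {lcl:.0f}")
```

A reading outside the limits would trigger the alert-flag and action steps of the control plan; readings inside them reflect ordinary process variation.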
At the end of the control phase, the process went from a 4.8 sigma level to a 5.0 sigma level. Using the chi-square test, the team was able to demonstrate a statistically significant decrease in the number of corrected results. The technical area of the core lab has experienced a 20% growth in volume from the completion of the project in December 2003 to present. The Six Sigma team turned the project over to its process owner in January 2004. Since that point, the department has operated at a sigma level of 5.0 or higher and was at a 5.2 sigma level as of October 2004. This project produced no direct financial impact. As chairman of the Department of Laboratories, Dr. Thomas Sodeman observes, “The error-reduction project was undertaken because it was the right thing to do.”
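The chi-square comparison the team describes can be sketched as a 2x2 test of corrected-result proportions before and after the project. The baseline counts are from the measure phase; the post-project counts here are hypothetical, back-calculated from a DPMO near the project’s 230 target:

```python
from scipy.stats import chi2_contingency

# Before: the measured baseline (May through July 2003).
before_defects, before_total = 585, 1_645_975
# After: hypothetical counts consistent with a DPMO of roughly 230.
after_defects, after_total = 379, 1_645_975

# 2x2 contingency table: [defects, non-defects] per period.
table = [
    [before_defects, before_total - before_defects],
    [after_defects, after_total - after_defects],
]
stat, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {stat:.1f}, p = {p:.3g}")

# A p-value below 0.05 indicates the reduction in corrected results
# is statistically significant, as the team concluded.
assert p < 0.05
```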
The Six Sigma DMAIC methodology has many advantages. It is a rigorous process that engages front-line employees in process redesign. It utilizes data and the voice of the customer to determine the factors that are most critical to quality. Controls and accountability are put in place to ensure the process remains efficient. Finally, this approach provides lab personnel with the tools to take a good process and make it even better.
By Nancy Riebling, MBB, MS, MT(ASCP), and Laurel Tria, MS, SC(ASCP)
Nancy Riebling, MBB, MS, MT(ASCP), email@example.com, is the director of Operational Performance Solutions and a Six Sigma Master Black Belt for the North Shore-LIJ Health System. Laurel Tria, MS, SC(ASCP), Itria@nshs.edu, is project manager for the Core Lab and a Six Sigma Certified Black Belt. The Six Sigma program is part of the Center for Learning & Innovation under the health system’s Corporate University.