
Benchmarking and Performance Monitoring for the Clinical Laboratory
Abstract:
Benchmarking allows you to compete more effectively, obtain and retain resources for your department, enhance its reputation, and improve the satisfaction of your employees. It is one of many tools that a laboratory or healthcare entity can employ to improve performance, focus on customers, and survive and thrive in an environment with limited resources. This chapter discusses the reasons for undertaking benchmarking, reviews the types of benchmarking activities, and describes where to find benchmarks. Most laboratories prefer benchmarks that are specifically tailored to the services the laboratory provides. One of the most difficult areas to benchmark is financial performance. The chapter explains the general approach to the most common types of benchmarking and performance monitoring, those for productivity and cost, and then focuses on specific approaches to performance monitoring and external benchmarking. The simplest and most straightforward approach to evaluating financial performance is to use your own laboratory as the standard and monitor performance over time. When external benchmarking is chosen, the analysis can be more sophisticated and potentially more instructive, but the process is also more complicated. If external benchmarking is performed using a contracted service, the analysis is both assisted and limited by the way the data are processed and presented by the benchmarking service.
Control chart for performance monitoring. The parameter depicted is billable tests, followed over time. Upper and lower control limits are statistical parameters that can be set at any desired value (e.g., ±1, 2, or 3 standard deviations around the mean). The data are plotted by quarter over a period of three years. Note that the laboratory achieved a steady increase in billable tests beginning in the first quarter of 1995. The trend is going in the desired direction, but it is important to understand the reason(s) for the change. Compare with Fig. 48.2 and Fig. 48.3. Data adapted from a report provided to a participant in the College of American Pathologists' (CAP) Laboratory Management Index Program (LMIP).
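As a rough illustration of how control limits of this kind can be derived, the sketch below (Python, standard library only) computes the center line and ±k-standard-deviation limits for a series of quarterly billable-test counts and flags any quarter that falls outside them. All of the counts are hypothetical and simply stand in for the kind of data shown in the figure.

```python
from statistics import mean, stdev

# Hypothetical quarterly billable-test counts covering three years (12 quarters).
billable_tests = [51200, 51900, 52400, 53100, 54800, 56200,
                  57900, 59300, 60100, 61800, 63000, 64400]

k = 2  # control limits set at +/- k standard deviations around the mean
center = mean(billable_tests)
spread = stdev(billable_tests)
ucl, lcl = center + k * spread, center - k * spread

print(f"center line: {center:.0f}, UCL: {ucl:.0f}, LCL: {lcl:.0f}")
for quarter, value in enumerate(billable_tests, start=1):
    if not lcl <= value <= ucl:
        print(f"quarter {quarter}: {value} falls outside the control limits")
```

In practice the center line and limits would usually be fixed from a baseline period rather than recomputed from the same data being monitored; the sketch only shows the arithmetic.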
Control chart for performance monitoring. The parameter depicted in this chart is paid hours, followed over time. Note that a dramatic increase in paid hours was recorded in the first quarter of 1995, after which the number of employees stabilized. Such a large increase in personnel might appear undesirable, but it must be understood in the context of other changes in the operations of the laboratory. Compare with Fig. 48.1 and Fig. 48.3. Data adapted from a report provided to a participant in the CAP's LMIP.
Control chart for performance monitoring. The parameter depicted in this chart, billable tests per FTE (full-time equivalent), incorporates the changes depicted in Fig. 48.1 and Fig. 48.2. Despite the dramatic increase in FTEs in the first quarter of 1995, the productivity of the laboratory steadily increased. Plotting the ratio yields a more complete picture than viewing only the components would allow. Compare with Fig. 48.1 and Fig. 48.2. Data adapted from a report provided to a participant in the CAP's LMIP.
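The ratio plotted in this figure can be reproduced from the two underlying series. A minimal sketch, again with invented numbers: paid hours are converted to FTEs (here assuming 2,080 paid hours per FTE per year, i.e., 520 per quarter, which is a common convention but an assumption, not a figure from the report) and divided into billable tests.

```python
# Hypothetical quarterly data for the two components of the ratio.
billable_tests = [51200, 54800, 57900, 60100]
paid_hours = [15600, 18200, 18200, 18250]

HOURS_PER_FTE_PER_QUARTER = 520  # assumed: 2,080 paid hours per FTE per year

for quarter, (tests, hours) in enumerate(zip(billable_tests, paid_hours), start=1):
    ftes = hours / HOURS_PER_FTE_PER_QUARTER
    print(f"Q{quarter}: {ftes:.1f} FTEs, {tests / ftes:.0f} billable tests per FTE")
```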
Percentile graph of billable tests for an institution for a single time period. The results of the participant laboratory can be compared with (i) all laboratories in the database, (ii) all participating laboratories in the same region, (iii) a group of laboratories selected (from a list of participating laboratories) by the participant, (iv) a “fingerprint cluster” of laboratories that were closest to the participant by statistical analysis (performed by the program), and (v) the two closest matches to the participant. Note that the participant laboratory is an outlier when compared to other laboratories in the program as a whole and in the region, but it falls in a similar range with more closely matched laboratories, whether they were self-selected or chosen by the program. There are obvious advantages to having multiple comparison groups from which to draw conclusions. Data adapted from a report provided to a participant in the CAP's LMIP.
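One way to place a participant laboratory on a percentile graph like this is to compute its percentile rank within each comparison group. The sketch below does so for two hypothetical groups; the participant's volume and all of the group values are invented for illustration and do not come from the LMIP report.

```python
def percentile_rank(value, group):
    """Percentage of laboratories in the group reporting a value at or below ours."""
    return 100.0 * sum(1 for x in group if x <= value) / len(group)

# Hypothetical billable-test volumes for the participant and two comparison groups.
participant = 58000
all_labs = [12000, 18500, 24000, 31000, 39000, 45000, 52000, 61000, 74000, 90000]
matched_peers = [54000, 56500, 57200, 59800, 60500, 62000]

print(f"vs. all laboratories: {percentile_rank(participant, all_labs):.0f}th percentile")
print(f"vs. matched peers:    {percentile_rank(participant, matched_peers):.0f}th percentile")
```

The same laboratory can land at very different percentiles depending on the comparison group, which is the point the figure makes about having several groups to compare against.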
Percentile graph of blood expense for an institution for a single time period. Comparisons are as described in Fig. 48.4. The expenses for the participant institution are considerably higher than those for the program as a whole and for laboratories in the region, but they are in line with the laboratories in the comparison groups that are better matched. The importance of multiple comparisons and valid comparison groups is demonstrated once again. Data adapted from a report provided to a participant in the CAP's LMIP.
Graphical depiction of laboratory performance. Cost is plotted on the y-axis (manageable expense per billable test); productivity is plotted on the x-axis (billable tests per FTE). The position of the "best performing" laboratory in the group is in the lower-right quadrant (greatest productivity and lowest cost). The upper-left quadrant (lowest productivity and highest cost) is the least desirable position. The center of the graph is "middle of the road." The participant laboratory is represented by a black diamond, which is positioned in the lower-right quadrant but relatively close to the center point. Thus, the performance is respectable, but there is room for improvement. Such a graphical depiction transforms a morass of potentially confusing data and makes it much easier to see the big picture. Data adapted from a report provided to a participant in the CAP's LMIP.
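A cost-versus-productivity quadrant plot of this kind can be drawn with a few lines of matplotlib (assumed to be installed). The sketch below uses hypothetical figures for the comparison group and the participant, and it places the quadrant boundaries at the group medians; the report's actual centering rule is not specified here, so that choice is an assumption.

```python
import matplotlib.pyplot as plt
from statistics import median

# Hypothetical comparison group: (billable tests per FTE, manageable expense per billable test).
group = [(3200, 9.80), (3600, 8.90), (4100, 8.40), (4500, 7.60), (5000, 7.10)]
participant = (4300, 7.90)

x_mid = median(x for x, _ in group)  # quadrant boundaries drawn at the group medians
y_mid = median(y for _, y in group)

plt.scatter([x for x, _ in group], [y for _, y in group], label="comparison group")
plt.scatter(*participant, marker="D", color="black", label="participant laboratory")
plt.axvline(x_mid, linestyle="--")
plt.axhline(y_mid, linestyle="--")
plt.xlabel("Billable tests per FTE (productivity)")
plt.ylabel("Manageable expense per billable test (cost)")
plt.legend()
plt.show()
```

With cost on the y-axis and productivity on the x-axis, points toward the lower right are the desirable ones, matching the layout described in the caption.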
Basic steps in the benchmarking process a
a Adapted from reference 3.
Variables that are frequently assessed in benchmarking programs
a Items in parentheses are other, similar parameters; h = hours.
Commonly used ratios in benchmarking
Approaches to assessment of laboratory financial performance
a These activities involve external comparisons, but they are performed at the local level. They are discussed in the text under performance monitoring.
Internal benchmarking with “homemade” standard by reference to published information on publicly traded commercial laboratories a
a Data courtesy of Thomas Wadsworth.
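As a concrete illustration of the "homemade" standard idea named in the table title, the sketch below compares a laboratory's own cost per billable test with a reference figure taken from a commercial laboratory's published financial data. Every number is hypothetical and stands in for figures you would pull from your own general ledger and from the commercial laboratory's public filings.

```python
# Hypothetical internal figures for one quarter.
internal_expense = 1_450_000.0  # manageable expense, in dollars
internal_tests = 165_000        # billable tests

# Hypothetical reference figure derived from a publicly traded commercial
# laboratory's published reports (a stand-in value, not an actual published number).
reference_cost_per_test = 7.25

internal_cost_per_test = internal_expense / internal_tests
ratio = internal_cost_per_test / reference_cost_per_test

print(f"internal cost per billable test: ${internal_cost_per_test:.2f}")
print(f"published commercial benchmark:  ${reference_cost_per_test:.2f}")
print(f"internal cost as a fraction of the benchmark: {ratio:.2f}")
```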