Using next generation electronic medical records for laboratory quality monitoring
Perspective of Clinical Database in Laboratory Medicine Research Column


Tze Ping Loh1,2, Tony Badrick3

1Department of Laboratory Medicine, National University Hospital, Singapore 119074, Singapore; 2Biomedical Institute for Global Health Research and Technology, National University of Singapore, Singapore 119077, Singapore; 3Royal College of Pathologists of Australia Quality Assurance Programs, St Leonards, NSW, Australia

Correspondence to: Tze Ping Loh. Department of Laboratory Medicine, National University Hospital, 5 Lower Kent Ridge Road, Singapore 119074, Singapore. Email: tploh@hotmail.com.

Abstract: The increasing commoditization of laboratory practice risks favouring speed and cost at the expense of quality. Recent retroactive reagent recalls and literature reports of failures to detect significant analytical deviations suggest that traditional quality control measures are increasingly insufficient to maintain the quality required in resource-restricted, high-throughput and complex laboratory testing environments. Newer, patient result-based quality techniques have shown good capability in detecting analytical errors. However, a limitation of such techniques is the difficulty in determining whether a breach of a control limit is due to an analytical error or a shift in the underlying population being monitored. The next generation electronic medical record is an information technology system that brings together different silos of healthcare databases. It has the potential to significantly alter the way the laboratory practises quality control by matching patient result-based laboratory observations with clinical practice and findings. In this way, any significantly abnormal laboratory finding can be quickly verified against clinical information. It is possible that traditional internal quality control practice will be replaced by these techniques, although more research is required before this becomes commonplace.

Keywords: Quality control; laboratory management; electronic medical records; statistics; quality assurance


Received: 15 July 2017; Accepted: 07 August 2017; Published: 21 August 2017.

doi: 10.21037/jlpm.2017.08.06


There is a saying that one can only have two of the following three features when it comes to a service or product: cheap, fast or good. The increasing commoditization of laboratory services often emphasizes the former two, at the expense of quality.

Clinical laboratories produce results that physicians rely upon to make diagnostic and management decisions. The results generated by the laboratories should meet certain quality specifications to be clinically fit for purpose. At present, this is monitored through a combination of internal quality control and external quality assurance systems. These complementary quality systems rely upon the periodic testing of a sample with a known value, looking for significant deviation from the known/target value against predefined control limits.
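The core of this periodic check can be illustrated with a minimal sketch. The function, rule and figures below are hypothetical and do not represent any laboratory's actual protocol; they simply show a QC result being compared against control limits expressed in standard deviations from the target.

```python
# Minimal sketch (hypothetical values): a QC sample with a known target is
# measured periodically, and each measurement is compared against control
# limits derived from the method's known imprecision (SD).

def qc_flag(measured, target, sd, limit=2.0):
    """Return True if the QC result deviates from the target by more than
    `limit` standard deviations (a simple 2-SD warning rule)."""
    z = (measured - target) / sd
    return abs(z) > limit

# Example: glucose QC material with target 5.0 mmol/L and SD 0.1 mmol/L
print(qc_flag(5.15, target=5.0, sd=0.1))  # within 2 SD -> False
print(qc_flag(5.25, target=5.0, sd=0.1))  # beyond 2 SD -> True
```

Real internal quality control schemes combine several such rules (warning and rejection limits, runs rules), but the principle of comparing a known-value sample against control limits is the same.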

The internal quality control system was developed at a time when laboratory testing was done manually and in small batches. Tests were often developed in-house, and the laboratory practitioner usually had a good analytical understanding of the methodology employed. This ensured that laboratory practitioners always looked at and interpreted the raw analytical data, which helped detect any significant deviation in analytical performance before incorrect patient results were released. The relatively leisurely pace also gave laboratory practitioners ample time to troubleshoot any unusual analytical performance that was detected. Given the low numbers of patients being tested at that time, an internal quality control sample tested at the beginning, the end or both ends of a batch was likely to detect significant departures from the 'in control' situation.

By contrast, modern day laboratory practice is driven by highly automated instruments that can perform multiple tests simultaneously at very high throughputs, which can run into the thousands of patient samples per hour. Turnaround time from sample receipt in the laboratory to reported result on a doctor’s desktop is often set to a short few hours. Furthermore, in an effort to simplify and automate laboratory testing, most of the analytical data generated by the instruments are analysed and reviewed in middleware using electronic rules and algorithms. In other words, the instruments represent convenient ‘black box’ solutions where laboratory practitioners have little knowledge or control over the analytical process or data interpretation. These factors have greatly eroded the technical ability and independence of laboratory practitioners to detect and correct analytical deviations and can lead to erroneous patient results being released.

Despite the changes in laboratory practice, the modern day laboratory mainly still employs the historical internal quality control practices, where a quality control sample is tested at the beginning of a run or at fixed intervals throughout the day. Internal quality control practices cannot be the same for a laboratory that tests 300 samples per day and another that tests 10,000 per day (1). The continued use of traditional internal quality control carries significant clinical risk of missed error detection, with the impact greatly amplified by the high test volumes in modern laboratories. It is unsurprising that large-scale laboratory errors are still being reported even in laboratories that employ 'state of the art' internal quality control systems (2-4). It has been shown that the historical internal quality control practices lack sufficient power to detect significant errors against the increasingly stringent quality specifications demanded by medical requirements (4,5).

Another casualty of modern, electronically driven medical practice is the reduced interaction between the laboratory practitioner and the clinical practitioners, particularly with the advent of electronic test ordering systems. The previous professional courtesy of providing clinical details along with laboratory requests is fast becoming a thing of the past. It is now more common to receive laboratory requests without clinical details. This imposes significant challenges on laboratory practitioners seeking to interpret laboratory results or detect a trend in the right clinical context; without that context, a laboratory result is just a number without value.

It is clear that laboratory practices need to change. In particular, there is a need to adopt quality systems that continuously monitor the analytical performance of instruments. Some of these techniques include the moving sum of outliers (4), moving average (5-7), CUSUM-logistic regression (8), and average of delta (9). These techniques continuously calculate statistics based on individual patient results to monitor trends in the population mean or SD that may signify significant shifts in all reported results and lead to potential misclassification of patients.
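As a hedged illustration of the patient result-based approach, the sketch below monitors a running mean of consecutive patient results against control limits. The window size, analyte and limits are illustrative assumptions, not the published parameters of the algorithms in references (5-7).

```python
# Illustrative sketch of a patient-result moving average monitor; the
# window size and control limits here are assumptions for demonstration.
from collections import deque

class MovingAverageQC:
    def __init__(self, window, lower, upper):
        self.window = window                    # patient results per window
        self.lower, self.upper = lower, upper   # control limits for the mean
        self.buf = deque(maxlen=window)

    def add(self, result):
        """Add one patient result; return the window mean if it breaches
        a control limit (possible analytical shift), else None."""
        self.buf.append(result)
        if len(self.buf) == self.window:
            mean = sum(self.buf) / self.window
            if not (self.lower <= mean <= self.upper):
                return mean
        return None

# Sodium (mmol/L): population mean near 140; flag if the window mean of
# consecutive patient results drifts outside 138-142 (illustrative limits)
qc = MovingAverageQC(window=5, lower=138.0, upper=142.0)
for x in [139, 141, 140, 138, 142, 143, 144, 145, 144, 146]:
    alarm = qc.add(x)
    if alarm is not None:
        print(f"possible analytical shift: window mean = {alarm:.1f}")
```

The appeal of such monitors is that every patient result contributes to error detection between conventional QC events, at no extra reagent cost.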

The main technical difficulty associated with these techniques is the underlying assumption that the patient population being tested is relatively stable, which may not always be true. This makes it challenging to identify whether a shift in distribution is caused by a change in the patients being measured, for example more diabetic patients coming from a particular clinic on a certain day of the week, or by a true change in the analytical performance of the glucose method. As aptly put by some authors, the objective is to monitor the analytical performance of the method, not the patients being tested that day (10). While some methods can increase the effectiveness of these techniques, such as selecting a 'normal' population, applying truncation limits to the data (i.e., removing patients with abnormal results) (5), or using simulated annealing algorithms (10), they do not completely exclude the possibility of an underlying patient population shift. Furthermore, certain tests are not performed in patients from a 'normal' population (e.g., tumour markers, therapeutic drug monitoring, cardiac markers, endocrine hormones), thereby challenging the above assumption.
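Truncation limits, one of the mitigations mentioned above, can be sketched as follows. The analyte and band chosen (glucose, 3.0-8.0 mmol/L) are illustrative assumptions; published implementations derive their truncation limits empirically.

```python
# Sketch of truncation limits: patient results outside a plausible "normal"
# band are excluded before the mean is computed, so grossly pathological
# results do not dominate the monitor. The band here is an assumption.

def truncated_mean(results, lo=3.0, hi=8.0):
    """Mean of the results within the truncation limits; None if all excluded."""
    kept = [r for r in results if lo <= r <= hi]
    return sum(kept) / len(kept) if kept else None

# Two grossly abnormal glucose results (22.5 and 1.9 mmol/L) are excluded
day_results = [5.1, 4.8, 22.5, 5.6, 6.0, 1.9, 5.3]
print(truncated_mean(day_results))
```

Note the trade-off this encodes: truncation stabilizes the monitored statistic, but it also discards exactly the results that would reveal a population shift, which is why the ambiguity discussed above persists.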

The next generation electronic medical record promises to bring together different clinical databases that are traditionally organized into silos (11,12). This opens up the possibility for the laboratory to match its laboratory trends with clinical information. For example, a laboratory employing the moving sum of outliers technique may detect an increased number of patients with elevated insulin-like growth factor-1 (IGF-1) concentrations. By extracting the contemporaneous clinical information, it can be determined that there has been no increase in the diagnosis of acromegaly, which is a relatively rare disorder (prevalence: 50–60 per million population; incidence: 3–4 per million per year). Hence, the laboratory can be confident that the observed shift in the number of patients with elevated IGF-1 levels is unlikely to be genuine, and initiate a detailed investigation looking for analytical errors. It is not difficult to imagine the same scenario for therapeutic drug monitoring, where the trend in drug concentrations can be matched with prescription patterns. It is even more tantalizing to think about the potential power of such tools when laboratories and health systems share the same information technology platform.
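The verification step in the IGF-1 scenario can be sketched as a simple probability check: given the historical rate of elevated results drawn from the linked clinical database, how surprising is today's count? All figures below (the run size, counts and baseline rate) are hypothetical.

```python
# Illustrative sketch of the IGF-1 scenario: compare the observed number of
# elevated results against the rate expected from historical/clinical data.
# The counts and the baseline rate are hypothetical assumptions.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    elevated results if the underlying population rate were unchanged."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_tested, n_elevated = 200, 18      # today's run (hypothetical)
historical_rate = 0.04              # long-run fraction of elevated IGF-1

p = binom_tail(n_tested, n_elevated, historical_rate)
if p < 0.01:
    print(f"elevated-result rate improbable under baseline (p={p:.4f}); "
          "clinical records show no rise in acromegaly, so suspect the assay")
```

If the clinical database confirmed a genuine change in the tested population (for example, a new endocrine clinic sending samples), the same alarm would instead be dismissed as a population shift rather than an analytical error.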

It is possible that internal quality control systems may become less relevant when such practice becomes commonplace. The role of the external quality assurance program will then become the periodic quality spot check of the analytical systems, provided the programs have a matrix matching the clinical samples, target values assigned by reference methods, are administered frequently enough, and return results in a timely manner.

These new tools bring the exciting possibility of a new laboratory practice that is responsive (fast) and leverages existing data (cheap) to improve the quality system (good).


Acknowledgements

None.


Footnote

Conflicts of Interest: The authors have no conflicts of interest to declare.


References

  1. Yundt-Pacheco J, Parvin CA. Validating the performance of QC procedures. Clin Lab Med 2013;33:75-88. [Crossref] [PubMed]
  2. Algeciras-Schimnich A, Bruns DE, Boyd JC, et al. Failure of current laboratory protocols to detect lot-to-lot reagent differences: findings and possible solutions. Clin Chem 2013;59:1187-94. [Crossref] [PubMed]
  3. Loh TP, Lee LC, Sethi SK, et al. Clinical consequences of erroneous laboratory results that went unnoticed for 10 days. J Clin Pathol 2013;66:260-1. [Crossref] [PubMed]
  4. Liu J, Tan CH, Badrick T, et al. Moving sum of number of positive patient result as a quality control tool. Clin Chem Lab Med 2017. [Epub ahead of print]. [Crossref] [PubMed]
  5. Liu J, Tan CH, Loh TP, et al. Verification of out-of-control situations detected by "average of normal" approach. Clin Biochem 2016;49:1248-53. [Crossref] [PubMed]
  6. Hayashi S, Ichihara K, Kanakura Y, et al. A new quality control method based on a moving average of "latent reference values" selected from patients' daily test results. Rinsho Byori 2004;52:204-11. [PubMed]
  7. Usta M, Aral H, Mete Çilingirtürk A, et al. Assessment of average of normals (AON) procedure for outlier-free datasets including qualitative values below limit of detection (LoD): an application within tumor markers such as CA 15-3, CA 125, and CA 19-9. Scand J Clin Lab Invest 2016;76:553-60. [Crossref] [PubMed]
  8. Sampson ML, Gounden V, van Deventer HE, et al. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results. Clin Biochem 2016;49:201-7. [Crossref] [PubMed]
  9. Jones GR. Average of delta: a new quality control tool for clinical laboratories. Ann Clin Biochem 2016;53:133-40. [Crossref] [PubMed]
  10. Ng D, Polito FA, Cervinski MA. Optimization of a Moving Averages Program Using a Simulated Annealing Algorithm: The Goal is to Monitor the Process Not the Patients. Clin Chem 2016;62:1361-71. [Crossref] [PubMed]
  11. Tolan NV, Parnas ML, Baudhuin LM, et al. "Big Data" in Laboratory Medicine. Clin Chem 2015;61:1433-40. [Crossref] [PubMed]
  12. Loh TP. Knowledge is power: harnessing clinical database for better informed laboratory medicine practice. J Lab Precis Med 2017;2:44. [Crossref]
Cite this article as: Loh TP, Badrick T. Using next generation electronic medical records for laboratory quality monitoring. J Lab Precis Med 2017;2:61.
