
Evaluation Of Off-The-Shelf And Medical Grade Monitor Performance In Caries Detection

Suvendra Vijayan, BDS, MS, MPH, Assistant Professor, University of Pittsburgh School of Dental Medicine
suvendrav@gmail.com   phone: 319-335-9656   fax: 319-335-7351
Contribution: Conception of the idea of the work, analysis, verification of data analysis, drafting, editing, and approval of final draft.
Joshua J Orgill, DDS, Oral and Maxillofacial Radiology Resident, The University of Iowa College of Dentistry
joshua-orgill@uiowa.edu
Contribution: Evaluator, acquisition, analysis, and interpretation of the data. Drafting the manuscript and approving final manuscript.
Sindhura Anamali, BDS, MS, Assistant Professor, The University of Iowa College of Dentistry
sindhura-anamali@uiowa.edu   phone: 319-335-9656   fax: 319-335-7351
Contribution: Analysis and interpretation of the data, revising and editing drafts, and approving final manuscript.
Juan P. Castro, DDS, Oral and Maxillofacial Radiology Resident, The University of Iowa College of Dentistry
juan-castrocuellar@uiowa.edu   phone: 319-335-9656   fax: 319-335-7351
Contribution: Evaluator, acquisition, analysis, and interpretation of the data. Drafting the manuscript and approving final manuscript.
Daniah Alhazmi, BDS, MS, Oral and Maxillofacial Radiology Fellow, The University of Iowa College of Dentistry
daniah-al-hazmi@uiowa.edu   phone: 319-335-9656   fax: 319-335-7351
Contribution: Evaluator, acquisition, analysis, and interpretation of the data. Drafting the manuscript and approving final manuscript.
Veeratrishul Allareddy, BDS, MS, Professor, The University of Iowa College of Dentistry
veeratrishul-allareddy@uiowa.edu   phone: 319-335-9656   fax: 319-335-7351
Contribution: Conception of the idea and design of the work, acquisition, analysis, and interpretation of the data. Revising, editing, and approval of final draft.

Address for correspondence

Joshua J Orgill, DDS
Email : joshua-orgill@uiowa.edu
Address : The University of Iowa College of Dentistry 801 Newton Road Iowa City, IA 52242-1010

Published on : 24 Jan 2019

Over the last 20 years, dentistry has rapidly evolved from film-based imaging to digital radiography, which is now a staple of diagnosis and treatment planning. Digital imaging takes two forms: computed radiography, which uses storage phosphor plates, and direct digital radiography, which uses either charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensors. Once captured with either technology, images are viewed on monitors that, although critical to diagnosis, often lack the quality needed to optimally diagnose the disease in question. In dentistry, consumer off-the-shelf displays are most commonly used to view acquired images in bright light, whereas medical radiology almost exclusively uses auto-calibrating, high-resolution, medical grade displays.

In almost all dental schools in North America, digital images are viewed under optimal light conditions in an interpretation room, whereas in the average dental practitioner's office, off-the-shelf displays are mostly used, often in bright light. In recent years, many North American dental schools have gained access to medical grade displays, and more are exploring them as viable options for everyday radiological diagnosis and interpretation. Ideally, images viewed in bright light would first receive a primary diagnosis in an optimally lit room, but this practice is seldom followed, since most dental procedures in private practice require adequate, bright light.

Multiple studies have shown that viewing images under optimal conditions has a significant positive impact on image evaluation (1-5). Most imaging displays are liquid crystal displays (LCD); newer options include light-emitting diode (LED)-backlit displays. Displays may be monochrome or color. Rapid advancements in mobile devices, including portable computers and tablets, now offer viewing resolutions comparable to an off-the-shelf consumer desktop display in a portable medium, and small pilot studies have shown reliable detection of anatomical and pathological entities on them, whether on artificially induced caries (6) or anatomical landmarks (7). Several past studies have examined the efficacy of displays in caries detection (8-11). Although many found no significant effect of display type, they used small samples, artificial caries, or interproximal surfaces on patients that could not be clinically confirmed for accuracy. Some of these studies (2, 6) showed a trend toward better performance by medical grade displays, although the difference was not statistically significant.

This study aims to assess whether medical grade displays perform better than off-the-shelf monitors in the detection of confirmed caries on extracted teeth. The null hypothesis was that there is no significant difference between medical grade and off-the-shelf displays.

MATERIALS AND METHODS:

Sixty-four teeth (31 non-carious premolars, 31 carious premolars, and 2 molars), extracted for therapeutic reasons and collected from The University of Iowa College of Dentistry and Dental Clinics, were used in this study. The teeth were initially examined by an oral and maxillofacial radiology resident under bright light to distinguish carious from non-carious teeth, then independently verified by a second oral and maxillofacial radiology resident under bright light with loupes and probes. The two molars and one non-carious premolar were mounted in a wax model to mimic the arrangement of teeth in the mouth. The remaining 30 non-carious and 31 carious premolars were mounted between the molar and premolar, one at a time, and imaged. Care was taken to ensure appropriate contact between the molar and premolars before image capture.

The teeth were imaged on Dexis sensors (DEXIS, LLC, Hatfield, PA). Although care was taken to ensure contact between the molar and premolars, the contact was occasionally suboptimal; in these cases, the teeth were rearranged and imaged again. The exposure parameters (7 mA, 0.08 s, 65 kVp) were standardized and held constant for all images acquired.

The wax setup was mounted on a fixed plexiglass platform with fixed source-to-object, source-to-sensor, and object-to-sensor distances. The platform also has a guide ring to orient the position indicating device (PID) of the x-ray machine in a fixed, reproducible position, which standardized the position of all teeth for the radiographs. All radiographs were stored in a research database in the PACS system for future retrieval as deemed appropriate.

The radiographs were downloaded as DICOM files from the PACS system and randomized. A list of 60 random numbers was generated and assigned as filenames for the DICOM images, and a separate spreadsheet recording each filename and its caries status was maintained. After the randomized filenames were assigned, the files were sorted in ascending order and further randomized.
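The renaming-and-key step above can be sketched as follows. This is a hypothetical illustration only: the paths, the `randomize_dicoms` function name, and the CSV key format are assumptions, not the study's actual tooling.

```python
import csv
import random
import shutil
from pathlib import Path

def randomize_dicoms(src, dst, key_csv, seed=None):
    """Copy each DICOM file under a random numeric filename and write a
    key spreadsheet mapping the randomized name back to the original,
    so the caries status can be looked up after blinded scoring."""
    rng = random.Random(seed)
    dst.mkdir(parents=True, exist_ok=True)
    files = sorted(src.glob("*.dcm"))
    # sample() guarantees the random codes are unique
    codes = rng.sample(range(100, 1000), k=len(files))
    with open(key_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["randomized_name", "original_name"])
        for f, code in zip(files, codes):
            new_name = f"{code}.dcm"
            shutil.copy(f, dst / new_name)
            writer.writerow([new_name, f.name])
```

Keeping the key in a separate file, away from the evaluators, is what preserves masking while still allowing the scores to be matched to the confirmed caries status afterwards.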

Two independent, calibrated, and masked oral and maxillofacial radiology residents evaluated these images, scoring the presence or absence of caries on a five-point Likert scale on four different monitors under standardized ambient light conditions: a BARCO Nio Color 3MP DE medical grade monitor, a BARCO Nio Color 2MP monitor (a prototype at the time of the study), a DELL UltraSharp 2MP monitor, and a WIDE 5MP monochrome monitor. Each monitor used for the study was connected to the same graphics card via the same display port to keep other factors constant. The ambient light was measured with a Sekonic L-478D-U light meter and a Dr. Meter digital illuminance meter. All images were viewed in Adobe Photoshop CC 2015 (Adobe, California, USA). The same procedure was repeated after two weeks and recorded. Inter-observer and intra-observer kappa statistics were calculated and interpreted. Statistical analysis was performed in SPSS 24 (IBM, Armonk, NY), with the level of statistical significance set at p < 0.05.

STATISTICAL ANALYSIS:

As a measure of diagnostic accuracy, the area under the ROC curve (AUC) was calculated, taking into account the diagnostic confidence of the observers. The statistical significance of differences between AUC values was evaluated with the OR-DBM MRMC 2.5 package (12). This program is based on methods initially proposed by Berbaum, Dorfman, and Metz (13) and Obuchowski and Rockette (14), later unified and improved by Hillis and colleagues (15-17). The sensitivity and specificity reached with each monitor were also calculated. Inter-rater reliability, the extent of agreement between observers for each monitor, was determined by calculating the percentage of agreement and Cohen's kappa statistic (18). The percentage of agreement was calculated in two ways: first taking into account the levels of confidence (on the 5-point Likert scale), and second by reducing the answer options to a binary scale (present or not present).
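The core quantities above can be sketched in a few lines: a nonparametric AUC computed directly from ordinal confidence scores, the Likert-to-binary collapse, percentage agreement, and Cohen's kappa. This is an illustrative sketch only; the binary cut-point, the toy scores, and all function names are assumptions, and the study's actual analysis used OR-DBM MRMC and SPSS, not this code.

```python
from collections import Counter

def auc_from_confidence(scores, truth):
    """Nonparametric (Mann-Whitney) AUC: the probability that a carious
    image receives a higher confidence score than a non-carious one,
    counting ties as half."""
    pos = [s for s, t in zip(scores, truth) if t == 1]
    neg = [s for s, t in zip(scores, truth) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def dichotomize(scores, threshold=4):
    """Collapse 5-point Likert scores to present (1) / absent (0).
    The cut-point of 4 is an illustrative assumption."""
    return [1 if s >= threshold else 0 for s in scores]

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the chance
    agreement expected from each rater's marginal rating frequencies."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Toy data: two observers scoring eight images on the 5-point scale;
# truth marks which images actually contain caries.
obs1 = [1, 2, 5, 4, 3, 5, 1, 4]
obs2 = [1, 3, 5, 5, 2, 3, 2, 4]
truth = [0, 0, 1, 1, 0, 1, 0, 1]
b1, b2 = dichotomize(obs1), dichotomize(obs2)
print(auc_from_confidence(obs1, truth))   # 1.0 for this toy observer
print(percent_agreement(b1, b2))          # 0.875
print(cohens_kappa(b1, b2))               # 0.75
```

Note how kappa (0.75) sits below raw agreement (0.875): the correction removes the agreement the two raters would reach by chance alone, which is why the Results report both.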

RESULTS:

The accuracy of the displays was measured by area under the ROC curve (Table 1). For observer 1, the AUC for the Barco 3MP monitor was 0.69 and for the Wide 5MP monitor 0.71; the Barco 2MP and Dell monitors came in at 0.65 and 0.63, respectively. For observer 2, the AUC for both the Barco 3MP and the Wide 5MP monitors was 0.66; the Barco 2MP and Dell monitors came in at 0.60 and 0.59, respectively. For observer 3, the AUC for the Barco 3MP monitor was 0.77 and for the Wide 5MP monitor 0.65; the Barco 2MP and Dell monitors came in at 0.61 and 0.62, respectively. The Barco 3MP monitor performed significantly better than the prototype Barco 2MP monitor and the Dell UltraSharp monitor, with differences of 14.7% and 15.5%, respectively.

The sensitivity reached with the Barco 3MP display was higher than with the other displays for observers 1 and 3; for observer 2 it was on par with the Barco 2MP and Wide 5MP displays. For observer 1, the Barco 3MP display had a sensitivity of 0.47, the Barco 2MP and Wide 5MP displays 0.38, and the Dell display the lowest at 0.31. For observer 2, the Barco 3MP, Barco 2MP, and Wide 5MP displays all had a sensitivity of 0.28, and the Dell display 0.19. The sensitivity scores for observer 3 were 0.59 for the Barco 3MP, 0.41 for the Wide 5MP, 0.38 for the Barco 2MP, and 0.25 for the Dell.

Overall sensitivity was highest for the Barco 3MP display, exceeding the Wide 5MP, the Barco 2MP prototype, and the Dell UltraSharp by 28.6%, 32.4%, and 80.0%, respectively. Sensitivity was much better on the medical grade monitors than on the off-the-shelf monitor.

Specificity was similar for observers 1 and 2 on the Barco 2MP and Wide 5MP displays, at 0.93. For observer 3, the Barco 2MP and Wide 5MP displays had specificities of 0.80 and 0.83, respectively. The specificity of the Barco 3MP was 0.93 for observer 1, 0.97 for observer 2, and 0.83 for observer 3. For the Dell display, specificity was 0.97 for observers 1 and 3 and 0.93 for observer 2. Overall, the Dell display had the highest specificity, followed by the Barco 3MP, the Wide 5MP, and the Barco 2MP displays.

The overall interobserver agreement was highest for the Barco 2MP and Dell displays. Using the 5-point response, the interobserver kappa was 0.37 for the Barco 2MP and 0.34 for the Dell; the Barco 3MP had a value of 0.26 and the Wide 5MP 0.23. Percentage agreement was 0.66 for the Barco 2MP, 0.52 for the Barco 3MP, 0.59 for the Dell display, and 0.53 for the Wide 5MP.

Overall interobserver agreement was also calculated on a binary scale. The kappa between the observers was 0.54 for the Barco 2MP, 0.36 for the Barco 3MP, 0.56 for the Dell, and 0.32 for the Wide 5MP. Percentage agreement on the binary scale was 0.84 for the Barco 2MP, 0.76 for the Barco 3MP, 0.88 for the Dell display, and 0.77 for the Wide 5MP.

Overall, the Dell display had good agreement between the two observers, followed by the Barco 2MP, the Wide 5MP, and the Barco 3MP displays. The Barco 3MP had the least favorable interobserver agreement on both scales.

DISCUSSION:

Clinical examination followed by intraoral radiography is essential for the diagnosis of caries. Despite advances in technology, accurately identifying caries can often be an inconclusive process, yet it is necessary both for determining the presence of caries and as an indicator of caries risk and progression.

The diagnostician's resources for viewing radiographs have evolved continuously with the advent of digital imaging in dentistry over the last three decades. Medical grade imaging monitors are of high quality, and their manufacturing process is demanding and often expensive. A new consumer grade monitor can be purchased for anywhere from $100 to over $2,500, depending on the brand, manufacturer, and factors such as monitor size, pixel size, and brightness. Current prices for medical grade displays range from $850 to over $36,000, likewise depending on the brand, manufacturer, and other factors. So although medical grade monitors are generally more expensive, there is now some overlap in cost, attributable in part to the presence of multiple manufacturers and improved manufacturing. To justify the purchase of such expensive equipment, it is important to demonstrate its value and impact in the assessment of disease, since the difference in cost relative to an off-the-shelf monitor is significant.

Most, if not all, medical grade monitors support auto-calibration to provide optimal screen contrast in a given light setting, a feature often unavailable on consumer grade monitors. Medical grade monitors are assumed to be superior to off-the-shelf monitors when image resolution requirements are very high, as in mammography, where early detection of disease can significantly affect mortality and morbidity.

Unlike other studies, which are inherently limited in their ability to draw direct clinical correlations, this study has the advantage that the caries were in natural extracted teeth positioned to mimic natural tooth arrangement, providing a more accurate representation of clinically present caries.

To further evaluate the clinical relevance of medical grade displays for diagnostic imaging, it will be necessary to evaluate observer performance in a clinically comparable, brightly lit environment that replicates the current viewing habits of practitioners. The anticipated limitations of off-the-shelf displays are their lack of an auto DICOM calibration function and their relatively poor performance in identifying the disease process compared with medical grade monitors.

In this study we found that overall sensitivity was highest for the Barco Nio Color 3MP monitor, followed by the Wide 5MP and the Barco 2MP monitors. For all observers, the off-the-shelf display had the lowest sensitivity.

The overall interobserver agreement was highest between the off-the-shelf monitor and the Barco 2MP monitor. The Barco 3MP, though it had better sensitivity, had lower specificity. There was no statistically significant difference in the observers' performance, and performance was unrelated to differences in their experience in the residency program. The observers felt that the greater detail and definition of the images viewed on the Barco 3MP often made them less definitive in determining the presence or absence of caries; this may also explain why the lower-resolution monitors had better interobserver agreement.

CONCLUSION:

Based on the observations in this study, higher-resolution monitors appear to convey more information and yield better accuracy in identifying carious lesions. One of the medical grade displays (the Barco 3MP) showed statistically significantly higher accuracy than the consumer display, while the differences were not statistically significant for the other two medical grade displays. A larger study, with the presence or absence of disease definitively confirmed by advanced imaging such as micro-computed tomography, may allow better evaluation of medical grade displays in dentistry.

References

  1. Hellen-Halme K, Petersson A, Warfvinge G, Nilsson M. Effect of ambient light and monitor brightness and contrast settings on the detection of approximal caries in digital radiographs: an in vitro study. Dentomaxillofac Radiol. 2008;37(7):380-4.
  2. Kallio-Pulkkinen S, Haapea M, Liukkonen E, Huumonen S, Tervonen O, Nieminen MT. Comparison of consumer grade, tablet and 6MP-displays: observer performance in detection of anatomical and pathological structures in panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol. 2014;118(1):135-41.
  3. Kallio-Pulkkinen S, Haapea M, Liukkonen E, Huumonen S, Tervonen O, Nieminen MT. Comparison between DICOM-calibrated and uncalibrated consumer grade and 6-MP displays under different lighting conditions in panoramic radiography. Dentomaxillofac Radiol. 2015;44(5):20140365.
  4. Kallio-Pulkkinen S, Huumonen S, Haapea M, Liukkonen E, Sipola A, Tervonen O, et al. Effect of display type, DICOM calibration and room illuminance in bitewing radiographs. Dentomaxillofac Radiol. 2016;45(1):20150129.
  5. Samei E, Badano A, Chakraborty D, Compton K, Cornelius C, Corrigan K, et al. Assessment of display performance for medical imaging systems: executive summary of AAPM TG18 report. Med Phys. 2005;32(4):1205-25.
  6. Countryman SC, Sousa Melo SL, Belem MDF, Haiter-Neto F, Vargas MA, Allareddy V. Performance of 5 different displays in the detection of artificial incipient and recurrent caries-like lesions. Oral Surg Oral Med Oral Pathol Oral Radiol. 2018;125(2):182-91.
  7. Tadinada A, Mahdian M, Sheth S, Chandhoke TK, Gopalakrishna A, Potluri A, et al. The reliability of tablet computers in depicting maxillofacial radiographic landmarks. Imaging Sci Dent. 2015;45(3):175-80.
  8. Barbosa VL, Gonzaga AK, Pontual AA, Bento PM, Ramos-Perez FM, Filgueira PT, et al. The influence of display modalities on proximal caries detection and treatment decision. Acta odontologica latinoamericana : AOL. 2015;28(2):95-102.
  9. Hellén-Halme K, Nilsson M, Petersson A. Effect of monitors on approximal caries detection in digital radiographs-standard versus precalibrated DICOM part 14 displays: An in vitro study. Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology and Endodontology. 2009;107(5):716-20.
  10. Isidor S, Faaborg-Andersen M, Hintze H, Kirkevang L-L, Frydenberg M, Haiter-Neto F, et al. Effect of monitor display on detection of approximal caries lesions in digital radiographs. Dentomaxillofacial Radiology. 2009;38(8):537-41.
  11. Ludlow JB, Abreu Jr M. Performance of film, desktop monitor and laptop displays in caries detection. Dentomaxillofacial Radiology. 1999;28(1):26-30.
  12. Schartz KM, Hillis SL, Pesce LL, Berbaum KS. Medical Image Perception Laboratory: OR-DBM MRMC Installation. The University of Iowa. 2006.
  13. Dorfman DD, Berbaum KS, Metz CE. Receiver operating characteristic rating analysis. Generalization to the population of readers and patients with the jackknife method. Invest Radiol. 1992;27(9):723-31.
  14. Obuchowski NA, Rockette HE. Hypothesis testing of diagnostic accuracy for multiple readers and multiple tests: an ANOVA approach with dependent observations. Communications in Statistics-Simulation and Computation. 1995;24:285-308.
  15. Hillis SL, Obuchowski NA, Schartz KM, Berbaum KS. A comparison of the Dorfman-Berbaum-Metz and Obuchowski-Rockette methods for receiver operating characteristic (ROC) data. Statistics in Medicine. 2005;24:1579-607.
  16. Hillis SL. A comparison of denominator degrees of freedom for multiple observer ROC analysis. Statistics in Medicine. 2007;26:596-619.
  17. Hillis SL, Berbaum KS, Metz CE. Recent developments in the Dorfman-Berbaum-Metz procedure for multireader ROC study analysis. Academic Radiology. 2008;15:647-61.
  18. McHugh ML. Interrater reliability: the kappa statistic. Biochemia Medica. 2012;22(3):276-82.

Table 1

Area Under Curves for different displays (display 1: Barco 2MP, display 2: Barco 3MP, display 3: Dell UltraSharp, display 4: Wide 5MP). * significant difference (α = 5%) with the total AUC of display 1 and 3.

Table 2

Sensitivity amongst the different displays (display 1: Barco 2MP, display 2: Barco 3MP, display 3: Dell UltraSharp, display 4: Wide 5MP).

Table 3

Specificity for the different displays (display 1: Barco 2MP, display 2: Barco 3MP, display 3: Dell Ultrasharp, display 4: Wide 5MP).

Table 4

Table showing interobserver agreement (as percentage agreement) for the different displays (display 1: Barco 2MP, display 2: Barco 3MP, display 3: Dell Ultrasharp, display 4: Wide 5MP).

Table 5

Table showing interobserver agreement (as percentage agreement) for the different displays on the binary scale (display 1: Barco 2MP, display 2: Barco 3MP, display 3: Dell Ultrasharp, display 4: Wide 5MP).

Table 6

A table demonstrating the distribution of answers between different monitor displays.
