American Journal of BioScience

| Peer-Reviewed |

Statistical Evaluation of Indicators of Diagnostic Test Performance

Received: 21 October 2013    Accepted:     Published: 20 November 2013

Abstract

Diagnostic accuracy refers to the ability of a test to discriminate between the target condition and health. This discriminative potential can be quantified by measures of diagnostic accuracy such as sensitivity and specificity, predictive values, likelihood ratios, error rates, the area under the ROC curve, Youden's index and the diagnostic odds ratio. Different measures of diagnostic accuracy relate to different aspects of the diagnostic procedure: some assess the discriminative property of the test, while others assess its predictive ability. Measures of diagnostic accuracy are not fixed indicators of test performance: some are very sensitive to disease prevalence, while others are sensitive to the spectrum and definition of the disease. Furthermore, measures of diagnostic accuracy are extremely sensitive to the design of the study. Studies that do not meet strict methodological standards usually over- or underestimate the indicators of test performance and limit the applicability of the study's results. The STARD initiative was an important step toward improving the quality of reporting of diagnostic accuracy studies. Scientific journals should include the STARD statement in their instructions to authors, and authors should be encouraged to use the checklist whenever reporting studies of diagnostic accuracy. Such efforts could make a substantial difference in the quality of reporting of diagnostic accuracy studies and help provide the best possible evidence for patient care. This brief review outlines some basic definitions, formulas and characteristics of the measures of diagnostic accuracy.
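
For readers who want to see how the measures named above are obtained in practice, the short Python sketch below computes them from a hypothetical 2x2 table of test results against a reference standard, together with an empirical area under the ROC curve. The counts, scores and function names are illustrative assumptions rather than material from the article; the formulas are the standard definitions (for example, sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), diagnostic odds ratio = (TP x TN)/(FP x FN)).

    # Illustrative sketch only (not from the article): measures of diagnostic
    # accuracy for a hypothetical 2x2 table, plus an empirical AUC in the
    # rank-based sense (probability a diseased subject outscores a healthy one).

    def diagnostic_measures(tp, fp, fn, tn):
        """Standard measures of diagnostic accuracy for a 2x2 table."""
        sensitivity = tp / (tp + fn)                     # true positive rate
        specificity = tn / (tn + fp)                     # true negative rate
        return {
            "sensitivity": sensitivity,
            "specificity": specificity,
            "ppv": tp / (tp + fp),                       # positive predictive value
            "npv": tn / (tn + fn),                       # negative predictive value
            "lr_plus": sensitivity / (1 - specificity),  # positive likelihood ratio
            "lr_minus": (1 - sensitivity) / specificity, # negative likelihood ratio
            "youden_j": sensitivity + specificity - 1,   # Youden's index
            "dor": (tp * tn) / (fp * fn),                # diagnostic odds ratio
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
            "error_rate": (fp + fn) / (tp + fp + fn + tn),
        }

    def empirical_auc(diseased_scores, healthy_scores):
        """AUC as the probability that a randomly chosen diseased subject scores
        higher than a randomly chosen healthy subject (ties count as 0.5)."""
        pairs = [(d, h) for d in diseased_scores for h in healthy_scores]
        wins = sum(1.0 if d > h else 0.5 if d == h else 0.0 for d, h in pairs)
        return wins / len(pairs)

    if __name__ == "__main__":
        # Hypothetical counts: 90 TP, 20 FP, 10 FN, 80 TN.
        for name, value in diagnostic_measures(tp=90, fp=20, fn=10, tn=80).items():
            print(f"{name}: {value:.3f}")
        # Hypothetical continuous test scores for the AUC illustration.
        print("empirical AUC:", empirical_auc([0.9, 0.8, 0.7, 0.6], [0.65, 0.5, 0.4, 0.3]))

With these hypothetical counts the sketch reports a sensitivity of 0.90, a specificity of 0.80, a Youden's index of 0.70 and a diagnostic odds ratio of 36, showing how all of the measures listed above follow from the same four cell counts.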

DOI 10.11648/j.ajbio.20130104.13
Published in American Journal of BioScience (Volume 1, Issue 4, November 2013)
Page(s) 63-73
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2013. Published by Science Publishing Group

Keywords

Diagnostic Accuracy, Sensitivity, Specificity, Likelihood Ratio, DOR, AUC, Predictive Values

Author Information
  • Okeh UM, Department of Industrial Mathematics and Applied Statistics, Ebonyi State University Abakaliki, Nigeria

  • Ogbonna LN, Department of Industrial Mathematics and Applied Statistics, Ebonyi State University Abakaliki, Nigeria

Cite This Article
  • APA Style

    Okeh, U. M., & Ogbonna, L. N. (2013). Statistical evaluation of indicators of diagnostic test performance. American Journal of BioScience, 1(4), 63-73. https://doi.org/10.11648/j.ajbio.20130104.13

    ACS Style

    Okeh, U. M.; Ogbonna, L. N. Statistical Evaluation of Indicators of Diagnostic Test Performance. Am. J. BioScience 2013, 1(4), 63-73. doi: 10.11648/j.ajbio.20130104.13

    AMA Style

    Okeh UM, Ogbonna LN. Statistical Evaluation of Indicators of Diagnostic Test Performance. Am J BioScience. 2013;1(4):63-73. doi: 10.11648/j.ajbio.20130104.13

  • @article{10.11648/j.ajbio.20130104.13,
      author = {Okeh UM and Ogbonna LN},
      title = {Statistical Evaluation of Indicators of Diagnostic Test Performance},
      journal = {American Journal of BioScience},
      volume = {1},
      number = {4},
      pages = {63-73},
      doi = {10.11648/j.ajbio.20130104.13},
      url = {https://doi.org/10.11648/j.ajbio.20130104.13},
      eprint = {https://download.sciencepg.com/pdf/10.11648.j.ajbio.20130104.13},
     year = {2013}
    }
    

  • TY  - JOUR
    T1  - Statistical Evaluation of Indicators of Diagnostic Test Performance
    AU  - Okeh UM
    AU  - Ogbonna LN
    Y1  - 2013/11/20
    PY  - 2013
    N1  - https://doi.org/10.11648/j.ajbio.20130104.13
    DO  - 10.11648/j.ajbio.20130104.13
    T2  - American Journal of BioScience
    JF  - American Journal of BioScience
    JO  - American Journal of BioScience
    SP  - 63
    EP  - 73
    PB  - Science Publishing Group
    SN  - 2330-0167
    UR  - https://doi.org/10.11648/j.ajbio.20130104.13
    VL  - 1
    IS  - 4
    ER  - 
