
On the Relationship between the Reliability and Accuracy of Bio-Behavioral Diagnoses: Simple Math to the Rescue
Pages 172-179
Dom Cicchetti
DOI: http://dx.doi.org/10.6000/1929-6029.2015.04.02.2
Published: 21 May 2015


Abstract: An equivalence between the J statistic (Jack Youden, 1950) and the Kappa statistic (K) of Cohen (1960) was discovered by Helena Kraemer (1982). J is defined as: [Sensitivity (Se) + Specificity (Sp)] – 1. The author (2011) added the remaining two validity components to the J Index, namely Predicted Positive Accuracy (PPA) and Predicted Negative Accuracy (PNA). The resulting D Index is defined as D = [(Se + Sp – 1) + (PPA + PNA – 1)] / 2. The purpose of this research is to compare J and D as estimates of K, using both actual and simulated data sets. The actual data consisted of ratings of clinical depression and self-reports of gonorrhea. The simulated data sets represented binary diagnoses in which the percentages of Negative and Positive cases followed Identical, Slightly varying, Mildly varying, Moderately varying, or Markedly varying diagnostic patterns. For both the diagnosis of clinical depression and the self-reports of gonorrhea, D produced closer approximations to Kappa. For the simulated data, under both identical and slightly different patterns of assigning Negative and Positive binary diagnoses, K, D and J produced identical results. While J produced acceptably close values to K under the condition of Mild discrepancies in the proportions of Negative and Positive cases, D continued to more closely approximate K. While D more closely estimated K under Markedly varying diagnostic patterns, D produced values under this extreme condition that were closer to K than would have been predicted. The significance of these findings for future research is discussed.
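
The following is a minimal illustrative sketch, not the article's own analysis or data, showing the arithmetic behind the indices named in the abstract. It assumes a generic 2x2 diagnostic table with hypothetical cell counts (a = true positives, b = false positives, c = false negatives, d = true negatives) and computes Se, Sp, PPA, PNA, Youden's J, the D Index as defined above, and Cohen's Kappa for the same table.

```python
# Illustrative sketch only: J, D, and Cohen's Kappa from a hypothetical 2x2 table.
# Cell counts are assumptions for demonstration, not data from the article.

def diagnostic_indices(a, b, c, d):
    """a = true positives, b = false positives, c = false negatives, d = true negatives."""
    n = a + b + c + d
    se = a / (a + c)    # Sensitivity
    sp = d / (b + d)    # Specificity
    ppa = a / (a + b)   # Predicted Positive Accuracy
    pna = d / (c + d)   # Predicted Negative Accuracy

    j = se + sp - 1                                       # Youden's J
    d_index = ((se + sp - 1) + (ppa + pna - 1)) / 2       # D Index as defined in the abstract

    # Cohen's Kappa: observed agreement corrected for chance agreement
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return j, d_index, kappa

# Hypothetical counts with identical marginal proportions of Positive and Negative cases
print(diagnostic_indices(40, 10, 10, 40))  # J = 0.6, D = 0.6, Kappa = 0.6
```

With these hypothetical counts the marginal proportions of Positive and Negative diagnoses are identical, so J, D, and K coincide, consistent with the abstract's statement that the three indices produce identical results under identical diagnostic patterns; as the marginals diverge, the values separate.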

Keywords: Binary Diagnoses, Diagnostic Reliability, Diagnostic Accuracy.
