[[File:Log DiagnosticOddsRatio.png|thumb|log(Diagnostic Odds Ratio) for varying sensitivity and specificity]]
 
The '''diagnostic odds ratio''' is a measure of the effectiveness of a [[diagnostic test]].<ref name="Glas 2003">Afina S. Glas, Jeroen G. Lijmer, Martin H. Prins, Gouke J. Bonsel, Patrick M.M. Bossuyt. The diagnostic odds ratio: a single indicator of test performance. Journal of Clinical Epidemiology, 56 (2003), 1129-1135.</ref> It is defined as the ratio of the odds of the test being positive if the subject has the disease to the odds of the test being positive if the subject does not have the disease.
 
== Definition ==
The diagnostic odds ratio is defined mathematically as:
 
:<math>\text{Diagnostic odds ratio, DOR} = \frac{TP/FN}{FP/TN}</math>
 
where <math>TP</math>, <math>FN</math>, <math>FP</math> and <math>TN</math> are the number of true positives, false negatives, false positives and true negatives respectively.<ref name="Glas 2003" />
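The definition translates directly into code. A minimal Python sketch (the counts 26, 3, 12 and 48 are purely illustrative):

```python
def diagnostic_odds_ratio(tp, fn, fp, tn):
    """DOR = (TP/FN) / (FP/TN): the odds of a positive test among the
    diseased divided by the odds of a positive test among the healthy."""
    return (tp / fn) / (fp / tn)

print(diagnostic_odds_ratio(26, 3, 12, 48))  # about 34.67
```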
 
=== Confidence interval ===
As an [[odds ratio]], the [[Natural logarithm|logarithm]] of the diagnostic odds ratio is approximately [[Normal distribution|normally distributed]] (the approximation improves as the cell counts grow). The [[Standard error (statistics)|standard error]] of the log diagnostic odds ratio is approximately:
 
:<math>\mathrm{SE}\left(\log{\text{DOR}}\right) = \sqrt{\frac{1}{TP}+\frac{1}{FN}+\frac{1}{FP}+\frac{1}{TN}}</math>
 
From this an approximate 95% [[confidence interval]] can be calculated for the log diagnostic odds ratio:
 
:<math>\log{\text{DOR}} \pm 1.96 \times \mathrm{SE}\left(\log{\text{DOR}}\right)</math>
 
Exponentiation of the approximate confidence interval for the log diagnostic odds ratio gives the approximate confidence interval for the diagnostic odds ratio.<ref name="Glas 2003" />
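This procedure is straightforward to implement. A Python sketch, using hypothetical cell counts for illustration:

```python
import math

def dor_confidence_interval(tp, fn, fp, tn, z=1.96):
    """Approximate 95% CI: exponentiate log(DOR) +/- z * SE(log DOR)."""
    log_dor = math.log((tp * tn) / (fp * fn))
    se = math.sqrt(1 / tp + 1 / fn + 1 / fp + 1 / tn)
    return math.exp(log_dor - z * se), math.exp(log_dor + z * se)

# Hypothetical counts, purely for illustration
low, high = dor_confidence_interval(tp=20, fn=5, fp=10, tn=40)
```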
 
== Relation to other measures of diagnostic test accuracy ==
The diagnostic odds ratio may be expressed in terms of the [[sensitivity and specificity]] of the test:<ref name="Glas 2003" />
 
:<math>\text{DOR} = \frac{\text{sensitivity}\times\text{specificity}}{\left(1-\text{sensitivity}\right)\times\left(1-\text{specificity}\right)}</math>
 
It may also be expressed in terms of the [[Positive predictive value]] (PPV) and [[Negative predictive value]] (NPV):<ref name="Glas 2003" />
 
:<math>\text{DOR} = \frac{\text{PPV}\times\text{NPV}}{\left(1-\text{PPV}\right)\times\left(1-\text{NPV}\right)}</math>
 
It is also related to the [[Likelihood ratios in diagnostic testing|likelihood ratios]], <math>LR+</math> and <math>LR-</math>:<ref name="Glas 2003" />
 
:<math>\text{DOR} = \frac{LR+}{LR-}</math>
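All three identities can be checked numerically. The following Python sketch uses illustrative counts and verifies that each expression agrees with the definition:

```python
tp, fn, fp, tn = 26, 3, 12, 48  # illustrative cell counts
sens = tp / (tp + fn)           # sensitivity (true positive rate)
spec = tn / (tn + fp)           # specificity (true negative rate)
ppv = tp / (tp + fp)            # positive predictive value
npv = tn / (tn + fn)            # negative predictive value
lr_pos = sens / (1 - spec)      # LR+
lr_neg = (1 - sens) / spec      # LR-

dor = (tp / fn) / (fp / tn)
assert abs(dor - sens * spec / ((1 - sens) * (1 - spec))) < 1e-9
assert abs(dor - ppv * npv / ((1 - ppv) * (1 - npv))) < 1e-9
assert abs(dor - lr_pos / lr_neg) < 1e-9
```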
 
== Rationale and interpretation ==
The rationale for the diagnostic odds ratio is that it is a single indicator of test performance (like [[Accuracy#Accuracy and precision in binary classification|accuracy]] and [[Youden's J statistic]]) that, unlike accuracy, is independent of [[prevalence]], and it is presented as an [[odds ratio]], a quantity familiar to medical practitioners.
 
The diagnostic odds ratio ranges from zero to infinity, although diagnostic odds ratios less than one indicate that the test can be improved by simply inverting the outcome of the test. A diagnostic odds ratio of exactly one means that the test is equally likely to predict a positive outcome whatever the true condition. Higher diagnostic odds ratios are indicative of better test performance.<ref name="Glas 2003" />
 
== Uses ==
'''Note:''' The following uses have been superseded by more recent innovations in the meta-analysis of diagnostic test accuracy studies, namely the bivariate method<ref>Reitsma, Johannes B., et al. "Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews." Journal of clinical epidemiology 58.10 (2005): 982-990.</ref> and the hierarchical summary ROC (HSROC) method.<ref>Rutter, Carolyn M., and Constantine A. Gatsonis. "A hierarchical regression approach to meta‐analysis of diagnostic test accuracy evaluations." Statistics in medicine 20.19 (2001): 2865-2884.</ref>
 
The log diagnostic odds ratio is sometimes used in meta-analyses of diagnostic test accuracy studies due to its simplicity (being approximately normally distributed).<ref name="Gatsonis 2006">Constantine Gatsonis, Prashni Paliwal. Meta-analysis of diagnostic and screening test accuracy evaluations: methodologic primer. American Journal of Roentgenology 187(2), 271-281. 2006.</ref>
 
Traditional [[Meta-analysis|meta-analytic]] techniques such as [[inverse-variance weighting]] can be used to combine log diagnostic odds ratios computed from a number of data sources to produce an overall diagnostic odds ratio for the test in question.
 
The log diagnostic odds ratio can also be used to study the trade-off between sensitivity and specificity.<ref name="Moses 1993" /><ref name="Dinnes 2007">J. Dinnes, J. Deeks, H. Kunst, A. Gibson, E. Cummins, N. Waugh, F. Drobniewski, A. Lalvani. A systematic review of rapid diagnostic tests for the detection of tuberculosis infection. Health Technology Assessment, 11(3), 2007.</ref> By expressing the log diagnostic odds ratio in terms of the [[logit]] of the true positive rate (sensitivity) and false positive rate (1 &minus; specificity), and by additionally constructing a measure, <math>S</math>:
 
:<math>D = \log{\text{DOR}} = \log{\left[\frac{TPR}{(1-TPR)}\times\frac{(1-FPR)}{FPR}\right]} = \operatorname{logit}(TPR) - \operatorname{logit}(FPR)</math>
:<math>S = \operatorname{logit}(TPR) + \operatorname{logit}(FPR)</math>
 
It is then possible to fit a straight line, <math>D = a + bS</math>. If <var>b</var> ≠ 0 then there is a trend in diagnostic performance with threshold beyond the simple trade-off of sensitivity and specificity. The value <var>a</var> can be used to plot a summary [[Receiver operating characteristic|ROC]] (SROC) curve.<ref name="Moses 1993" /><ref name="Dinnes 2007" />
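A Python sketch of this procedure, assuming a hypothetical set of (TPR, FPR) pairs observed at several test thresholds; the least-squares fit of <math>D = a + bS</math> is written out by hand to keep the example self-contained:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Hypothetical (TPR, FPR) pairs observed at several thresholds
points = [(0.60, 0.05), (0.75, 0.10), (0.85, 0.20), (0.95, 0.40)]
D = [logit(tpr) - logit(fpr) for tpr, fpr in points]  # log DOR at each threshold
S = [logit(tpr) + logit(fpr) for tpr, fpr in points]

# Ordinary least-squares fit of D = a + b*S
n = len(points)
s_bar, d_bar = sum(S) / n, sum(D) / n
b = (sum((s - s_bar) * (d - d_bar) for s, d in zip(S, D))
     / sum((s - s_bar) ** 2 for s in S))
a = d_bar - b * s_bar  # intercept used to draw the SROC curve
```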
 
== Example ==
Consider a test with the following 2&times;2 [[confusion matrix]]:
{| class="wikitable"
|-
! rowspan="2" colspan="2" style="background:#fff; border-color:#fff #aaa #aaa #fff" |
! colspan="2" | Condition (as determined by “[[Gold standard (test)|Gold standard]]”)
|-
! Positive
! Negative
|-
! rowspan="2" style="text-align:left" | Test<br />outcome
! Positive
| style="text-align: center" | 26
| style="text-align: center" | 12
|-
! Negative
| style="text-align: center" | 3
| style="text-align: center" | 48
|}
 
We calculate the diagnostic odds ratio as:
 
:<math>\text{DOR} = \frac{26/3}{12/48} = 34.666\ldots</math>
 
This diagnostic odds ratio is greater than one, so we know that the test is discriminating correctly. The approximate 95% confidence interval for the diagnostic odds ratio of this test is [8.967, 134.0].
 
Now consider another test with ten extra false negatives; its diagnostic odds ratio is:
 
:<math>\text{DOR}^\prime = \frac{26/13}{12/48} = \frac{2}{1/4} = 8</math>
 
This diagnostic odds ratio is also greater than one, but is lower, indicating worse performance, as we would expect after the number of false negatives increased. The approximate 95% confidence interval for the diagnostic odds ratio of this test is [3.193, 20.04]. The confidence intervals for the two tests overlap substantially, so we would not conclude that the first test is statistically significantly better than the second.
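The calculations in this example can be reproduced in a few lines of Python:

```python
import math

def dor_ci(tp, fn, fp, tn, z=1.96):
    """Approximate 95% CI for the DOR via the log-normal approximation."""
    log_dor = math.log((tp * tn) / (fp * fn))
    se = math.sqrt(1 / tp + 1 / fn + 1 / fp + 1 / tn)
    return math.exp(log_dor - z * se), math.exp(log_dor + z * se)

ci1 = dor_ci(26, 3, 12, 48)   # first test: DOR about 34.67
ci2 = dor_ci(26, 13, 12, 48)  # ten extra false negatives: DOR = 8
overlap = ci1[0] < ci2[1] and ci2[0] < ci1[1]  # intervals overlap
```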
 
== Criticisms ==
The diagnostic odds ratio is undefined when the number of false negatives or false positives is zero. The typical response to such a scenario is to add 0.5 to all cells in the contingency table,<ref name="Glas 2003" /><ref name="Cox 1970">D.R. Cox. The analysis of binary data. Methuen, London, 1970.</ref> although this should not be seen as a correction, as it introduces bias into the results.<ref name="Moses 1993">Lincoln E. Moses, David Shapiro. Combining independent studies of a diagnostic test into a summary ROC curve: data-analytic approaches and some additional considerations. Statistics in Medicine, 12(12) 1293&ndash;1316, 1993.</ref> It has been suggested that the adjustment be applied to all contingency tables, even those with no zero cells.<ref name="Moses 1993" />
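A Python sketch of the 0.5 adjustment; the zero-cell counts are hypothetical:

```python
def dor_with_adjustment(tp, fn, fp, tn):
    # Add 0.5 to every cell so the DOR stays defined when FN or FP is
    # zero; note that this adjustment biases the estimate slightly.
    tp, fn, fp, tn = (x + 0.5 for x in (tp, fn, fp, tn))
    return (tp * tn) / (fp * fn)

# FN = 0 would make the unadjusted DOR infinite (hypothetical counts)
print(dor_with_adjustment(20, 0, 5, 40))
```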
 
== See also ==
* [[Sensitivity and specificity]]
* [[Binary classification]]
* [[Positive predictive value]] and [[negative predictive value]]
* [[Odds ratio]]
 
== References ==
{{reflist|2}}
 
[[Category:Epidemiology]]
[[Category:Biostatistics]]
[[Category:Medical statistics]]
[[Category:Statistical ratios]]
[[Category:Summary statistics for contingency tables]]

