[[File:Roccurves.png|thumb|right|ROC curve of three [[epitope]] predictors.]]
In [[signal detection theory]], a '''receiver operating characteristic''' ('''ROC'''), or simply '''ROC curve''', is a [[graph of a function|graphical]] plot that illustrates the performance of a [[binary classifier]] system as its discrimination threshold is varied. It is created by plotting the fraction of [[true positive]]s out of the total actual positives (TPR = true positive rate) vs. the fraction of [[false positive]]s out of the total actual negatives (FPR = false positive rate), at various threshold settings. TPR is also known as [[Sensitivity (tests)|sensitivity]] or [[Precision and recall#Definition (classification context)|recall]] in [[machine learning]]. The FPR is also known as the [[Information retrieval#Fall-out|fall-out]] and can be calculated as one minus the better-known [[Specificity (tests)|specificity]]. The ROC curve is thus the [[Sensitivity (tests)|sensitivity]] plotted as a function of the [[Information retrieval#Fall-out|fall-out]]. In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the [[cumulative distribution function]] (area under the probability density from <math>-\infty</math> to the discrimination threshold) of the detection probability on the ''y''-axis versus the cumulative distribution function of the false-alarm probability on the ''x''-axis.
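The construction of the curve can be made concrete with a short sketch. The following Python function (illustrative only; the function name and data layout are ours, not from any particular library) sweeps a decision threshold over a set of classifier scores and collects the resulting (FPR, TPR) points:

<syntaxhighlight lang="python">
import numpy as np

def roc_points(scores, labels):
    """Return arrays of (FPR, TPR) pairs, one per threshold setting.

    scores -- classifier outputs, higher means "more likely positive"
    labels -- 1 for actual positives, 0 for actual negatives
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    P = labels.sum()            # total actual positives (assumed > 0)
    N = len(labels) - P         # total actual negatives (assumed > 0)
    # Sweep the discrimination threshold from above the largest score
    # (nothing predicted positive) down through every observed score.
    thresholds = np.r_[np.inf, np.sort(scores)[::-1]]
    fpr, tpr = [], []
    for t in thresholds:
        predicted_pos = scores >= t
        tp = np.sum(predicted_pos & (labels == 1))
        fp = np.sum(predicted_pos & (labels == 0))
        tpr.append(tp / P)      # sensitivity / recall
        fpr.append(fp / N)      # fall-out = 1 - specificity
    return np.array(fpr), np.array(tpr)
</syntaxhighlight>

Plotting the returned TPR values against the FPR values then yields the ROC curve for those scores.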
 
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic [[decision making]].
 
The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to [[psychology]] to account for perceptual detection of stimuli. ROC analysis since then has been used in [[medicine]], [[radiology]], [[biometrics]], and other areas for many decades and is increasingly used in [[machine learning]] and [[data mining]] research.
 
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.<ref name="Swets1996">Swets, John A.; [http://www.questia.com/PM.qst?a=o&d=91082370 ''Signal detection theory and ROC analysis in psychology and diagnostics : collected papers''], Lawrence Erlbaum Associates, Mahwah, NJ, 1996</ref>
 
==Basic concept==
 
{| class="wikitable" align="right" width=35% style="font-size:98%; margin-left:0.5em; padding:0.25em; background:#f1f5fc;"
|+ Terminology and derivations<br />from a confusion matrix
|- valign=top
|
; true positive (TP)
:eqv. with hit
; true negative (TN)
:eqv. with correct rejection
; false positive (FP)
:eqv. with [[false alarm]], [[Type I error]]
; false negative (FN)
:eqv. with miss, [[Type II error]]
--------------------------------------------------------
; [[sensitivity (test)|sensitivity]] or true positive rate (TPR)
:eqv. with [[hit rate]], [[Information retrieval#Recall|recall]]
:<math>\mathit{TPR} = \mathit{TP} / P = \mathit{TP} / (\mathit{TP}+\mathit{FN})</math>
; [[Specificity (tests)|specificity]] (SPC) or true negative rate
:<math>\mathit{SPC} = \mathit{TN} / N = \mathit{TN} / (\mathit{FP} + \mathit{TN}) </math>
; [[Information retrieval#Precision|precision]] or [[positive predictive value]] (PPV)
:<math>\mathit{PPV} = \mathit{TP} / (\mathit{TP} + \mathit{FP})</math>
; [[negative predictive value]] (NPV)
:<math>\mathit{NPV} = \mathit{TN} / (\mathit{TN} + \mathit{FN})</math>
; [[Information retrieval#Fall-out|fall-out]] or false positive rate (FPR)
:<math>\mathit{FPR} = \mathit{FP} / N = \mathit{FP} / (\mathit{FP} + \mathit{TN})</math>
; [[false discovery rate]] (FDR)
:<math>\mathit{FDR} = \mathit{FP} / (\mathit{FP} + \mathit{TP}) = 1 - \mathit{PPV} </math>
; miss rate or [[Type_I_and_type_II_errors#False_positive_and_false_negative_rates|false negative rate]] (FNR)
:<math>\mathit{FNR} = \mathit{FN} / (\mathit{FN} + \mathit{TP}) </math>
------------------------------------------------
; [[accuracy]] (ACC)
:<math>\mathit{ACC} = (\mathit{TP} + \mathit{TN}) / (P + N)</math>
;[[F1 score]]
: is the [[Harmonic mean#Harmonic mean of two numbers|harmonic mean]] of [[Information retrieval#Precision|precision]] and [[sensitivity (test)|sensitivity]]
:<math>\mathit{F1} = 2 \mathit{TP} / (2 \mathit{TP} + \mathit{FP} + \mathit{FN})</math>
; [[Matthews correlation coefficient]] (MCC)
:<math> \mathit{MCC} = \frac{ \mathit{TP} \times \mathit{TN} - \mathit{FP} \times \mathit{FN} } {\sqrt{ (\mathit{TP}+\mathit{FP}) ( \mathit{TP} + \mathit{FN} ) ( \mathit{TN} + \mathit{FP} ) ( \mathit{TN} + \mathit{FN} ) } } </math>
 
;Informedness = Sensitivity + Specificity - 1
;Markedness = Precision + NPV - 1
<span style="font-size:90%;">''Source: Fawcett (2006).''</span>
|}
 
{{See also|Type I and type II errors|Sensitivity and specificity}}
A classification model ([[Classifier (mathematics)|classifier]] or [[medical diagnosis|diagnosis]]) is a [[Mapping (mathematics)|mapping]] of instances onto predicted classes/groups. The classifier or diagnosis result can be a [[Real number|real value]] (continuous output), in which case the boundary between classes must be determined by a threshold value (for instance, to determine whether a person has [[hypertension]] based on a [[blood pressure]] measure), or it can be a [[Countable set|discrete]] class label indicating one of the classes.
 
Let us consider a two-class prediction problem ([[binary classification]]), in which the outcomes are labeled either as positive (''p'') or negative (''n''). There are four possible outcomes from a binary classifier. If the outcome from a prediction is ''p'' and the actual value is also ''p'', then it is called a ''true positive'' (TP); however if the actual value is ''n'' then it is said to be a ''false positive'' (FP). Conversely, a ''true negative'' (TN) has occurred when both the prediction outcome and the actual value are ''n'', and ''false negative'' (FN) is when the prediction outcome is ''n'' while the actual value is ''p''.
 
As a real-world example, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.
 
Consider an experiment with '''P''' positive instances and '''N''' negative instances of some condition. The four outcomes can be formulated in a 2×2 ''[[contingency table]]'' or ''[[confusion matrix]]'', as follows:
 
{{DiagnosticTesting_Diagram}}
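As a concrete illustration of the terminology table above, the sketch below evaluates each derived ratio directly from the four cells of the confusion matrix (a minimal Python sketch; the function name is ours, and non-zero denominators are assumed):

<syntaxhighlight lang="python">
def confusion_metrics(tp, fp, fn, tn):
    """Derived ratios from the infobox, given the four cell counts."""
    p, n = tp + fn, fp + tn       # actual positives / actual negatives
    return {
        "TPR": tp / p,                        # sensitivity, recall
        "SPC": tn / n,                        # specificity
        "PPV": tp / (tp + fp),                # precision
        "NPV": tn / (tn + fn),
        "FPR": fp / n,                        # fall-out = 1 - SPC
        "FDR": fp / (fp + tp),                # = 1 - PPV
        "FNR": fn / (fn + tp),                # miss rate
        "ACC": (tp + tn) / (p + n),
        "F1":  2 * tp / (2 * tp + fp + fn),   # harmonic mean of PPV, TPR
        "MCC": (tp * tn - fp * fn)
               / ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5,
    }
</syntaxhighlight>

For instance, <code>confusion_metrics(tp=63, fp=28, fn=37, tn=72)</code> reproduces the values listed for example '''A''' in the next section.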
 
==ROC space==
[[Image:ROC space-2.png|thumb|right|250px|The ROC space and plots of the four prediction examples.]]
Several evaluation "metrics" can be derived from the contingency table (see infobox). To draw an ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR defines how many correct positive results occur among all positive samples available during the test. The FPR, in turn, defines how many incorrect positive results occur among all negative samples available during the test.
 
A ROC space is defined by FPR and TPR as the ''x'' and ''y'' axes respectively, depicting relative trade-offs between true positives (benefits) and false positives (costs). Since TPR is equivalent to [[Sensitivity (tests)|sensitivity]] and FPR is equal to 1 − [[Specificity (tests)|specificity]], the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each prediction result or instance of a confusion matrix represents one point in the ROC space.
 
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC space, representing 100% [[Sensitivity (tests)|sensitivity]] (no false negatives) and 100% [[Specificity (tests)|specificity]] (no false positives). The (0,1) point is also called a ''perfect classification''. A completely [[Randomness|random guess]] would give a point along a diagonal line (the so-called ''line of no-discrimination'') from the left bottom to the top right corners (regardless of the positive and negative [[base rate]]s). An intuitive example of random guessing is a decision by [[Coin flipping|flipping coins (heads or tails)]]. As the size of the sample increases, a random classifier's ROC point migrates towards (0.5,0.5).
 
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than random); points below the line represent poor results (worse than random). Note that the output of a consistently poor predictor could simply be inverted to obtain a good predictor.
 
Let us look at four prediction results from 100 positive and 100 negative instances:
{| style="font-size:95%; margin-left:1em;"
! A !! B !! C !! C′
|-
|
{| style="text-align:center;"
| style=" border:thin solid; padding:1em;" | TP=63 || style=" border:thin solid; padding:1em;" | FP=28 || 91
|-
| style=" border:thin solid; padding:1em;" | FN=37 || style=" border:thin solid; padding:1em;" | TN=72 || 109
|-
| 100 || 100 || 200
|}
| style="padding-left:1em;" |
{| style="text-align:center;"
| style=" border:thin solid; padding:1em;" | TP=77 || style=" border:thin solid; padding:1em;" | FP=77 || 154
|-
| style=" border:thin solid; padding:1em;" | FN=23 || style=" border:thin solid; padding:1em;" | TN=23 || 46
|-
| 100 || 100 || 200
|}
| style="padding-left:1em;" |
{| style="text-align:center;"
| style=" border:thin solid; padding:1em;" | TP=24 || style=" border:thin solid; padding:1em;" | FP=88 || 112
|-
| style=" border:thin solid; padding:1em;" | FN=76 || style=" border:thin solid; padding:1em;" | TN=12 || 88
|-
| 100 || 100 || 200
|}
| style="padding-left:1em;" |
{| style="text-align:center;"
| style=" border:thin solid; padding:1em;" | TP=76 || style=" border:thin solid; padding:1em;" | FP=12 || 88
|-
| style=" border:thin solid; padding:1em;" | FN=24 || style=" border:thin solid; padding:1em;" | TN=88 || 112
|-
| 100 || 100 || 200
|}
|-
| style="padding-left:1em;" | TPR = 0.63 || style="padding-left:2em;" | TPR = 0.77 || style="padding-left:2em;" | TPR = 0.24 || style="padding-left:2em;" | TPR = 0.76
|-
| style="padding-left:1em;" | FPR = 0.28 || style="padding-left:2em;" | FPR = 0.77 || style="padding-left:2em;" | FPR = 0.88 || style="padding-left:2em;" | FPR = 0.12
|-
| style="padding-left:1em;" | PPV = 0.69 || style="padding-left:2em;" | PPV = 0.50 || style="padding-left:2em;" | PPV = 0.21 || style="padding-left:2em;" | PPV = 0.86
|-
| style="padding-left:1em;" | F1 = 0.66 || style="padding-left:2em;" | F1 = 0.61 || style="padding-left:2em;" | F1 = 0.22 || style="padding-left:2em;" | F1 = 0.81
|-
| style="padding-left:1em;" | ACC = 0.68 || style="padding-left:2em;" | ACC = 0.50 || style="padding-left:2em;" | ACC = 0.18 || style="padding-left:2em;" | ACC = 0.82
|}
 
Plots of the four results above in the ROC space are given in the figure. The result of method '''A''' clearly shows the best predictive power among '''A''', '''B''', and '''C'''. The result of '''B''' lies on the random-guess line (the diagonal), and it can be seen in the table that the accuracy of '''B''' is 50%. However, when '''C''' is mirrored across the center point (0.5, 0.5), the resulting method '''C′''' is even better than '''A'''. This mirrored method simply reverses the predictions of whatever method or test produced the '''C''' contingency table: although the original '''C''' method has negative predictive power, reversing its decisions yields a new method '''C′''' with positive predictive power. Wherever '''C''' predicts ''p'' or ''n'', '''C′''' predicts ''n'' or ''p'', respectively. In this manner, the '''C′''' test performs best of the four.

The closer a result from a contingency table is to the upper left corner, the better it predicts; but the distance from the random-guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random-guess line.
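The numbers above can be verified with a short sketch. The following Python snippet (illustrative only) rebuilds the four contingency tables, derives '''C′''' by inverting the predictions of '''C''', and recomputes TPR, FPR, and accuracy:

<syntaxhighlight lang="python">
# The four example classifiers from the tables above (P = N = 100).
tables = {
    "A": dict(tp=63, fp=28, fn=37, tn=72),
    "B": dict(tp=77, fp=77, fn=23, tn=23),
    "C": dict(tp=24, fp=88, fn=76, tn=12),
}
# C' reverses every decision of C, which swaps TP with FN and FP with TN.
c = tables["C"]
tables["C'"] = dict(tp=c["fn"], fp=c["tn"], fn=c["tp"], tn=c["fp"])

for name, t in tables.items():
    tpr = t["tp"] / (t["tp"] + t["fn"])   # true positive rate
    fpr = t["fp"] / (t["fp"] + t["tn"])   # false positive rate
    acc = (t["tp"] + t["tn"]) / 200       # accuracy
    print(f"{name}: TPR={tpr:.2f}  FPR={fpr:.2f}  ACC={acc:.2f}")
</syntaxhighlight>

The swap of TP with FN and FP with TN is exactly the mirroring across (0.5, 0.5) described above.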
 
==Curves in ROC space==
[[File:Receiver Operating Characteristic.png|thumb|right|300px]]
Classifications are often based on a [[Continuous probability distribution|continuous random variable]].
Write the probability density of the classifier score for instances that belong to the class as <math>P_1(x)</math>, and the density for instances that do not belong to the class as <math>P_0(x)</math>.
For a decision threshold <math>T</math>, the false positive rate is given by <math>\mathit{FPR}(T)=\int_{T}^\infty P_0(x) \, dx</math> and the true positive rate by <math>\mathit{TPR}(T)= \int_{T}^\infty P_1(x) \, dx</math>.
The ROC curve plots <math>\mathit{TPR}(T)</math> versus <math>\mathit{FPR}(T)</math> parametrically, with <math>T</math> as the varying parameter.
 
For example, imagine that the blood protein levels in diseased people and healthy people are [[normal distribution|normally distributed]] with means of 2 [[gram|g]]/[[decilitre|dL]] and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the threshold (black vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve. The actual shape of the curve is determined by how much overlap the two distributions have.
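This binormal example can be sketched in a few lines. Here the means (2 g/dL diseased, 1 g/dL healthy) come from the example above, while the unit standard deviations are an added assumption for illustration:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Score densities: diseased ~ N(2, 1), healthy ~ N(1, 1) (unit standard
# deviations assumed; only the means come from the example above).
thresholds = np.linspace(-3.0, 6.0, 500)
tpr = norm.sf(thresholds, loc=2.0, scale=1.0)  # P(diseased level >= T)
fpr = norm.sf(thresholds, loc=1.0, scale=1.0)  # P(healthy level >= T)
# (fpr, tpr) traces the ROC curve parametrically as the threshold T
# varies; raising T reduces both rates, moving down-left along the curve.
</syntaxhighlight>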
 
==Further interpretations==
 
Sometimes, the ROC is used to generate a summary statistic. Common versions are:
* the intercept of the ROC curve with the line at 90 degrees to the no-discrimination line (also called [[Youden's J statistic]])
* the area between the ROC curve and the no-discrimination line {{Citation needed|date=March 2010}}
* the area under the ROC curve, or "AUC" ("Area Under Curve"), or A' (pronounced "a-prime"),<ref>{{cite conference |first1=James |last1=Fogarty |first2=Ryan S. |last2=Baker |first3=Scott E. |last3=Hudson |year=2005 |title=Case studies in the use of ROC curve analysis for sensor-based estimates in human computer interaction |booktitle=ACM International Conference Proceeding Series, Proceedings of Graphics Interface 2005 |publisher=Canadian Human-Computer Communications Society |location=Waterloo, ON |url=http://portal.acm.org/citation.cfm?id=1089530 }}</ref> or "c-statistic".<ref>{{cite book |first1=Trevor |last1=Hastie|authorlink1=Trevor Hastie|first2=Robert |last2=Tibshirani |authorlink2=Robert Tibshirani|first3=Jerome H. |last3=Friedman |year=2009 |title=The elements of statistical learning: data mining, inference, and prediction |edition=2nd}}</ref>
* [[d']] (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal-alone conditions, divided by their [[standard deviation]], under the assumption that both these distributions are [[normal distribution|normal]] with the same standard deviation. Under these assumptions, it can be proved that the shape of the ROC depends only on [[d']].
 
However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm.
 
===Area under the curve===
When using normalized units, the area under the curve (often referred to as simply the AUC, or AUROC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').<ref>Fawcett, Tom (2006); ''An introduction to ROC analysis'', Pattern Recognition Letters, 27, 861–874.</ref> This can be seen as follows: the area under the curve is given by (the integration limits are reversed, since a large <math>T</math> corresponds to a lower value on the ''x''-axis)
:<math> A = \int_{\infty}^{-\infty} y(T) \, x'(T) \, dT = \int_{\infty}^{-\infty} \mathit{TPR}(T) \, \mathit{FPR}'(T) \, dT = \int_{-\infty}^{\infty} \mathit{TPR}(T) \, P_0(T) \, dT = \langle \mathit{TPR} \rangle ,</math>
where the angular brackets denote the average over the distribution of scores of the negative samples.
 
It can further be shown that the AUC is closely related to the [[Mann–Whitney U]],<ref name="Hanley">{{cite journal |last1=Hanley |first1=James A. |last2=McNeil |first2=Barbara J. |journal=Radiology |number=1 |pages=29–36 |title=The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve |volume=143 |year=1982 |pmid=7063747 }}</ref><ref name="Mason">{{cite journal |last1=Mason |first1=Simon J. |last2=Graham |first2=Nicholas E. |year=2002 |title=Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: Statistical significance and interpretation |journal=Quarterly Journal of the Royal Meteorological Society |issue=128 |pages=2145–2166 |url=http://www.inmet.gov.br/documentos/cursoI_INMET_IRI/Climate_Information_Course/References/Mason+Graham_2002.pdf }}</ref> which tests whether positives are ranked higher than negatives. It is also equivalent to the [[Mann–Whitney U|Wilcoxon rank-sum test]].<ref name="Mason" /> The AUC is related to the [[Gini coefficient]] (<math>G_1</math>) by the formula <math>G_1 = 2 \mathit{AUC} - 1</math>, where:
 
:<math>G_1 = 1 - \sum_{k=1}^n (X_{k} - X_{k-1}) (Y_k + Y_{k-1})</math><ref>Hand, David J.; and Till, Robert J. (2001); ''A simple generalization of the area under the ROC curve for multiple class classification problems'', Machine Learning, 45, 171–186.</ref>
 
In this way, it is possible to calculate the AUC by using an average of a number of trapezoidal approximations.
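Both views of the AUC can be written down in a few lines of code. The sketch below (illustrative; function names are ours) computes the trapezoidal approximation from ROC points and, equivalently, the rank-based Mann–Whitney estimate from the raw scores:

<syntaxhighlight lang="python">
import numpy as np

def auc_trapezoid(fpr, tpr):
    """Trapezoidal approximation of the area under the ROC curve.
    The points must be ordered by increasing FPR."""
    return np.trapz(tpr, fpr)

def auc_rank(pos_scores, neg_scores):
    """Rank-based (Mann-Whitney) estimate: the fraction of
    positive/negative pairs in which the positive outscores the
    negative, counting ties as one half."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()
</syntaxhighlight>

The Gini coefficient then follows as <code>g1 = 2 * auc - 1</code>, per the formula above.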
 
It is also common to calculate the Area Under the ROC Convex Hull (ROC AUCH = ROCH AUC), as any point on the line segment between two prediction results can be achieved by randomly using one or the other system with probabilities proportional to the relative length of the opposite component of the segment.<ref>{{cite journal |first1=F. |last1=Provost |first2=T. |last2=Fawcett |title=Robust classification for imprecise environments |journal=Machine Learning |volume=44 |pages=203–231 |year=2001}}</ref> It is also possible to invert concavities – just as in the figure the worse solution can be reflected to become a better solution; concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit the data.<ref>{{cite conference |first1=P.A. |last1=Flach |first2=S. |last2=Wu |year=2005 |title=Repairing concavities in ROC curves |booktitle=19th International Joint Conference on Artificial Intelligence (IJCAI'05) |pages=702–707}}</ref>
 
The [[machine learning]] community most often uses the ROC AUC statistic for model comparison.<ref>{{cite journal |last1=Hanley |first1=James A. |last2=McNeil |first2=Barbara J. |title=A method of comparing the areas under receiver operating characteristic curves derived from the same cases |journal=Radiology |date=1983-09-01 |volume=148 |issue=3 |pages=839–843 |url=http://radiology.rsnajnls.org/cgi/content/abstract/148/3/839 |accessdate=2008-12-03 |pmid=6878708 }}</ref> However, this practice has recently been questioned based upon new machine learning research that shows that the AUC is quite noisy as a classification measure<ref name="Hanczar2010">Hanczar, Blaise; Hua, Jianping; Sima, Chao; Weinstein, John; Bittner, Michael; and Dougherty, Edward R. (2010); ''Small-sample precision of ROC-related estimates'', Bioinformatics 26 (6): 822–830</ref> and has some other significant problems in model comparison.<ref name="Lobo2008">Lobo, Jorge M.; Jiménez-Valverde, Alberto; and Real, Raimundo (2008), ''AUC: a misleading measure of the performance of predictive distribution models'', Global Ecology and Biogeography, 17: 145–151</ref><ref name="Hand2009">Hand, David J. (2009); ''Measuring classifier performance: A coherent alternative to the area under the ROC curve'', Machine Learning, 77: 103–123</ref> A reliable and valid AUC estimate can be interpreted as the probability that the classifier will assign a higher score to a randomly chosen positive example than to a randomly chosen negative example. However, the critical research<ref name="Hanczar2010" /><ref name="Lobo2008" /> suggests frequent failures in obtaining reliable and valid AUC estimates. Thus, the practical value of the AUC measure has been called into question,<ref name="Hand2009" /> raising the possibility that the AUC may actually introduce more uncertainty into machine learning classification accuracy comparisons than resolution.
Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution,<ref name="Flachetal2011">{{cite conference |first1=P.A. |last1=Flach |first2=J. |last2=Hernandez-Orallo |first3=C. |last3=Ferri |year=2011 |title=A coherent interpretation of AUC as a measure of aggregated classification performance |booktitle=Proceedings of the 28th International Conference on Machine Learning (ICML-11) |pages=657–664 |url=http://www.icml-2011.org/papers/385_icmlpaper.pdf}}</ref> and AUC has been linked to a number of other performance metrics such as the [[Brier score]].<ref name="hernandez2012unified ">{{cite journal |first1=J. |last1=Hernandez-Orallo |first2=P.A. |last2=Flach |first3=C. |last3=Ferri |year=2012 |title=A unified view of performance metrics: translating threshold choice into expected classification loss |journal=Journal of Machine Learning Research |volume=13 |pages=2813–2869 |url=http://jmlr.org/papers/volume13/hernandez-orallo12a/hernandez-orallo12a.pdf}}</ref>
 
One recent explanation of the problem with ROC AUC is that reducing the ROC curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted, and not the performance of an individual system, and that it ignores the possibility of concavity repair; related alternative measures such as Informedness<ref name="Powers2007">{{cite journal |first=David M W |last=Powers |date=2007/2011 |title=Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation |journal=Journal of Machine Learning Technologies |volume=2 |issue=1 |pages=37–63 |url=http://www.bioinfo.in/uploadfiles/13031311552_1_1_JMLT.pdf}}</ref> or DeltaP are therefore recommended.<ref>{{cite conference |first=David M.W. |last=Powers |title=The Problem of Area Under the Curve |booktitle=International Conference on Information Science and Technology |year=2012}}</ref> These measures are essentially equivalent to the Gini for a single prediction point, with DeltaP' = Informedness = 2AUC−1, whilst DeltaP = Markedness represents the dual (viz. predicting the prediction from the real class) and their geometric mean is the [[Matthews correlation coefficient]].<ref name="Powers2007"/>
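For a single prediction point, these quantities are straightforward to compute from the confusion matrix. A minimal sketch (the function name is ours; non-zero denominators are assumed, and the square root assumes both factors are non-negative):

<syntaxhighlight lang="python">
def informedness_markedness(tp, fp, fn, tn):
    tpr = tp / (tp + fn)          # sensitivity
    tnr = tn / (tn + fp)          # specificity
    ppv = tp / (tp + fp)          # precision
    npv = tn / (tn + fn)
    informedness = tpr + tnr - 1  # DeltaP' = 2*AUC - 1 at a single point
    markedness = ppv + npv - 1    # DeltaP, the dual measure
    # The Matthews correlation coefficient is the geometric mean of the
    # two (sign handling omitted; both assumed non-negative here).
    mcc = (informedness * markedness) ** 0.5
    return informedness, markedness, mcc
</syntaxhighlight>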
 
===Other measures===
 
In engineering, the area between the ROC curve and the no-discrimination line is sometimes preferred (equivalent to subtracting 0.5 from the AUC), and referred to as the '''discrimination'''.{{Citation needed|date=October 2013}} In [[psychophysics]], the Sensitivity Index '''[[d']] (d-prime)''', ΔP' or DeltaP' is the most commonly used measure<ref>{{cite journal |first1=P. |last1=Perruchet |first2=R. |last2=Peereman |year=2004 |title=The exploitation of distributional information in syllable processing |journal=J. Neurolinguistics |volume=17 |pages=97−119}}</ref> and is equivalent to twice the discrimination, being equal also to Informedness, deskewed WRAcc and Gini Coefficient in the single point case (single parameterization or single system).<ref name="Powers2007"/>  These measures all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and -1 represents the "perverse" case of full informedness used to always give the wrong response.<ref>{{cite conference|first=David M. W. |last=Powers |year=2003 |title= Recall and Precision versus the Bookmaker |booktitle=Proceedings of the International Conference on Cognitive Science (ICSC- 2003), Sydney Australia, 2003, pp.529-534. | url=http://dl.dropbox.com/u/27743223/200302-ICCS-Bookmaker.pdf }}</ref>
 
These varying choices of scale are fairly arbitrary since chance performance always has a fixed value: for AUC it is 0.5, but these alternative scales bring chance performance to 0 and allow them to be interpreted as Kappa statistics. Informedness has been shown to have desirable characteristics for Machine Learning versus other common definitions of Kappa such as [[Cohen's kappa|Cohen Kappa]] and [[Fleiss' kappa|Fleiss Kappa]].<ref name="Powers2007"/><ref>{{cite conference |first=David M. W. |last=Powers |year=2012 |url=http://dl.dropbox.com/u/27743223/201209-eacl2012-Kappa.pdf |title=The Problem with Kappa |booktitle=Conference of the European Chapter of the Association for Computational Linguistics (EACL2012) Joint ROBUS-UNSUP Workshop}}</ref>
 
Sometimes it can be more useful to look at a specific region of the ROC Curve rather than at the whole curve. It is possible to compute partial AUC.<ref>{{cite journal |doi=10.1177/0272989X8900900307 |volume=9 |issue=3 |pages=190–195 |last=McClish |first=Donna Katzman |title=Analyzing a Portion of the ROC Curve |journal=Medical Decision Making |accessdate=2008-09-29 |date=1989-08-01 |url=http://mdm.sagepub.com/cgi/content/abstract/9/3/190 |pmid=2668680 }}</ref> For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests.<ref>{{cite journal |doi=10.1111/1541-0420.00071 |volume=59 |issue=3 |pages=614–623 |last1=Dodd |first1=Lori E. | first2=Margaret S. |last2=Pepe |title=Partial AUC Estimation and Regression |journal=Biometrics |accessdate=2007-12-18 |year=2003 |url=http://www.blackwell-synergy.com/doi/abs/10.1111/1541-0420.00071 |pmid=14601762 }}</ref> Another common approach for classification problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.<ref>Karplus, Kevin (2011); [http://www.soe.ucsc.edu/~karplus/papers/better-than-chance-sep-07.pdf ''Better than Chance: the importance of null models''], University of California, Santa Cruz, in Proceedings of the First International Workshop on Pattern Recognition in Proteomics, Structural Biology and Bioinformatics (PR PS BB 2011)</ref>
 
==Detection error tradeoff graph==
[[File:Example of DET curves.png|thumb|Example DET graph]]
An alternative to the ROC curve is the [[detection error tradeoff]] (DET) graph, which plots the false negative rate (missed detections) vs. the false positive rate (false alarms) on non-linearly transformed ''x''- and ''y''-axes. The transformation function is the quantile function of the normal distribution, i.e., the inverse of the cumulative normal distribution. It is, in fact, the same transformation as zROC, below, except that the complement of the hit rate – the miss rate or false negative rate – is used. This alternative spends more graph area on the region of interest, since most of the ROC area is of little interest: one primarily cares about the region tight against the ''y''-axis and the top left corner, which, because the miss rate is used instead of its complement, the hit rate, corresponds to the lower left corner in a DET plot. The DET plot is used extensively in the automatic speaker recognition community, where the name DET was first used. The analysis of ROC performance in graphs with this warping of the axes was used by psychologists in perception studies around the middle of the 20th century, where it was dubbed "double probability paper".
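In code, the DET axes are obtained by pushing both error rates through the normal quantile function. A minimal sketch (illustrative; rates must lie strictly between 0 and 1, since the quantile function diverges at the endpoints):

<syntaxhighlight lang="python">
from scipy.stats import norm

def det_coordinates(fpr, fnr):
    """Map false positive and false negative rates onto DET axes
    using the probit (inverse cumulative normal) transform."""
    return norm.ppf(fpr), norm.ppf(fnr)

# e.g. starting from ROC quantities: fnr = 1 - tpr
</syntaxhighlight>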
 
==Z-transformation==
 
If a [[standard score|z-transform]]ation is applied to the ROC curve, the curve will be transformed into a straight line.<ref>{{cite book |first1=Neil A. |last1=MacMillan |first2=C. Douglas |last2=Creelman |edition=2nd |year=2005 |title=Detection Theory: A User's Guide |publisher=Lawrence Erlbaum Associates |location=Mahwah, NJ |isbn=1-4106-1114-0 }}</ref> This z-transformation is based on a normal distribution with a mean of zero and a standard deviation of one. In memory [[strength theory]], one must assume that the zROC is not only linear, but has a slope of 1.0. The normal distributions of the strengths of targets (studied objects that the subjects need to recall) and lures (non-studied objects that the subjects should recognize as new) are what cause the zROC to be linear.
 
The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0. In most studies, it has been found that zROC slopes consistently fall below 1, usually between 0.5 and 0.9.<ref>{{cite journal |last=Glanzer |first=Murray |first2=Kim |last2=Kisok |last3=Hilford |first3=Andy |last4=Adams |first4=John K. |title=Slope of the receiver-operating characteristic in recognition memory |journal=Journal of Experimental Psychology: Learning, Memory, and Cognition |year=1999 |volume=25 |issue=2 |pages=500–513 }}</ref> Many experiments yielded a zROC slope of 0.8. A slope of 0.8 implies that the variability of the target strength distribution is 25% larger than the variability of the lure strength distribution.<ref>{{cite journal |last1=Ratcliff |first1=Roger |last2=McKoon |first2=Gail |last3=Tindall |first3=Michael |title=Empirical generality of data from recognition memory ROC functions and implications for GMMs |journal=Journal of Experimental Psychology: Learning, Memory, and Cognition |year=1994 |volume=20 |pages=763–785 }}</ref>
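The slope relation can be verified numerically. The sketch below assumes Gaussian strength distributions with the lure distribution standardized (all parameter values are illustrative): with targets 25% more variable than lures, the fitted zROC slope comes out near 0.8.

<syntaxhighlight lang="python">
# With Gaussian strengths, z(TPR) = (mu_t - c)/sd_t is linear in
# z(FPR) = (mu_l - c)/sd_l, so the zROC slope is sd_lure / sd_target.
import numpy as np
from scipy.stats import norm

mu_lure, sd_lure = 0.0, 1.0
mu_target, sd_target = 1.0, 1.25        # targets 25% more variable
criteria = np.linspace(-3, 4, 200)      # response criteria c
z_fpr = norm.ppf(norm.sf(criteria, mu_lure, sd_lure))
z_tpr = norm.ppf(norm.sf(criteria, mu_target, sd_target))
slope = np.polyfit(z_fpr, z_tpr, 1)[0]  # ~ sd_lure/sd_target = 0.8
</syntaxhighlight>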
 
Another variable used is&nbsp;[[D prime|''d<nowiki>'</nowiki>'' (d prime)]] (discussed above in "Other measures"), which can easily be expressed in terms of z-values. Although ''d''<nowiki>'</nowiki>&nbsp;is a commonly used parameter, it must be recognized that it is only relevant when strictly adhering to the very strong assumptions of strength theory made above.<ref>{{cite journal |last=Zhang |first=Jun |last2=Mueller |first2=Shane T. |title=A note on ROC analysis and non-parametric estimate of sensitivity |journal=Psychometrika |year=2005 |volume=70 |pages=203–212 }}</ref>
 
The z-transformation of a ROC curve is linear, as assumed, except in special situations. The [[Yonelinas familiarity-recollection model]] is a two-dimensional account of recognition memory. Instead of the subject simply answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the original ROC curve. What changes, though, is a parameter for recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, the zROC would have a predicted slope of 1. However, when the recollection component is added, the zROC curve will be concave up with a decreased slope. This difference in shape and slope results from an added element of variability due to some items being recollected. Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close to 1.0.<ref>{{cite journal |last1=Yonelinas |first1=Andrew P. |last2=Kroll |first2=Neal E. A. |last3=Dobbins |first3=Ian G. |last4=Lazzara |first4=Michele |last5=Knight |first5=Robert T. |title=Recollection and familiarity deficits in amnesia: Convergence of remember-know, process dissociation, and receiver operating characteristic data |journal=Neuropsychology |year=1998 |volume=12 |pages=323–339 }}</ref>
 
==History==
The ROC curve was first used during [[World War II]] for the analysis of [[radar|radar signals]] before it was employed in [[signal detection theory]].<ref name="green66">{{cite book |first1=David M. |last1=Green |first2=John A. |last2=Swets |title=Signal detection theory and psychophysics |publisher=John Wiley and Sons Inc. |year=1966 |location=New York, NY |isbn=0-471-32420-5 }}</ref> Following the [[attack on Pearl Harbor]] in 1941, the United States Army began new research to improve the detection of Japanese aircraft from their radar signals.
 
In the 1950s, ROC curves were employed in [[psychophysics]] to assess human (and occasionally non-human animal) detection of weak signals.<ref name="green66" /> In [[medicine]], ROC analysis has been extensively used in the evaluation of [[diagnostic test]]s.<ref>{{cite journal |first1=Mark H. |last1=Zweig |first2=Gregory |last2=Campbell |journal=Clinical Chemistry |pmid=8472349 |title=Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine |volume=39 |issue=4 |year=1993 |pages=561–577 |url=http://www.clinchem.org/content/39/4/561.full.pdf }}</ref><ref>{{cite book |first=Margaret S. |last=Pepe |title=The statistical evaluation of medical tests for classification and prediction |location=New York, NY |publisher=Oxford |year=2003 |isbn=0-19-856582-8 }}</ref> ROC curves are also used extensively in [[epidemiology]] and [[medical research]] and are frequently mentioned in conjunction with [[evidence-based medicine]]. In [[radiology]], ROC analysis is a common technique to evaluate new radiology techniques.<ref>{{cite journal |first=Nancy A. |last=Obuchowski |title=Receiver operating characteristic curves and their use in radiology |pmid=14519861 |journal=Radiology |volume=229 |issue=1 |year=2003 |pages=3–8 |doi=10.1148/radiol.2291010898 }}</ref> In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models.
 
ROC curves also proved useful for the evaluation of [[machine learning]] techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification [[algorithm]]s.<ref>{{cite conference |last=Spackman |first=Kent A. |year=1989 |title=Signal detection theory: Valuable tools for evaluating inductive learning |booktitle=Proceedings of the Sixth International Workshop on Machine Learning |location=San Mateo, CA |pages=160–163 |publisher=[[Morgan Kaufmann]] }}</ref>
 
==ROC curves beyond binary classification==
The extension of ROC curves for classification problems with more than two classes has always been cumbersome, as the degrees of freedom increase quadratically with the number of classes, and the ROC space has <math>c(c-1)</math> dimensions, where <math>c</math> is the number of classes.<ref name="Srinivasan99">{{cite conference |first=A. |last=Srinivasan |year=1999 |title=Note on the Location of Optimal Classifiers in N-dimensional ROC Space |booktitle=Technical Report PRG-TR-2-99, Oxford University Computing Laboratory, Wolfson Building, Parks Road, Oxford}}</ref> Some approaches have been proposed for the particular case with three classes (three-way ROC).<ref name="Mossman99">{{cite journal |first=D. |last=Mossman |year=1999 |title=Three-way ROCs |journal=Medical Decision Making |volume=19 |pages=78–89}}</ref> The calculation of the volume under the ROC surface (VUS) has been analyzed and studied as a performance metric for multi-class problems.<ref name="Ferri03">{{cite conference |first1=C. |last1=Ferri |first2=J. |last2=Hernandez-Orallo |first3=M.A. |last3=Salido |year=2003 |title=Volume under the ROC Surface for Multi-class Problems |booktitle=Machine Learning: ECML 2003 |pages=108–120}}</ref> However, because of the complexity of approximating the true VUS, some other approaches<ref name="HandTill01">{{cite journal |first1=D.J. |last1=Hand |first2=R.J. |last2=Till |year=2001 |title=A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems |journal=Machine Learning |volume=45 |pages=171–186}}</ref> based on an extension of AUC are more popular as an evaluation metric.
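One such AUC extension, in the spirit of Hand and Till (2001), averages pairwise AUC values over all unordered pairs of classes. The sketch below is an illustrative Python rendering of that idea (the function name and array layout are ours):

<syntaxhighlight lang="python">
from itertools import combinations
import numpy as np

def multiclass_auc(scores, labels, classes):
    """Average the pairwise AUC A(i, j) over all unordered class pairs.
    scores -- (n_samples, n_classes) array of class membership scores
    labels -- per-sample true class labels, drawn from `classes`
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)

    def auc(pos, neg):
        # Probability a random positive outscores a random negative.
        pos, neg = pos[:, None], neg[None, :]
        return (pos > neg).mean() + 0.5 * (pos == neg).mean()

    pairs = list(combinations(range(len(classes)), 2))
    total = 0.0
    for i, j in pairs:
        # A(i|j): separability of class i from class j using column i,
        # A(j|i): the same with the roles (and score column) swapped.
        a_ij = auc(scores[labels == classes[i], i],
                   scores[labels == classes[j], i])
        a_ji = auc(scores[labels == classes[j], j],
                   scores[labels == classes[i], j])
        total += 0.5 * (a_ij + a_ji)
    return total / len(pairs)
</syntaxhighlight>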
 
Given the success of ROC curves for the assessment of classification models, the extension of ROC curves for other supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression error characteristic (REC) curves<ref name="bij2003regression">{{cite conference |first1=J. |last1=Bi |first2=K.P. |last2=Bennett |year=2003 |title=Regression error characteristic curves |booktitle=Twentieth International Conference on Machine Learning (ICML-2003), Washington, DC}}</ref> and the regression ROC (RROC) curves.<ref name="hernandez2013rroc">{{cite journal |first=J. |last=Hernandez-Orallo |year=2013 |title=ROC curves for regression |journal=Pattern Recognition |volume=46 |issue=12 |pages=3395–3411}}</ref> In the latter, RROC curves turn out to be extremely similar to ROC curves for classification, with the notions of asymmetry, dominance and convex hull carrying over. Moreover, the area under RROC curves is proportional to the error variance of the regression model.
 
==See also==
{{Commons category|Receiver operating characteristic}}
* [[F1 score]]
* [[Brier score]]
* [[Coefficient of determination]]
* [[Constant false alarm rate]]
* [[Detection theory]]
* [[False alarm]]
* [[Gain (information retrieval)]]
* [[Precision and recall]]
 
==References==
{{Reflist|2}}
 
===General references===
* {{cite book |first1=Xiao-Hua |last1=Zhou |first2=Nancy A. |last2=Obuchowski |first3=Donna K. |last3=McClish |year=2002 |title=Statistical Methods in Diagnostic Medicine |publisher=Wiley & Sons |location=New York, NY| isbn=978-0-471-34772-9 }}
 
==Further reading==
* Balakrishnan, Narayanaswamy (1991); ''Handbook of the Logistic Distribution'', Marcel Dekker, Inc., ISBN 978-0-8247-8587-1
* Brown, Christopher D.; and Davis, Herbert T. (2006); ''[http://dx.doi.org/10.1016/j.chemolab.2005.05.004 Receiver operating characteristic curves and related decision measures: a tutorial]'', Chemometrics and Intelligent Laboratory Systems, '''80''':24–38
* Fawcett, Tom (2004); ''[http://home.comcast.net/~tom.fawcett/public_html/papers/ROC101.pdf ROC Graphs: Notes and Practical Considerations for Researchers]'', Pattern Recognition Letters, '''27'''(8):882–891.
* Gonen, Mithat (2007); ''Analyzing Receiver Operating Characteristic Curves Using SAS'', SAS Press, ISBN 978-1-59994-298-8
* Green, William H., (2003) ''Econometric Analysis'', fifth edition, [[Prentice Hall]], ISBN 0-13-066189-9
* Heagerty, Patrick J.; Lumley, Thomas; and Pepe, Margaret S. (2000); ''Time-dependent ROC Curves for Censored Survival Data and a Diagnostic Marker'', Biometrics, '''56''':337–344
* Hosmer, David W.; and Lemeshow, Stanley (2000); ''Applied Logistic Regression'', 2nd ed., New York, NY: [[John Wiley & Sons|Wiley]], ISBN 0-471-35632-8
* Lasko, Thomas A.; Bhagwat, Jui G.; Zou, Kelly H.; and Ohno-Machado, Lucila (2005); [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.9674&rep=rep1&type=pdf&ei=GpRGT_juOo3H0AH3quCqDg&usg=AFQjCNHvAiRwGwk8mRE7sMtPEOKXClmCsA&cad=rja ''The use of receiver operating characteristic curves in biomedical informatics''], Journal of Biomedical Informatics, 38(5):404–415
* Stephan, Carsten; Wesseling, Sebastian; Schink, Tania; and Jung, Klaus (2003); [http://www.clinchem.org/content/49/3/433.abstract ''Comparison of Eight Computer Programs for Receiver-Operating Characteristic Analysis''], Clinical Chemistry, '''49''':433–439
* Swets, John A.; Dawes, Robyn M.; and Monahan, John (2000); ''Better Decisions through Science'', [[Scientific American]], October, pp.&nbsp;82–87
* Zou, Kelly H.; O'Malley, A. James; Mauri, Laura (2007); [http://circ.ahajournals.org/content/115/5/654.full ''Receiver-operating characteristic analysis for evaluating diagnostic tests and predictive models''], Circulation, 115(5):654–7
 
==External links==
{{External links|date=March 2010}}
* [http://www.spl.harvard.edu/archive/spl-pre2007/pages/ppl/zou/roc.html Kelly H. Zou's bibliography of ROC literature and articles]
* [http://home.comcast.net/~tom.fawcett/public_html/ROCCH/index.html Tom Fawcett's ROC Convex Hull: tutorial, program and papers]
* [http://www.cs.bris.ac.uk/~flach/ICML04tutorial/index.html Peter Flach's tutorial on ROC analysis in machine learning]
* [http://www.anaesthetist.com/mnm/stats/roc/ The magnificent ROC] – An explanation and interactive demonstration of the connection of ROCs to archetypal bi-normal test result plots
* [http://www.rad.jhmi.edu/jeng/javarad/roc/ Web-based calculator for ROC Curves] – by [http://www.rad.jhmi.edu/jeng/javarad/ John Eng]
* [http://www.cs.ucl.ac.uk/staff/W.Langdon/roc/ Convex Hull, cost trade off, etc]
 
{{DEFAULTSORT:Receiver Operating Characteristic}}
[[Category:Detection theory]]
[[Category:Data mining]]
[[Category:Socioeconomics]]
[[Category:Biostatistics]]
[[Category:Statistical classification]]
[[Category:Summary statistics for contingency tables]]
 
{{Portal|Statistics}}
{{Statistics|state=expanded}}
