'''Binary''' or '''binomial classification''' is the task of [[Statistical classification|classifying]] the members of a given [[Set (mathematics)|set]] of objects into two groups on the basis of whether they have some [[property]] or not. Some typical binary classification tasks are:

* medical testing to determine whether a patient has a certain disease or not (the classification property is the disease)
* quality control in factories, i.e. deciding whether a new product is good enough to be sold or whether it should be discarded (the classification property is being good enough)
* deciding whether a page or an article should be in the result set of a search or not (the classification property is the relevance of the article - typically the presence of a certain word in it)

[[Statistical classification]] in general is one of the problems studied in [[computer science]], with the aim of automatically learning classification systems; methods suitable for learning binary classifiers include [[Decision tree learning|decision trees]], [[Bayesian network]]s, [[support vector machine]]s, and [[neural network]]s.
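
Whatever the learning method, the workflow is broadly the same: fit a model to labeled examples, then let it assign new objects to one of the two groups. The following is a minimal sketch using the scikit-learn library; the feature values and labels are made up for illustration and do not come from any of the tasks above.

<syntaxhighlight lang="python">
# A minimal sketch of learning a binary classifier with scikit-learn;
# the feature values and labels below are synthetic examples.
from sklearn.tree import DecisionTreeClassifier

# Each row describes one object with two numeric features;
# each label is 1 (has the property) or 0 (does not).
X = [[5.1, 0.2], [4.9, 0.3], [6.3, 1.8], [6.5, 2.0]]
y = [0, 0, 1, 1]

classifier = DecisionTreeClassifier()
classifier.fit(X, y)

# Classify a previously unseen object.
print(classifier.predict([[6.0, 1.9]]))  # -> [1]
</syntaxhighlight>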

Sometimes, classification tasks are trivial. Given 100 balls, some of them red and some blue, a human with normal color vision can easily separate them into red ones and blue ones. However, some tasks, such as those in practical medicine and those of interest from a computer science point of view, are far from trivial, and may produce faulty results if executed imprecisely.

==Hypothesis testing==
In traditional [[statistical hypothesis testing]], the tester starts with a [[null hypothesis]] and an [[alternative hypothesis]], performs an experiment, and then decides whether to reject the null hypothesis in favour of the alternative. Hypothesis testing is therefore a binary classification of the hypothesis under study.

A positive or [[statistically significant]] result is one which rejects the null hypothesis.  Doing this when the null hypothesis is in fact true - a false positive - is a [[type I error]]; doing this when the null hypothesis is false results in a true positive. A negative or not statistically significant result is one which does not reject the null hypothesis.  Doing this when the null hypothesis is in fact false - a false negative - is a [[type II error]]; doing this when the null hypothesis is true results in a true negative.
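
To make the correspondence concrete, the sketch below (with an assumed sample size, effect size, and significance level, using NumPy and SciPy) repeatedly runs a one-sample ''t''-test and tallies how often each error type occurs:

<syntaxhighlight lang="python">
# Hypothesis testing as binary classification: estimate type I and
# type II error rates by simulation (all numbers here are assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 10_000

# H0 "mean = 0" is true; rejecting it is a false positive (type I).
type_i = sum(stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue < alpha
             for _ in range(trials))

# H0 is false (true mean 0.5); failing to reject is a false negative (type II).
type_ii = sum(stats.ttest_1samp(rng.normal(0.5, 1.0, n), 0.0).pvalue >= alpha
              for _ in range(trials))

print(f"type I rate  ~ {type_i / trials:.3f}")   # close to alpha
print(f"type II rate ~ {type_ii / trials:.3f}")
</syntaxhighlight>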

==Evaluation of binary classifiers==
:''See also: [[sensitivity and specificity]], [[precision and recall]]''
[[Image:binary-classification-labeled.svg|thumb|220px|right|From the [[confusion matrix]] you can derive four basic measures]]
To measure the performance of a medical test, the concepts of [[sensitivity (tests)|sensitivity]] and [[Specificity (tests)|specificity]] are often used; these concepts are readily usable for the evaluation of any binary classifier. Say we test some people for the presence of a disease. Some of these people have the disease, and our test says they are positive. They are called ''true positives'' (TP). Some have the disease, but the test claims they don't. They are called ''false negatives'' (FN). Some don't have the disease, and the test says they don't - ''true negatives'' (TN). Finally, there might be healthy people who have a positive test result - ''false positives'' (FP). Thus, the numbers of true positives, false negatives, true negatives, and false positives add up to 100% of the set.

'''Sensitivity''' (TPR) is the proportion of people who tested positive (TP) among all those who actually are positive (TP+FN). It can be seen as ''the probability that the test is positive given that the patient is sick''. With higher sensitivity, fewer actual cases of disease go undetected (or, in the case of factory quality control, fewer faulty products reach the market).

'''Specificity''' (TNR) is the proportion of people who tested negative (TN) among all those who actually are negative (TN+FP). As with sensitivity, it can be looked at as ''the probability that the test result is negative given that the patient is not sick''. With higher specificity, fewer healthy people are labeled as sick (or, in the factory case, less money is lost by discarding good products instead of selling them).
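
Both rates follow directly from the four counts. A minimal sketch, using made-up conditions and test outcomes:

<syntaxhighlight lang="python">
# Derive TP, FN, TN, FP and the two rates from parallel lists of
# actual conditions and test outcomes (synthetic example data).
actual    = [1, 1, 1, 0, 0, 0, 0, 1]   # 1 = sick, 0 = healthy
predicted = [1, 0, 1, 0, 0, 1, 0, 1]   # 1 = test positive

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
print(sensitivity, specificity)
</syntaxhighlight>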

The relationship between sensitivity and specificity, as well as the performance of the classifier, can be visualized and studied using [[Receiver_Operating_Characteristic|the ROC curve]].
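
The curve can be traced by sweeping a decision threshold over the classifier's continuous scores and recording the false and true positive rates at each step. A sketch with made-up scores:

<syntaxhighlight lang="python">
# Trace ROC points (FPR, TPR) by sweeping a threshold over scores
# (scores and labels are synthetic examples).
scores = [0.1, 0.3, 0.35, 0.5, 0.6, 0.8, 0.9]   # classifier outputs
labels = [0,   0,   1,    0,   1,   1,   1]     # actual conditions

positives = sum(labels)
negatives = len(labels) - positives

roc = []
for threshold in sorted(scores, reverse=True):
    tp = sum(s >= threshold and l == 1 for s, l in zip(scores, labels))
    fp = sum(s >= threshold and l == 0 for s, l in zip(scores, labels))
    roc.append((fp / negatives, tp / positives))

print(roc)
</syntaxhighlight>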


In theory, sensitivity and specificity are independent in the sense that it is possible to achieve 100% in both (such as in the red/blue ball example given above). In more practical, less contrived instances, however, there is usually a trade-off, such that they are inversely proportional to one another to some extent. This is because we rarely measure the actual thing we would like to classify; rather, we generally measure an indicator of the thing we would like to classify, referred to as a [[surrogate endpoint|surrogate marker]]. The reason 100% is achievable in the ball example is that redness and blueness are determined by directly detecting redness and blueness. However, indicators are sometimes compromised, such as when non-indicators mimic indicators or when indicators are time-dependent, only becoming evident after a certain lag time. The following example of a pregnancy test will make use of such an indicator.

Modern pregnancy tests ''do not'' use the pregnancy itself to determine pregnancy status; rather, [[human chorionic gonadotropin]] (hCG), which is present in the urine of [[gravid]] females, is used as a ''surrogate marker'' to indicate that a woman is pregnant. Because hCG can also be produced by a [[neoplasm|tumor]], the specificity of modern pregnancy tests cannot be 100% (in that false positives are possible). Also, because hCG is present in the urine in such small concentrations after fertilization and early [[embryogenesis]], the sensitivity of modern pregnancy tests cannot be 100% (in that false negatives are possible).

In addition to sensitivity and specificity, the performance of a binary classification test can be measured with '''[[positive predictive value|positive]] (PPV) and [[negative predictive value]]s (NPV)'''. The positive predictive value answers the question "If the test result is ''positive'', how well does that ''predict'' an actual presence of disease?". It is calculated as (true positives) / (true positives + false positives); that is, it is the proportion of true positives out of all positive results. (The negative predictive value is the same, but for negatives, naturally.)

There is one crucial difference between the two concepts: sensitivity and specificity are independent of the population in the sense that they do not change depending on the proportion of positives and negatives tested. Indeed, the sensitivity of the test can be determined by testing ''only'' positive cases. However, the predictive values are dependent on the population.
===Example===
As an example, suppose there is a test for a disease with 99% sensitivity and 99% specificity, and that 2000 people are tested: 1000 of them are sick and 1000 are healthy. About 990 true positives and 990 true negatives are likely, with 10 false positives and 10 false negatives. The positive and negative predictive values would be 99%, so there can be high confidence in the result.
 
However, if only 100 of the 2000 people are really sick, the likely result is 99 true positives, 1 false negative, 1881 true negatives and 19 false positives. Of the 19+99 people who tested positive, only 99 really have the disease; intuitively, this means that given a positive test result, there is only about an 84% chance that the patient really has the disease. On the other hand, given a negative test result, there is only 1 chance in 1882, or about 0.05% probability, that the patient has the disease despite the test.
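
The arithmetic of both scenarios can be checked directly; the helper function below is purely illustrative:

<syntaxhighlight lang="python">
# Predictive values for the two scenarios above, assuming a test
# with 99% sensitivity and 99% specificity.
def predictive_values(sick, healthy, sensitivity=0.99, specificity=0.99):
    tp = sensitivity * sick        # expected true positives
    fn = sick - tp                 # expected false negatives
    tn = specificity * healthy     # expected true negatives
    fp = healthy - tn              # expected false positives
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

print(predictive_values(1000, 1000))  # ~ (0.99, 0.99)
print(predictive_values(100, 1900))   # ~ (0.839, 0.99947)
</syntaxhighlight>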
 
==Measuring a classifier with sensitivity and specificity==
 
Suppose you are training your own classifier, and you wish to measure its performance using the well-accepted metrics of sensitivity and specificity.  It may be instructive to compare your classifier to a random classifier that flips a coin based on the prevalence of a disease.  Suppose that the probability a person has the disease is <math>p</math> and the probability that they do not is <math>q=1-p</math>.  Suppose then that we have a random classifier that guesses that you have the disease with that same probability <math>p</math> and guesses you do not with the same probability <math>q</math>.
 
The probability of a true positive is the probability that you have the disease and the random classifier guesses that you do, or <math>p^2</math>. By similar reasoning, the probability of a false negative is <math>pq</math>. From the definitions above, the sensitivity of this classifier is <math>p^2/(p^2+pq)=p</math>. Again by similar reasoning, we can calculate the specificity as <math>q^2/(q^2+pq)=q</math>.
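
A short Monte Carlo simulation (with an assumed prevalence of <math>p=0.3</math>) confirms this derivation empirically:

<syntaxhighlight lang="python">
# Empirical check: a random classifier that guesses "sick" with
# probability p has sensitivity ~ p and specificity ~ q = 1 - p.
import random

random.seed(0)
p = 0.3                      # assumed disease prevalence
trials = 100_000

tp = fn = tn = fp = 0
for _ in range(trials):
    sick = random.random() < p     # actual condition
    guess = random.random() < p    # classifier's independent guess
    if sick and guess:
        tp += 1
    elif sick:
        fn += 1
    elif guess:
        fp += 1
    else:
        tn += 1

print(tp / (tp + fn))   # sensitivity, close to p
print(tn / (tn + fp))   # specificity, close to 1 - p
</syntaxhighlight>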
 
So, while the measure itself is independent of disease prevalence, the performance of this random classifier depends on disease prevalence.  Your classifier may have performance that is like this random classifier, but with a better-weighted coin (higher sensitivity and specificity).  So, these measures may be influenced by disease prevalence. An alternative measure of performance is the [[Matthews correlation coefficient]], for which any random classifier will get an average score of 0.
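
For reference, the Matthews correlation coefficient is defined from the four confusion-matrix counts as

:<math>\text{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}.</math>

Substituting the expected counts of the random classifier above for <math>n</math> subjects (<math>TP = p^2 n</math>, <math>TN = q^2 n</math>, <math>FP = FN = pqn</math>) makes the numerator <math>p^2 q^2 n^2 - p^2 q^2 n^2 = 0</math>, consistent with the average score of 0.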
 
==Converting continuous values to binary==
{{anchor|artificial}} <!--Artificially binary value redirects here-->
Tests whose results are continuous values, such as most [[blood values]], can artificially be made binary by defining a [[cutoff (reference value)|cutoff value]], with test results being designated as [[positive or negative test|positive or negative]] depending on whether the resultant value is higher or lower than the cutoff.
 
However, such conversion causes a loss of information, as the resultant binary classification does not tell ''how much'' above or below the cutoff a value is. As a result, when converting a continuous value that is close to the cutoff to a binary one, the resultant [[Positive predictive value|positive]] or [[negative predictive value]] is generally higher than the [[predictive value]] given directly from the continuous value. In such cases, the designation of the test as either positive or negative gives the appearance of an inappropriately high certainty, while the value is in fact in an interval of uncertainty. For example, with the urine concentration of [[Human chorionic gonadotropin|hCG]] as a continuous value, a urine [[pregnancy test]] that measured 52 mIU/ml of hCG may show as "positive" with 50 mIU/ml as cutoff, but it is in fact in an interval of uncertainty, which may be apparent only from the original continuous value. On the other hand, a test result very far from the cutoff generally has a resultant positive or negative predictive value that is lower than the predictive value given from the continuous value. For example, a urine hCG value of 200,000 mIU/ml confers a very high probability of pregnancy, but conversion to a binary value means that it shows just as "positive" as the value of 52 mIU/ml.
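
As a simple illustration of such a conversion, using the cutoff and values from the hCG example above:

<syntaxhighlight lang="python">
# Binarizing a continuous test value with a cutoff (hCG example,
# values in mIU/ml, cutoff 50 as in the text above).
CUTOFF = 50.0

def binarize(value_miu_per_ml: float) -> str:
    return "positive" if value_miu_per_ml >= CUTOFF else "negative"

# Both results read simply "positive", although the first lies in an
# interval of uncertainty and the second is far above the cutoff.
print(binarize(52.0))        # positive
print(binarize(200_000.0))   # positive
print(binarize(48.0))        # negative
</syntaxhighlight>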
 
==See also==
* [[Multiclass classification]]
* [[Multi-label classification]]
* [[kernel methods]]
* [[Thresholding (image processing)]]
* [[prosecutor's fallacy]]
* [[Bayesian inference#Simple examples of Bayesian inference|Examples of Bayesian inference]]
* [[Receiver operating characteristic]]
* [[Matthews correlation coefficient]]
* [[Classification rule]]
 
{{No footnotes|date=March 2011}}
{{Refimprove|date=March 2011}}
 
== Bibliography ==
* [[Nello Cristianini]] and [[John Shawe-Taylor]]. ''An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods''. Cambridge University Press, 2000. ISBN 0-521-78019-5. ([http://www.support-vector.net book website])
* John Shawe-Taylor and Nello Cristianini. ''Kernel Methods for Pattern Analysis''. Cambridge University Press, 2004. ISBN 0-521-81397-2. ([http://www.kernel-methods.net book website])
* Bernhard Schölkopf and A. J. Smola. ''Learning with Kernels''. MIT Press, Cambridge, MA, 2002. ISBN 0-262-19475-9. (Partly available online: [http://www.learning-with-kernels.org].)
 
[[Category:Statistical classification]]
[[Category:Machine learning]]
 
[[de:Beurteilung eines Klassifikators]]
[[fa:رده‌بندی بیزی]]
[[he:מדדים למבחנים איבחונים]]
[[ja:二項分類]]
[[vi:Phân loại nhị phân]]
