Gain (information retrieval)


The gain, also called improvement over random, can be specified for a classifier and is a measure used to describe its performance.


In the following, a random classifier is defined as one that predicts either class with equal probability.

The gain is defined as follows.

Gain in Precision

The random precision of a classifier is defined as

precision_random = positives / N = (TP + FN) / N

where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives respectively, positives is the number of positive instances in the target dataset and N is the size of the dataset.

The random precision defines the lowest baseline of a classifier.

And the gain is defined as

Gain = precision / precision_random

which gives the factor by which a classifier outperforms its random counterpart. A gain of 1 indicates a classifier that is no better than random; the larger the gain, the better.
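As a minimal sketch of the precision-gain computation, using the definitions above and hypothetical confusion-matrix counts:

```python
def precision(tp, fp):
    """Precision: fraction of predicted positives that are correct."""
    return tp / (tp + fp)

def random_precision(positives, n):
    """Baseline precision of a random classifier: the fraction of
    positive instances in the dataset."""
    return positives / n

def precision_gain(tp, fp, fn, tn):
    """Gain in precision: precision divided by random precision."""
    n = tp + fp + fn + tn          # size of the dataset
    positives = tp + fn            # number of positive instances
    return precision(tp, fp) / random_precision(positives, n)

# Hypothetical counts: TP=40, FP=10, FN=10, TN=40 (N=100)
print(precision_gain(40, 10, 10, 40))  # precision 0.8 / random 0.5 -> 1.6
```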

Gain in Overall Accuracy

The accuracy of a classifier in general is defined as

accuracy = (TP + TN) / (TP + TN + FP + FN) = (TP + TN) / N

Here, the random accuracy of a classifier can be defined as

accuracy_random = (1/2) f(Positives) + (1/2) f(Negatives) = 1/2

where f(Positives) and f(Negatives) are the fractions of positive and negative instances in the dataset.

And again the gain is

Gain = accuracy / accuracy_random

This time the gain is measured not only with respect to the prediction of the positive class, but with respect to the classifier's overall ability to distinguish the two equally important classes.
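The accuracy-based gain can be sketched the same way; the counts below are hypothetical:

```python
def accuracy(tp, tn, fp, fn):
    """Overall accuracy: fraction of all instances classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def random_accuracy(f_pos, f_neg):
    """Accuracy of a classifier that predicts either class with
    probability 1/2; equals 1/2 whenever f_pos + f_neg == 1."""
    return 0.5 * f_pos + 0.5 * f_neg

def accuracy_gain(tp, tn, fp, fn):
    """Gain in overall accuracy: accuracy divided by random accuracy."""
    n = tp + tn + fp + fn
    f_pos = (tp + fn) / n   # fraction of positive instances
    f_neg = (tn + fp) / n   # fraction of negative instances
    return accuracy(tp, tn, fp, fn) / random_accuracy(f_pos, f_neg)

# Hypothetical counts: TP=40, TN=30, FP=20, FN=10 (N=100)
print(accuracy_gain(40, 30, 20, 10))  # accuracy 0.7 / random 0.5 -> 1.4
```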


In bioinformatics, for example, the gain is measured for methods that predict residue contacts in proteins.
