A '''statistical hypothesis test''' is a method of [[statistical inference]] using data from a [[scientific method|scientific study]]. In [[statistics]], a result is called [[statistically significant]] if it has been predicted as unlikely to have occurred by [[Luck|chance]] alone, according to a pre-determined threshold probability, the significance level. The phrase "test of significance" was coined by statistician [[Ronald Fisher]].<ref name="Fisher1925">R. A. Fisher (1925).''Statistical Methods for Research Workers'', Edinburgh: Oliver and Boyd, 1925, p.43.</ref> These tests are used in determining what outcomes of a study would lead to a rejection of the [[null hypothesis]] for a pre-specified level of significance; this can help to decide whether results contain enough information to cast doubt on conventional wisdom, given that conventional wisdom has been used to establish the null hypothesis. The ''critical region'' of a hypothesis test is the set of all outcomes which cause the null hypothesis to be rejected in favor of the [[alternative hypothesis]]. Statistical hypothesis testing is sometimes called '''confirmatory data analysis''', in contrast to [[exploratory data analysis]], which may not have pre-specified hypotheses.
 
== Variations and Sub-Classes ==
 
Statistical hypothesis testing is a key technique of both [[Frequentist inference]] and [[Bayesian inference]] although they have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly ''deciding'' that a default position ([[null hypothesis]]) is incorrect based on how likely it would be for a set of observations to occur if the null hypothesis were true. Note that this probability of making an incorrect decision is not the probability that the null hypothesis is true, nor whether any specific alternative hypothesis is true. This contrasts with other possible techniques of [[decision theory]] in which the null and [[alternative hypothesis]] are treated on a more equal basis. One naive [[Bayesian statistics|Bayesian]] approach to hypothesis testing is to base decisions on the [[posterior probability]],<ref>Schervish, M (1996) ''Theory of Statistics'', p. 218. Springer ISBN 0-387-94546-6</ref><ref>{{cite book|title=Reference Manual on Scientific Evidence|publisher=West National Academies Press|chapter=Reference Guide on Statistics|first1=David H.|last1=Kaye|first2=David A.|last2=Freedman|url=http://www.nap.edu/openbook.php?record_id=13163&page=211|location=Eagan, MN Washington, D.C|year=2011|edition=3rd|page=259|isbn=978-0-309-21421-6}}</ref> but this fails when comparing point and continuous hypotheses. Other approaches to decision making, such as [[Bayesian decision theory]], attempt to balance the consequences of incorrect decisions across all possibilities, rather than concentrating on a single null hypothesis. A number of other approaches to reaching a decision based on data are available via [[decision theory]] and [[optimal decision]]s, some of which have desirable properties, yet hypothesis testing is a dominant approach to data analysis in many fields of science. Extensions to the theory of hypothesis testing include the study of the [[statistical power|power]] of tests, which refers to the probability of correctly rejecting the null hypothesis when a given state of nature exists. Such considerations can be used for the purpose of [[sample size determination]] prior to the collection of data.
 
==The testing process==
In the statistical literature, statistical hypothesis testing plays a fundamental role.<ref name=LR/> The usual line of reasoning is as follows:
# There is an initial research hypothesis of which the truth is unknown.
# The first step is to state the relevant '''null and alternative hypotheses'''. This is important, as mis-stating the hypotheses will [[Garbage In, Garbage Out|muddy the rest of the process]]. Specifically, the null hypothesis should be chosen in such a way that the test allows us to conclude either that the alternative hypothesis can be accepted or that it remains as undecided as it was before the test.<ref name="Adèr, 2008">[[Adèr, J.H.]] (2008). Chapter 12: Modelling. In [[H.J. Adèr]] & [[Gideon J. Mellenbergh|G.J. Mellenbergh]] (Eds.) (with contributions by D.J. Hand), Advising on Research Methods: A consultant's companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing</ref>
# The second step is to consider the [[statistical assumption]]s being made about the sample in doing the test; for example, assumptions about the [[statistical independence]] or about the form of the distributions of the observations. This is equally important as invalid assumptions will mean that the results of the test are invalid.
# Decide which test is appropriate, and state the relevant '''[[test statistic]]''' <var>T</var>.
# Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. For example, the test statistic might follow a [[Student's t distribution]] or a [[normal distribution]].
# Select a significance level (''α''), a probability threshold below which the null hypothesis will be rejected. Common values are 5% and 1%.
# The distribution of the test statistic under the null hypothesis partitions the possible values of <var>T</var> into those for which the null hypothesis is rejected, the so-called critical region, and those for which it is not. The probability of the critical region is ''α''.
# Compute from the observations the observed value <var>t</var><sub>obs</sub> of the test statistic <var>T</var>.
# Decide to either reject the null hypothesis in favor of the alternative or not reject it. The decision rule is to reject the null hypothesis <var>H</var><sub>0</sub> if the observed value <var>t</var><sub>obs</sub> is in the critical region, and to accept or "fail to reject" the hypothesis otherwise.
 
An alternative process, replacing the final steps above, is commonly used:
# Compute from the observations the observed value <var>t</var><sub>obs</sub> of the test statistic <var>T</var>.
# From the statistic, calculate the probability under the null hypothesis of an observation at least as extreme (the [[p-value]]).
# Reject the null hypothesis in favor of the alternative, or do not reject it. The decision rule is to reject the null hypothesis if and only if the p-value is less than the significance level (the selected probability threshold).
 
The two processes are equivalent.<ref>{{cite book|last=Triola|first=Mario|title=Elementary statistics|publisher=Addison-Wesley|location=Boston|year=2001|isbn=0-201-61477-4|edition=8|page=388}}</ref> The former process was advantageous in the past when only tables of test statistics at common probability thresholds were available. It allowed a decision to be made without the calculation of a probability. It was adequate for classwork and for operational use, but it was deficient for reporting results.
 
The latter process relied on extensive tables or on computational support not always available. The explicit calculation of a
probability is useful for reporting. The calculations are now trivially performed with appropriate software.
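
For instance, a minimal sketch in Python (assuming the [[SciPy]] library and a hypothetical sample) shows the two processes side by side for a one-sample t-test: comparing the observed statistic with the critical value and comparing the p-value with ''α'' lead to the same decision.
<syntaxhighlight lang="python">
# Minimal sketch of the two equivalent decision processes for a
# one-sample t-test (hypothetical data; assumes SciPy is installed).
from scipy import stats

sample = [5.1, 4.9, 5.4, 5.0, 5.3, 5.2, 4.8, 5.5]  # hypothetical observations
mu_0 = 5.0          # null-hypothesis mean
alpha = 0.05        # significance level

t_obs, p_value = stats.ttest_1samp(sample, popmean=mu_0)

# Process 1: compare the observed statistic to the critical region.
df = len(sample) - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)      # two-sided critical value
reject_by_statistic = abs(t_obs) > t_crit

# Process 2: compare the p-value to the significance level.
reject_by_p_value = p_value < alpha

print(t_obs, p_value, reject_by_statistic, reject_by_p_value)
assert reject_by_statistic == reject_by_p_value  # the two processes agree
</syntaxhighlight>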
 
The difference in the two processes applied to the Radioactive suitcase example:
* "The Geiger-counter reading is 10. The limit is 9. Check the suitcase."
* "The Geiger-counter reading is high; 97% of safe suitcases have lower readings. The limit is 95%. Check the suitcase."
The former report is adequate; the latter gives a more detailed explanation of the data and of the reason why the suitcase is being checked.
 
It is important to note the philosophical difference between accepting the null hypothesis and simply failing to reject it. The "fail to reject" terminology highlights the fact that the null hypothesis is assumed to be true from the start of the test; if there is a lack of evidence against it, it simply continues to be assumed true. The phrase "accept the null hypothesis" may suggest it has been proved simply because it has not been disproved, a logical [[fallacy]] known as the [[argument from ignorance]]. Unless a test with particularly high [[Statistical power|power]] is used, the idea of "accepting" the null hypothesis may be dangerous. Nonetheless the terminology is prevalent throughout statistics, where its meaning is well understood.
 
Alternatively, if the testing procedure forces us to reject the null hypothesis (H<sub>0</sub>), we can accept the alternative hypothesis (H<sub>1</sub>) and conclude that the research hypothesis is supported by the data. This reflects the fact that our procedure is based on probabilistic considerations, in the sense that we accept that another set of data could lead us to a different conclusion.
 
The processes described here are perfectly adequate for computation. They seriously neglect the [[design of experiments]] considerations.<ref>{{cite book|author=Hinkelmann, Klaus and [[Oscar Kempthorne|Kempthorne, Oscar]]|year=2008|title=Design and Analysis of Experiments|volume=I and II|edition=Second|publisher=Wiley|isbn=978-0-470-38551-7}}</ref><ref>{{cite book|last=Montgomery|first=Douglas|title=Design and analysis of experiments|publisher=Wiley|location=Hoboken, NJ|year=2009|isbn=978-0-470-12866-4}}</ref>
 
It is particularly critical that appropriate sample sizes be estimated before conducting the experiment.
 
===Interpretation===
If the p-value is less than the required significance level (equivalently, if the observed test statistic is in the
critical region), then we say the null hypothesis is rejected at the given level of significance. Rejection of the null hypothesis is a conclusion. This is like a "guilty" verdict in a criminal trial – the evidence is sufficient to reject innocence, thus proving guilt. We might accept the alternative hypothesis (and the research hypothesis).
 
If the p-value is '''not''' less than the required significance level (equivalently, if the observed test statistic is outside the critical region), then the test has no result. The evidence is insufficient to support a conclusion. (This is like a jury that fails to reach a verdict.) The researcher typically gives extra consideration to those cases where the p-value is close to the significance level.
 
In the Lady tasting tea example (below), Fisher required the Lady to properly categorize all of the cups of tea to justify the conclusion that the result was unlikely to result from chance. He defined the critical region as that case alone. The region was defined by a probability (that the null hypothesis was correct) of less than 5%.
 
Whether rejection of the null hypothesis truly justifies acceptance of the research hypothesis depends on the structure of the hypotheses. Rejecting the hypothesis that a large paw print originated from a bear does not immediately prove the existence of [[Bigfoot]]. Hypothesis testing emphasizes the rejection which is based on a probability rather than the acceptance which requires extra steps of logic.
 
"The probability of rejecting the null hypothesis is a function of
five factors: whether the test is one- or two tailed, the level of
significance, the standard deviation, the amount of deviation from the
null hypothesis, and the number of observations."<ref name=bakan66>
{{cite journal
| last = Bakan
| first = David
| title = The test of significance in psychological research
| journal = Psychological Bulletin
| volume = 66 | issue = 6 | pages = 423–437
| year = 1966
}}</ref> These factors are a source of criticism.
 
===Use and importance===
Statistics are helpful in analyzing most collections of data. This is equally true of hypothesis testing which can justify conclusions even when no scientific theory exists. In the Lady tasting tea example, it was "obvious" that no difference existed between (milk poured into tea) and (tea poured into milk). The data contradicted the "obvious".
 
Real world applications of hypothesis testing include:<ref name=larsen>{{cite book|author=Richard J. Larsen, Donna Fox Stroup|title=Statistics in the Real World: a book of examples|publisher=Macmillan|isbn=978-0023677205|year=1976}}</ref>
* Testing whether more men than women suffer from nightmares
* Establishing authorship of documents
* Evaluating the effect of the full moon on behavior
* Determining the range at which a bat can detect an insect by echo
* Deciding whether hospital carpeting results in more infections
* Selecting the best means to stop smoking
* Checking whether bumper stickers reflect car owner behavior
* Testing the claims of handwriting analysts
 
Statistical hypothesis testing plays an important role in the whole of statistics and in [[statistical inference]]. For example, Lehmann (1992) in a review of the fundamental paper by Neyman and Pearson (1933) says: "Nevertheless, despite their shortcomings, the new paradigm formulated in the 1933 paper, and the many developments carried out within its framework continue to play a central role in both the theory and practice of statistics and can be expected to do so in the foreseeable future".
 
Significance testing has been the favored statistical tool
in some experimental social sciences (over 90% of articles in the
Journal of Applied Psychology during the early 1990s).<ref name=hubbard>{{cite journal|author=Hubbard, R.; Parsa, A. R.; Luthy, M. R.|title=The Spread of Statistical Significance Testing in Psychology: The Case of the Journal of Applied Psychology|journal=Theory and Psychology|volume=7|pages=545–554|year=1997|doi=10.1177/0959354397074006|issue=4}}</ref> Other fields have favored the estimation of parameters (e.g., [[effect size]]). Significance testing is used as a substitute for the traditional comparison of predicted value and experimental result at the core of the [[scientific method]]. When theory is only capable of predicting the sign of a relationship, a directional (one-sided) hypothesis test can be configured so that only a statistically significant result supports theory. This form of theory appraisal is the most heavily criticized application of hypothesis testing.
 
===Cautions===
"If the government required statistical procedures to carry warning labels like those on drugs, most inference methods would have long labels indeed."<ref name="moore">{{cite book|last=Moore|first=David|title=Introduction to the Practice of Statistics|publisher=W.H. Freeman and Co|location=New York|year=2003|page=426|isbn=9780716796572}}</ref> This caution applies to hypothesis tests and alternatives to them.
 
The successful hypothesis test is associated with a probability and a type-I error rate. The conclusion ''might'' be wrong.
 
The conclusion of the test is only as solid as the sample upon which it is based. The design of the experiment is critical. A number of unexpected effects have been observed including:
* The [[Clever Hans effect]]. A horse appeared to be capable of doing simple arithmetic.
* The [[Hawthorne effect]]. Industrial workers were more productive in better illumination, and most productive in worse.
* The [[Placebo effect]]. Pills with no medically active ingredients were remarkably effective.
A statistical analysis of misleading data produces misleading conclusions. The issue of data quality can be more subtle. In [[forecasting]] for example, there is no agreement on a measure of forecast accuracy. In the absence of a consensus measurement, no decision based on measurements will be without controversy.
 
The book ''[[How to Lie with Statistics]]''<ref>{{cite book|last=Huff|first=Darrell|title=How to lie with statistics|publisher=Norton|location=New York|year=1993|isbn=0-393-31072-8}}</ref><ref>{{cite book|last=Huff|first=Darrell|title=How to Lie with Statistics|publisher=Penguin Books|location=London|year=1991|isbn=0-14-013629-0}}</ref> is the most popular book on statistics ever published.<ref name="fiftyyears">"Over the last fifty years, How to Lie with Statistics has sold more copies than any other statistical text." J. M. Steele. "[http://www-stat.wharton.upenn.edu/~steele/Publications/PDF/TN148.pdf Darrell Huff and Fifty Years of ''How to Lie with Statistics'']. ''Statistical Science'', 20 (3), 2005, 205–209.</ref> It does not much consider hypothesis
testing, but its cautions are applicable, including: Many claims are made on the basis of samples too small to convince. If a report does not mention sample size, be doubtful.
 
Hypothesis testing acts as a filter of statistical conclusions; only those results meeting a probability threshold are publishable. Economics also acts as a publication filter; only those results favorable to the author and funding source may be submitted for publication. The impact of filtering on publication is termed [[publication bias]]. A related problem is that of [[multiple testing]] (sometimes linked to [[data mining]]), in which a variety of tests for a variety of possible effects are applied to a single data set and only those yielding a significant result are reported. These are often dealt with by using multiplicity correction procedures that control the [[family wise error rate]] (FWER) or the [[false discovery rate]] (FDR).
 
Those making critical decisions based on the results of a hypothesis test are prudent to look at the details rather than the conclusion alone. In the physical sciences most results are fully accepted only when independently confirmed. The general advice concerning statistics is, "Figures never lie, but liars figure" (anonymous).
 
==Example==
 
===[[Lady tasting tea]]===
 
In a famous example of hypothesis testing, known as the ''[[Lady tasting tea]]'' example,<ref name=fisher>{{cite book|first=Sir Ronald A.|last=Fisher|authorlink=Ronald Fisher|chapter=Mathematics of a Lady Tasting Tea|origyear=1935|year=1956|title=The World of Mathematics, volume 3|editor=James Roy Newman|url=http://books.google.com/?id=oKZwtLQTmNAC&pg=PA1512&dq=%22mathematics+of+a+lady+tasting+tea%22|trans_title=Design of Experiments|publisher=Courier Dover Publications|isbn=978-0-486-41151-4}} Originally from Fisher's book ''Design of Experiments''.</ref> a female colleague of Fisher claimed to be able to tell whether the tea or the milk was added first to a cup. Fisher proposed to give her eight cups, four of each variety, in random order. One could then ask what the probability was for her getting the number she got correct, but just by chance. The null hypothesis was that the Lady had no such ability. The test statistic was a simple count of the number of successes in selecting the 4 cups. The critical region was the single case of 4 successes of 4 possible based on a conventional probability criterion (<&nbsp;5%; 1 of 70 ≈&nbsp;1.4%). Fisher asserted that no alternative hypothesis was (ever) required. The lady correctly identified every cup,<ref>{{cite book|last=Box|first=Joan Fisher|title=R.A. Fisher, The Life of a Scientist|year=1978|location=New York|publisher=Wiley|page=134|isbn=0-471-09300-9}}</ref> which would be considered a statistically significant result.
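
The probability in question is a simple combinatorial (hypergeometric) calculation; a minimal sketch in Python (assuming the SciPy library) reproduces the 1-in-70 figure:
<syntaxhighlight lang="python">
# Probability of correctly selecting all 4 "milk-first" cups out of 8 by chance.
from math import comb
from scipy.stats import hypergeom

p_all_correct = 1 / comb(8, 4)      # 1 of C(8,4) = 70 equally likely selections
print(p_all_correct)                # ~0.0143, i.e. about 1.4% (< 5%)

# The same value from the hypergeometric distribution:
# 8 cups in total, 4 of one kind, 4 chosen, all 4 correct.
print(hypergeom.pmf(4, 8, 4, 4))    # ~0.0143
</syntaxhighlight>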
 
===Analogy – Courtroom trial===
A statistical test procedure is comparable to a criminal [[trial (law)|trial]]; a defendant is considered not guilty as long as his or her guilt is not proven. The prosecutor tries to prove the guilt of the defendant. Only when there is enough incriminating evidence is the defendant convicted.
 
In the start of the procedure, there are two hypotheses <math>H_0</math>: "the defendant is not guilty", and <math>H_1</math>: "the defendant is guilty". The first one is called ''[[null hypothesis]]'', and is for the time being accepted. The second one is called ''alternative (hypothesis)''. It is the hypothesis one hopes to support.
 
The hypothesis of innocence is only rejected when an error is very unlikely, because one doesn't want to convict an innocent defendant. Such an error is called ''[[error of the first kind]]'' (i.e., the conviction of an innocent person), and the occurrence of this error is controlled to be rare. As a consequence of this asymmetric behaviour, the ''[[error of the second kind]]'' (acquitting a person who committed the crime), is often rather large.
 
{|class="wikitable"
|
! H<sub>0</sub> is true <br> Truly not guilty
! H<sub>1</sub> is true <br> Truly guilty
|- align="center"
| Accept Null Hypothesis <br> Acquittal
| Right decision
| Wrong decision <br> Type II Error
|- align="center"
| Reject Null Hypothesis <br> Conviction
| Wrong decision <br> Type I Error
| Right decision
|}
 
A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty or evidence vs a threshold ("beyond a reasonable doubt"). In one view, the defendant is judged; in the other view the performance of the prosecution (which bears the burden of proof) is judged. A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence.
 
===Example 1 – Philosopher's beans===
The following example was produced by a philosopher describing scientific methods generations before hypothesis testing was
formalized and popularized.<ref>{{cite journal|author=C. S. Peirce|date=August 1878|title=Illustrations of the Logic of Science VI: Deduction, Induction, and Hypothesis|journal=Popular Science Monthly|volume=13|accessdate=30 March 2012|url=http://en.wikisource.org/w/index.php?oldid=3592335}}</ref>
 
<blockquote>
Few beans of this handful are white.<br />
Most beans in this bag are white.<br />
Therefore: Probably, these beans were taken from another bag.<br />
This is an hypothetical inference.
</blockquote>
 
The beans in the bag are the population. The handful are the sample. The null hypothesis is that the sample originated from the population. The criterion for rejecting the null-hypothesis is the "obvious" difference in appearance (an informal difference in the mean). The interesting result is that consideration of a real population and a real sample produced an imaginary bag. The philosopher was considering logic rather than probability. To be a real statistical hypothesis test, this example requires the formalities of a probability calculation and a comparison of that probability to a standard.
 
A simple generalization of the example considers a mixed bag of beans and a handful that contains either very few or very many white beans. The generalization considers both extremes. It requires more calculations and more comparisons to arrive at a formal answer, but the core philosophy is unchanged: if the composition of the handful is greatly different from that of the bag, then the sample probably originated from another bag. The original example is termed a one-sided or one-tailed test, while the generalization is termed a two-sided or two-tailed test.
 
===Example 2 – Clairvoyant card game===
A person (the subject) is tested for [[clairvoyance]]. He is shown the reverse of a randomly chosen playing card 25 times and asked which of the four [[Suit (cards)|suits]] it belongs to. The number of hits, or correct answers, is called ''X''.
 
As we try to find evidence of his clairvoyance, for the time being the null hypothesis is that the person is not clairvoyant. The alternative is, of course: the person is (more or less) clairvoyant.
 
If the null hypothesis is valid, the only thing the test person can do is guess. For every card, the probability (relative frequency) of any single suit appearing is 1/4. If the alternative is valid, the test subject will predict the suit correctly with probability greater than 1/4. We will call the probability of guessing correctly ''p''. The hypotheses, then, are:
* null hypothesis <math>\text{:} \qquad H_0: p = \tfrac 14</math> &nbsp;&nbsp;&nbsp;&nbsp;(just guessing)
and
* alternative hypothesis <math>\text{:} H_1: p>\tfrac 14</math> &nbsp;&nbsp;&nbsp;(true clairvoyant).
 
When the test subject correctly predicts all 25 cards, we will consider him clairvoyant, and reject the null hypothesis. Thus also with 24 or 23 hits. With only 5 or 6 hits, on the other hand, there is no cause to consider him so. But what about 12 hits, or 17 hits? What is the critical number, ''c'', of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value ''c''? It is obvious that with the choice ''c''=25 (i.e. we only accept clairvoyance when all cards are predicted correctly) we're more critical than with ''c''=10. In the first case almost no test subjects will be recognized to be clairvoyant, in the second case, a certain number will pass the test. In practice, one decides how critical one will be. That is, one decides how often one accepts an error of the first kind – a [[false positive]], or Type I error. With ''c'' = 25 the probability of such an error is:
 
:<math>P(\text{reject }H_0 | H_0 \text{ is valid}) = P(X = 25|p=\tfrac 14)=\left(\tfrac 14\right)^{25}\approx10^{-15},</math>
 
and hence, very small. The probability of a false positive is the probability of randomly guessing correctly all 25 times.
 
Being less critical, with ''c''=10, gives:
 
:<math>P(\text{reject }H_0 | H_0 \text{ is valid}) = P(X \ge 10|p=\tfrac 14) =\sum_{k=10}^{25}P(X=k|p=\tfrac 14)\approx 0{.}07.</math>
 
Thus, ''c'' = 10 yields a much greater probability of a false positive.
 
Before the test is actually performed, the maximum acceptable probability of a Type I error (''α'') is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.) Depending on this Type 1 error rate, the critical value ''c'' is calculated. For example, if we select an error rate of 1%, ''c'' is calculated thus:
 
:<math>P(\text{reject }H_0 | H_0 \text{ is valid}) = P(X \ge c|p=\tfrac 14) \le 0{.}01.</math>
 
From all the numbers ''c'' with this property, we choose the smallest in order to minimize the probability of a Type II error, a [[false negative]]. For the above example, we select <math>c=13</math>.
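
These calculations can be reproduced directly from the binomial distribution; the following is a minimal sketch in Python (assuming the SciPy library):
<syntaxhighlight lang="python">
# Clairvoyant card game: false-positive probabilities and the critical value c.
from scipy.stats import binom

n, p = 25, 0.25                      # 25 cards, probability 1/4 of guessing a suit

print(binom.pmf(25, n, p))           # P(X = 25) ~ 1e-15, the case c = 25
print(binom.sf(9, n, p))             # P(X >= 10) ~ 0.07, the case c = 10

# Smallest c with P(X >= c) <= 0.01, which minimizes the Type II error rate.
c = next(k for k in range(n + 1) if binom.sf(k - 1, n, p) <= 0.01)
print(c)                             # 13
</syntaxhighlight>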
<!--
But what if the subject did not guess any cards at all? Having zero correct answers is clearly an oddity too. Without any clairvoyant skills the probability.
 
:<math>P(X=0| H_0 \text{ is valid}) = P(X = 0|p=\tfrac 14) =(1-\tfrac 14)^{25} \approx 0{.}00075.</math>
 
This is highly unlikely (less than 1 in a 1000 chance). While the subject can't guess the cards correctly, dismissing H<sub>0</sub> in favour of H<sub>1</sub> would be an error. In fact, the result would suggest a trait on the subject's part of avoiding calling the correct card. A test of this could be formulated: for a selected 1% error rate the subject would have to answer correctly at least twice, for us to believe that card calling is based purely on guessing. -->
 
===Example 3 – Radioactive suitcase===
As an example, consider determining whether a suitcase contains some radioactive material. Placed under a [[Geiger counter]], it produces 10 counts per minute. The null hypothesis is that no radioactive material is in the suitcase and that all measured counts are due to ambient radioactivity typical of the surrounding air and harmless objects. We can then calculate how likely it is that we would observe 10 counts per minute if the null hypothesis were true. If the null hypothesis predicts (say) on average 9 counts per minute, then according to the [[Poisson distribution]] typical for [[radioactive decay]] there is about a 41% chance of recording 10 or more counts. Thus we can say that the suitcase is compatible with the null hypothesis (this does not guarantee that there is no radioactive material, just that we don't have enough evidence to suggest there is). On the other hand, if the null hypothesis predicts 3 counts per minute (for which the Poisson distribution predicts only a 0.1% chance of recording 10 or more counts) then the suitcase is not compatible with the null hypothesis, and other factors are likely responsible for producing the measurements.
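
A minimal sketch in Python (assuming the SciPy library) reproduces the two tail probabilities quoted above:
<syntaxhighlight lang="python">
# Radioactive suitcase: chance of 10 or more counts per minute under two
# candidate null hypotheses for the background (ambient) rate.
from scipy.stats import poisson

observed = 10
print(poisson.sf(observed - 1, mu=9))   # ~0.41: 10 counts is compatible with a mean of 9
print(poisson.sf(observed - 1, mu=3))   # ~0.001: 10 counts is not compatible with a mean of 3
</syntaxhighlight>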
 
The test does not directly assert the presence of radioactive material. A ''successful'' test asserts that the claim of no radioactive material present is unlikely given the reading (and therefore ...). The double negative (disproving the null hypothesis) of the method is confusing, but using a counter-example to disprove is standard mathematical practice. The attraction of the method is its practicality. We know (from experience) the expected range of counts with only ambient radioactivity present, so we can say that a measurement is ''unusually'' large. Statistics just formalizes the intuitive by using numbers instead of adjectives. We probably do not know the characteristics of the radioactive suitcases; we just assume that they produce larger readings.
 
To slightly formalize intuition: Radioactivity is suspected if the Geiger-count with the suitcase is among or exceeds the greatest (5% or 1%) of the Geiger-counts made with ambient radiation alone. This makes no assumptions about the distribution of counts. Many ambient radiation observations are required to obtain good probability estimates for rare events.
 
The test described here is more fully the null-hypothesis statistical significance test. The null hypothesis represents what we would believe by default, before seeing any evidence. [[Statistical significance]] is a possible finding of the test, declared when the observed [[Sample (statistics)|sample]] is unlikely to have occurred by chance if the null hypothesis were true. The name of the test describes its formulation and its possible outcome. One characteristic of the test is its crisp decision: to reject or not reject the null hypothesis. A calculated value is compared to a threshold, which is determined from the tolerable risk of error.
 
==Definition of terms==
The following definitions are mainly based on the exposition in the book by Lehmann and Romano:<ref name=LR>{{cite book|title=Testing Statistical Hypotheses|edition=3E|isbn=0-387-98864-5|last1=Lehmann|first1=E.L.|first2=Joseph P.|last2=Romano|year=2005|publisher=Springer|location=New York}}</ref>
 
; Statistical hypothesis : A statement about the parameters describing a population (not a sample).
; Statistic : A value calculated from a sample, often to summarize the sample for comparison purposes.
; Simple hypothesis : Any hypothesis which specifies the population distribution completely.
; Composite hypothesis : Any hypothesis which does ''not'' specify the population distribution completely.
; [[Null hypothesis]] (H<sub>0</sub>) : A simple hypothesis associated with a contradiction to a theory one would like to prove.
; [[Alternative hypothesis]] (H<sub>1</sub>) : A hypothesis (often composite) associated with a theory one would like to prove.
; Statistical test : A procedure whose inputs are samples and whose result is a hypothesis.
; Region of acceptance : The set of values of the test statistic for which we fail to reject the null hypothesis.
; Region of rejection / Critical region: The set of values of the test statistic for which the null hypothesis is rejected.
; [[Critical value#Statistics|Critical value]]: The threshold value delimiting the regions of acceptance and rejection for the test statistic.
; [[statistical power|Power of a test]] (1&nbsp;−&nbsp;''β''): The test's probability of correctly rejecting the null hypothesis. The complement of the [[false negative]] rate, ''β''. Power is termed '''sensitivity''' in [[biostatistics]]. ("This is a sensitive test. Because the result is negative, we can confidently say that the patient does not have the condition.") See [[sensitivity and specificity]] and [[Type I and type II errors]] for exhaustive definitions.
; Size / Significance level of a test (''α''): For simple hypotheses, this is the test's probability of ''incorrectly'' rejecting the null hypothesis. The [[false positive]] rate. For composite hypotheses this is the upper bound of the probability of rejecting the null hypothesis over all cases covered by the null hypothesis. The complement of the false positive rate, (1&nbsp;−&nbsp;''α''), is termed '''specificity''' in [[biostatistics]]. ("This is a specific test. Because the result is positive, we can confidently say that the patient has the condition.") See [[sensitivity and specificity]] and [[Type I and type II errors]] for exhaustive definitions.
; [[p-value]]: The probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic.
; Statistical significance test : A predecessor to the statistical hypothesis test (see the Origins section). An experimental result was said to be statistically significant if a sample was sufficiently inconsistent with the (null) hypothesis. This was variously considered common sense, a pragmatic heuristic for identifying meaningful experimental results, a convention establishing a threshold of statistical evidence or a method for drawing conclusions from data. The statistical hypothesis test added mathematical rigor and philosophical consistency to the concept by making the alternative hypothesis explicit. The term is loosely used to describe the modern version which is now part of statistical hypothesis testing.
; Conservative test : A test is conservative if, when constructed for a given nominal significance level, the true probability of ''incorrectly'' rejecting the null hypothesis is never greater than the nominal level.
; [[Exact test]]: A test in which the significance level or critical value can be computed exactly, i.e., without any approximation. In some contexts this term is restricted to tests applied to [[categorical data]] and to [[permutation tests]], in which computations are carried out by complete enumeration of all possible outcomes and their probabilities.
 
A statistical hypothesis test compares a test statistic (''z'' or ''t'', for example) to a threshold. The test statistic (the formula found in the table below) is based on optimality. For a fixed Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). The following terms describe tests in terms of such optimality:
 
; Most powerful test: For a given ''size'' or ''significance level'', the test with the greatest power (probability of rejection) for a given value of the parameter(s) being tested, contained in the alternative hypothesis.
; [[Uniformly most powerful test]] (UMP): A test with the greatest ''power'' for all values of the parameter(s) being tested, contained in the alternative hypothesis.
 
==Common test statistics==
{{main|Test statistic}}
 
One-sample tests are appropriate when a sample is being compared to the population from a hypothesis. The population characteristics are known from theory or are calculated from the population.
 
Two-sample tests are appropriate for comparing two samples, typically experimental and control samples from a scientifically controlled experiment.
 
Paired tests are appropriate for comparing two samples where it is impossible to control important variables. Rather than comparing two sets, members are paired between samples so the difference between the members becomes the sample. Typically the mean of the differences is then compared to zero.
 
[[Z-test]]s are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation.
 
[[T-test]]s are appropriate for comparing means under relaxed conditions (less is assumed).
 
Tests of proportions are analogous to tests of means (the 50% proportion).
 
Chi-squared tests use the same calculations and the same probability distribution for different applications:
* [[Chi-squared test]]s for variance are used to determine whether a normal population has a specified variance. The null hypothesis is that it does.
* Chi-squared tests of independence are used for deciding whether two variables are associated or are independent. The variables are categorical rather than numeric. Such a test can be used to decide whether [[left-handedness]] is correlated with [[Libertarianism|libertarian]] politics (or not). The null hypothesis is that the variables are independent. The numbers used in the calculation are the observed and expected frequencies of occurrence (from [[contingency table]]s); a minimal code sketch of such a test appears after this list.
* Chi-squared goodness of fit tests are used to determine the adequacy of curves fit to data. The null hypothesis is that the curve fit is adequate. It is common to determine curve shapes to minimize the mean square error, so it is appropriate that the goodness-of-fit calculation sums the squared errors.
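
As a minimal sketch of a chi-squared test of independence (in Python, assuming the SciPy library and made-up counts for the handedness example above):
<syntaxhighlight lang="python">
# Chi-squared test of independence on a hypothetical 2x2 contingency table:
# rows are handedness, columns are political preference (made-up counts).
from scipy.stats import chi2_contingency

observed = [[30, 20],    # left-handed:  libertarian, other
            [70, 80]]    # right-handed: libertarian, other

# correction=False gives the plain chi-squared statistic (no Yates correction).
chi2, p_value, df, expected = chi2_contingency(observed, correction=False)
print(chi2, p_value, df)   # reject independence if p_value < alpha
print(expected)            # expected frequencies under the null hypothesis
</syntaxhighlight>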
 
[[F-test]]s (analysis of variance, ANOVA) are commonly used when deciding whether groupings of data by category are meaningful. If the variance of test scores of the left-handed in a class is much smaller than the variance of the whole class, then it may be useful to study lefties as a group. The null hypothesis is that two variances are the same – so the proposed grouping is not meaningful.
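
A minimal sketch of the two-sample F test for equality of variances (in Python, assuming the SciPy library and hypothetical test scores) follows the decision rule given in the table below:
<syntaxhighlight lang="python">
# Two-sample F test for equality of variances (hypothetical test scores).
from scipy.stats import f
from statistics import variance

lefties = [72, 75, 74, 73, 76, 74]                        # hypothetical subgroup scores
whole_class = [55, 90, 65, 85, 60, 88, 70, 95, 58, 80]    # hypothetical class scores

s1_sq, s2_sq = variance(whole_class), variance(lefties)
F = s1_sq / s2_sq                            # arrange so the larger variance is on top
df1, df2 = len(whole_class) - 1, len(lefties) - 1

alpha = 0.05
F_crit = f.ppf(1 - alpha / 2, df1, df2)      # critical value F(alpha/2, df1, df2)
print(F, F_crit, F > F_crit)                 # reject equal variances if F > F_crit
</syntaxhighlight>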
 
In the table below, the symbols used are defined at the bottom of the table. Many other tests can be found in [[:Category:Statistical tests|other articles]]. Proofs exist that the test statistics are appropriate.<ref name="Loveland">{{Cite thesis |type= M.Sc. (Mathematics) |title= Mathematical Justification of Introductory Hypothesis Tests and Development of Reference Materials |url= http://digitalcommons.usu.edu |last= Loveland |first= Jennifer L. |year= 2011 |publisher= Utah State University |accessdate= April 2013}} Abstract: "The focus was on the Neyman-Pearson approach to hypothesis testing. A brief historical development of the Neyman-Pearson approach is followed by mathematical proofs of each of the hypothesis tests covered in the reference material." The proofs do not reference the concepts introduced by Neyman and Pearson, instead they show that traditional test statistics have the probability distributions ascribed to them, so that significance calculations assuming those distributions are correct. The thesis information is also posted at mathnstats.com as of April 2013.</ref>
 
{|class="wikitable"
! Name
! Formula
! Assumptions or notes
|-
|One-sample [[z-test]]
|align=center|<math>z=\frac{\overline{x}-\mu_0}{\sigma}\sqrt n</math>
|(Normal population '''or''' ''n'' > 30) '''and''' σ known. <br />
(''z'' is the distance from the mean in relation to the standard deviation of the mean). For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within ''k'' standard deviations for any ''k'' (see: ''[[Chebyshev's inequality]]'').
|-
|Two-sample z-test
|align=center|<math>z=\frac{(\overline{x}_1 - \overline{x}_2) - d_0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}</math>
|Normal population '''and''' independent observations '''and''' σ<sub>1</sub> and σ<sub>2</sub> are known
|-
|One-sample [[t-test]]
|align=center|<math>t=\frac{\overline{x}-\mu_0} {( s / \sqrt{n} )} ,</math><br />
<math>df=n-1 \ </math>
|(Normal population '''or''' ''n'' > 30) '''and''' <math>\sigma</math> unknown
|-
|Paired t-test
|align=center|<math>t=\frac{\overline{d}-d_0} { ( s_d / \sqrt{n} ) } ,</math><br />
<math>df=n-1 \ </math>
|(Normal population of differences '''or''' ''n'' > 30) '''and''' <math>\sigma</math> unknown or small sample size ''n'' < 30
|-
|Two-sample pooled [[t-test]], equal variances
|align=center|<math>t=\frac{(\overline{x}_1 - \overline{x}_2) - d_0}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},</math><br />
<math>s_p^2=\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},</math><br />
<math>df=n_1 + n_2 - 2 \ </math><ref name=NIST2mean>NIST handbook: [http://www.itl.nist.gov/div898/handbook/eda/section3/eda353.htm Two-Sample t-Test for Equal Means]</ref>
|(Normal populations '''or''' ''n''<sub>1</sub>&nbsp;+&nbsp;''n''<sub>2</sub>&nbsp;>&nbsp;40) '''and''' independent observations '''and''' σ<sub>1</sub> = σ<sub>2</sub> unknown
|-
|Two-sample unpooled t-test, unequal variances
|align=center|<math>t=\frac{(\overline{x}_1 - \overline{x}_2) - d_0}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}},</math><br />
<math>df = \frac{\left(\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}\right)^2} {\frac{\left(\frac{s_1^2}{n_1}\right)^2}{n_1-1} + \frac{\left(\frac{s_2^2}{n_2}\right)^2}{n_2-1}}</math><ref name=NIST2mean/>
|(Normal populations '''or''' ''n''<sub>1</sub>&nbsp;+&nbsp;''n''<sub>2</sub>&nbsp;>&nbsp;40) '''and''' independent observations '''and''' σ<sub>1</sub> ≠ σ<sub>2</sub> both unknown
|-
|One-proportion z-test
|align=center|<math>z=\frac{\hat{p} - p_0}{\sqrt{p_0 (1-p_0)}}\sqrt n</math>
|''n<sup> .</sup>p<sub>0</sub>'' > 10 '''and''' ''n'' (1&nbsp;−&nbsp;''p<sub>0</sub>'') > 10 '''and''' it is a SRS (Simple Random Sample), see [[Binomial distribution#Normal approximation|notes]].
|-
|Two-proportion z-test, pooled for <math>H_0\colon p_1=p_2</math>
|align=center|<math>z=\frac{(\hat{p}_1 - \hat{p}_2)}{\sqrt{\hat{p}(1 - \hat{p})(\frac{1}{n_1} + \frac{1}{n_2})}}</math>
<math>\hat{p}=\frac{x_1 + x_2}{n_1 + n_2}</math>
|''n''<sub>1</sub> ''p''<sub>1</sub> > 5 '''and''' ''n''<sub>1</sub>(1&nbsp;−&nbsp;''p''<sub>1</sub>) > 5 '''and''' ''n''<sub>2</sub> ''p''<sub>2</sub>&nbsp;>&nbsp;5 '''and''' ''n''<sub>2</sub>(1&nbsp;−&nbsp;''p''<sub>2</sub>) > 5 '''and''' independent observations, see [[Binomial distribution#Normal approximation|notes]].
|-
|Two-proportion z-test, unpooled for <math>|d_0|>0</math>
|align=center|<math>z=\frac{(\hat{p}_1 - \hat{p}_2) - d_0}{\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}}</math>
|''n''<sub>1</sub> ''p''<sub>1</sub> > 5 '''and''' ''n''<sub>1</sub>(1&nbsp;−&nbsp;''p''<sub>1</sub>) > 5 '''and''' ''n''<sub>2</sub> ''p''<sub>2</sub>&nbsp;>&nbsp;5 '''and''' ''n''<sub>2</sub>(1&nbsp;−&nbsp;''p''<sub>2</sub>) > 5 '''and''' independent observations, see [[Binomial distribution#Normal approximation|notes]].
|-
|Chi-squared test for variance
|align=center|<math>\chi^2=(n-1)\frac{s^2}{\sigma^2_0}</math>
|Normal population
|-
|Chi-squared test for goodness of fit
|align=center|<math>\chi^2=\sum^k\frac{(\text{observed}-\text{expected})^2}{\text{expected}}</math>
|''df = k - 1 - # parameters estimated'', and one of these must hold.
• All expected counts are at least 5.<ref>Steel, R.G.D, and Torrie, J. H., ''Principles and Procedures of Statistics with Special Reference to the Biological Sciences.'', [[McGraw Hill]], 1960, page 350.</ref>
 
• All expected counts are >&nbsp;1 and no more than 20% of expected counts are less than&nbsp;5<ref>{{cite book|last=Weiss|first=Neil A.|title=Introductory Statistics|edition=5th|year=1999|pages=802|isbn=0-201-59877-9}}
</ref>
|-
|Two-sample F test for equality of variances
|align=center|<math>F=\frac{s_1^2}{s_2^2}</math>
|Normal populations<br />Arrange so <math>s_1^2 \ge s_2^2</math> and reject H<sub>0</sub> for <math>F > F(\alpha/2,n_1-1,n_2-1)</math><ref>NIST handbook: [http://www.itl.nist.gov/div898/handbook/eda/section3/eda359.htm F-Test for Equality of Two Standard Deviations] (Testing standard deviations the same as testing variances)</ref>
|-
|Regression t-test of <math>H_0\colon r^2=0.</math>
|align=center|<math>t=\sqrt{\frac{r^2(n-k-1^*)}{1-r^2}}</math>
|*Subtract 1 for intercept; ''k'' terms contain independent variables.<br />Reject H<sub>0</sub> for <math>t > t(\alpha/2,n-k-1^*)</math><ref>Steel, R.G.D, and Torrie, J. H., ''Principles and Procedures of Statistics with Special Reference to the Biological Sciences.'', [[McGraw Hill]], 1960, page 288.)</ref>
|-
| colspan=3 | In general, the subscript 0 indicates a value taken from the [[null hypothesis]], H<sub>0</sub>, which should be used as much as possible in constructing its test statistic. ''... Definitions of other symbols:''
{| border="0"
| valign="top" |
* <math>\alpha</math>, the [[probability]] of [[Type I and type II errors|Type I error]] (rejecting a [[null hypothesis]] when it is in fact true)
* <math>n</math> = [[sample size]]
* <math>n_1</math> = sample 1 size
* <math>n_2</math> = sample 2 size
* <math>\overline{x}</math> = [[sample mean]]
* <math>\mu_0</math> = hypothesized [[population mean]]
* <math>\mu_1</math> = population 1 mean
* <math>\mu_2</math> = population 2 mean
* <math>\sigma</math> = [[population standard deviation]]
* <math>\sigma^2</math> = [[population variance]]
* <math>s</math> = [[sample standard deviation (disambiguation)|sample standard deviation]]
* <math>\sum^k</math> = sum (of k numbers)
| valign="top" |
* <math>s^2</math> = [[sample variance]]
* <math>s_1</math> = sample 1 standard deviation
* <math>s_2</math> = sample 2 standard deviation
* <math>t</math> = [[t statistic]]
* <math>df</math> = [[Degrees of freedom (statistics)|degrees of freedom]]
* <math>\overline{d}</math> = sample mean of differences
* <math>d_0</math> = hypothesized population mean difference
* <math>s_d</math> = standard deviation of differences
* <math>\chi^2</math> = [[Chi-squared statistic]]
| valign="top" |
* <math>\hat{p}</math> = ''x/n'' = sample [[ratio|proportion]], unless specified otherwise
* <math>p_0</math> = hypothesized population proportion
* <math>p_1</math> = proportion 1
* <math>p_2</math> = proportion 2
* <math>d_p</math> = hypothesized difference in proportion
* <math>\min\{n_1,n_2\} </math> = minimum of ''n''<sub>1</sub> and ''n''<sub>2</sub>
* <math>x_1 = n_1 p_1</math>
* <math>x_2 = n_2 p_2</math>
* <math>F</math> = [[F statistic]]
|}
|}
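
As a minimal sketch (in Python, assuming the SciPy library and hypothetical data), the one-sample ''t'' statistic from the table can be computed directly from its formula and checked against library output:
<syntaxhighlight lang="python">
# One-sample t statistic computed from the table's formula, t = (x̄ - μ0)/(s/√n),
# and checked against SciPy's built-in one-sample t-test (hypothetical data).
from math import sqrt
from statistics import mean, stdev
from scipy import stats

x = [2.1, 2.5, 1.9, 2.8, 2.3, 2.6, 2.2, 2.4]   # hypothetical sample
mu_0 = 2.0                                     # hypothesized population mean

n = len(x)
t = (mean(x) - mu_0) / (stdev(x) / sqrt(n))    # stdev uses the n-1 denominator
df = n - 1

t_scipy, p_value = stats.ttest_1samp(x, popmean=mu_0)
print(t, df)                # formula value and degrees of freedom
print(t_scipy, p_value)     # SciPy agrees on t; p-value for a two-sided test
</syntaxhighlight>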
 
==Origins and early controversy==
[[File:Origins Of Hybrid Hypothesis Testing.png|thumb|290px|A likely originator of the "hybrid" method of hypothesis testing, as well as the use of "nil" null hypotheses, is [[E.F. Lindquist]] in his statistics textbook: Lindquist, E.F. (1940) Statistical Analysis In Educational Research. Boston: Houghton Mifflin.]]
 
Significance testing is largely the product of [[Karl Pearson]] ([[p-value]], [[Pearson's chi-squared test]]), [[William Sealy Gosset]] ([[Student's t-distribution]]), and [[Ronald Fisher]] ("[[null hypothesis]]", [[analysis of variance]], "[[statistical significance|significance test]]"), while hypothesis testing was developed by [[Jerzy Neyman]] and [[Egon Pearson]] (son of Karl). Ronald Fisher, mathematician and biologist described by [[Richard Dawkins]] as "the greatest biologist since Darwin", began his life in statistics as a Bayesian (Zabell 1992), but Fisher soon grew disenchanted with the subjectivity involved (namely use of the [[principle of indifference]] when determining prior probabilities), and sought to provide a more "objective" approach to inductive inference.<ref name="ftp.isds.duke">Raymond Hubbard, M.J. Bayarri, ''[http://ftp.isds.duke.edu/WorkingPapers/03-26.pdf P Values are not Error Probabilities]''. A working paper that explains the difference between Fisher's evidential p-value and the Neyman–Pearson Type I error rate <math>\alpha</math>.</ref>
 
Fisher was an agricultural statistician who emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an inconsistent hybrid of the Fisher vs Neyman/Pearson formulation, methods and terminology developed in the early 20th century. While hypothesis testing was popularized early in the 20th century, evidence of its use can be found much earlier. In the 1770s [[Pierre-Simon Laplace|Laplace]] considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls.<ref name="Laplace 1778">{{cite journal| last=Laplace| first=P| title=Memoire Sur Les Probabilities|journal=Memoirs de l’Academie royale des Sciences de Paris|year=1778| volume=9| pages=227–332| url=http://cerebro.xu.edu/math/Sources/Laplace/memoir_probabilities.pdf}}</ref> He concluded by calculation of a p-value that the excess was a real, but unexplained, effect.<ref>{{cite book|last=Stigler|first=Stephen M.|title=The History of Statistics: The Measurement of Uncertainty before 1900|publisher=Belknap Press of Harvard University Press|location=Cambridge, Mass|year=1986|isbn=0-674-40340-1|page=134}}</ref>
 
Fisher popularized the "significance test". He required a null-hypothesis (corresponding to a population frequency distribution) and a sample. His (now familiar) calculations determined whether to reject the null-hypothesis or not. Significance testing did not utilize an alternative hypothesis so there was no concept of a Type II error.
 
The p-value was devised as an informal, but objective, index meant to help a researcher determine (based on other knowledge) whether to modify future experiments or strengthen one's [[Fiducial inference|faith]] in the null hypothesis.<ref name="Fisher 1955 69–78">{{cite journal|last=Fisher|first=R|title=Statistical Methods and Scientific Induction|journal=Journal of the Royal Statistical Society, Series B|year=1955 |volume=17|issue=1|pages=69–78|url=http://www.phil.vt.edu/dmayo/PhilStatistics/Triad/Fisher%201955.pdf}}</ref> Hypothesis testing (and Type I/II errors) were devised by Neyman and Pearson as a more objective alternative to Fisher's p-value, also meant to determine researcher behaviour, but without requiring any inductive inference by the researcher.<ref name="Neyman 289–337">{{cite journal|last=Neyman|first=J|title=On the Problem of the most Efficient Tests of Statistical Hypotheses|journal=[[Philosophical Transactions of the Royal Society A]]|date=January 1, 1933|volume=231|issue=694–706|pages=289–337|doi=10.1098/rsta.1933.0009|last2=Pearson|first2=E. S.}}</ref><ref>{{cite journal|last=Goodman|first=S N|title=Toward evidence-based medical statistics. 1: The P Value Fallacy|journal=Ann Intern Med|date=June 15, 1999|volume=130|issue=12|pages=995–1004|url=http://annals.org/article.aspx?articleid=712762|doi=10.7326/0003-4819-130-12-199906150-00008|pmid=10383371}}</ref>
 
Neyman & Pearson considered a different problem (which they called "hypothesis testing"). They initially considered two simple hypotheses (both with frequency distributions). They calculated two probabilities and typically selected the hypothesis associated with the higher probability (the hypothesis more likely to have generated the sample). Their method always selected a hypothesis. It also allowed the calculation of both types of error probabilities.
 
Fisher and Neyman/Pearson clashed bitterly. Neyman/Pearson considered their formulation to be an improved generalization of significance testing. (The defining paper<ref name="Neyman 289–337"/> was [[Neyman–Pearson lemma|abstract]]. Mathematicians have generalized and refined the theory for decades.<ref name="Lehmann93" />) Fisher thought that it was not applicable to scientific research because often, during the course of the experiment, it is discovered that the initial assumptions about the null hypothesis are questionable due to unexpected sources of error. He believed that the use of rigid reject/accept decisions based on models formulated before data are collected was incompatible with this common scenario faced by scientists, and that attempts to apply this method to scientific research would lead to mass confusion.<ref>{{cite journal|last=Fisher|first=R|title=The Nature of Probability|journal=Centennial Review|year=1958|volume=2|pages=261–274|url=http://www.york.ac.uk/depts/maths/histstat/fisher272.pdf}}"We are quite in danger of sending highly-trained and highly intelligent young men out into the world with tables of erroneous numbers under their arms, and with a dense fog in the place where their brains ought to be. In this century, of course, they will be working on guided missiles and advising the medical profession on the control of disease, and there is no limit to the extent to which they could impede every sort of national effort."
</ref>
 
The dispute between Fisher and Neyman-Pearson was waged on philosophical grounds, characterized by a philosopher as a dispute over the proper role of models in statistical inference.<ref name=Lenhard>{{cite journal|last=Lenhard|first=Johannes|title=Models and Statistical Inference: The Controversy between Fisher and Neyman–Pearson|journal=Brit. J. Phil. Sci.|volume=57|pages=69–91|year=2006}}</ref>
 
Events intervened: Neyman accepted a position in the western hemisphere, breaking his partnership with Pearson and separating disputants (who had occupied the same building) by much of the planetary diameter. World War II provided an intermission in the debate. The dispute between Fisher and Neyman terminated (unresolved after 27 years) with Fisher's death in 1962. Neyman wrote a well-regarded eulogy.<ref>{{cite journal|last1=Neyman|first1=Jerzy|title=RA Fisher (1890—1962): An Appreciation.|journal=Science|volume=156.3781|pages=1456–1460|year=1967}}</ref> Some of Neyman's later publications reported p-values and significance levels.<ref>{{cite journal|last1=Losavich|first1=J. L.|last2=Neyman|first2=J.|last3=Scott|first3=E. L.|last4=Wells|first4=M. A.|title=Hypothetical explanations of the negative apparent effects of cloud seeding in the Whitetop Experiment.|journal=Proceedings of the U.S. National Academy of Sciences|year=1971|volume=68|pages=2643–2646}}</ref>
 
The modern version of hypothesis testing is a hybrid of the two approaches that resulted from confusion by writers of statistical textbooks (as predicted by Fisher) beginning in the 1940s.<ref name="Halpin 625–653">{{cite journal|last=Halpin|first=P F|title=Inductive Inference or Inductive Behavior: Fisher and Neyman: Pearson Approaches to Statistical Testing in Psychological Research (1940–1960)|journal=The American Journal of Psychology|date=Winter 2006 |volume=119|issue=4|pages=625–653|jstor=20445367|doi=10.2307/20445367|pmid=17286092|last2=Stam|first2=HJ}}</ref> (But [[Detection theory|signal detection]], for example, still uses the Neyman/Pearson formulation.) Great conceptual differences and many caveats in addition to those mentioned above were ignored. Neyman and Pearson provided the stronger terminology, the more rigorous mathematics and the more consistent philosophy, but the subject taught today in introductory statistics has more similarities with Fisher's method than theirs.<ref name=Gigerenzer>{{cite book|title=The Empire of Chance: How Probability Changed Science and Everyday Life|last=Gigerenzer|first=Gerd|coauthors=Zeno Swijtink, Theodore Porter, Lorraine Daston, John Beatty, Lorenz Kruger|year=1989|publisher=Cambridge University Press|chapter=Part 3: The Inference Experts|isbn=978-0-521-39838-1|pages=70–122}}</ref> This history explains the inconsistent terminology (example: the null hypothesis is never accepted, but there is a region of acceptance).
 
Sometime around 1940,<ref name="Halpin 625–653" /> in an apparent effort to provide researchers with a "non-controversial"<ref name="Gigerenzer 587–606">{{cite journal|last=Gigerenzer|first=G|title=Mindless statistics|journal=The Journal of Socio-Economics|date=November 2004|volume=33|issue=5|pages=587–606|doi=10.1016/j.socec.2004.09.033}}</ref> way to [[Have one's cake and eat it too|have their cake and eat it too]], the authors of statistical text books began anonymously combining these two strategies by using the p-value in place of the [[test statistic]] (or data) to test against the Neyman-Pearson "significance level".<ref name="Halpin 625–653"/> Thus, researchers were encouraged to infer the strength of their data against some [[null hypothesis]] using p-values, while also thinking they are retaining the post-data collection [[Objectivity (science)|objectivity]] provided by hypothesis testing. It then became customary for the null hypothesis, which was originally some realistic research hypothesis, to be used almost solely as a [[strawman]] "nil" hypothesis (one where a treatment has no effect, regardless of the context).<ref>{{cite journal|last=Loftus|first=G R|title=On the Tyranny of Hypothesis Testing in the Social Sciences|journal=Contemporary Psychology|year= 1991 |volume=36|issue=2|pages=102–105|url=https://www.ics.uci.edu/~sternh/courses/210/loftus91_tyranny.pdf}}</ref>
 
'''A comparison between Fisherian null hypothesis testing and Neyman–Pearson decision theory'''
{|class="wikitable"
|-
! Fisher's null hypothesis testing !! Neyman–Pearson decision theory
|-
|1. Set up a statistical null hypothesis. The null need not be a nil hypothesis (i.e., zero difference).
|| 1. Set up two statistical hypotheses, H1 and H2, and decide about α, β, and sample size before the experiment, based on subjective cost-benefit considerations. These define a rejection region for each hypothesis.
|-
| 2. Report the exact level of significance (e.g., p = 0.051 or p = 0.049). Do not use a conventional 5% level, and do not talk about accepting or rejecting hypotheses. If the result is "not significant", draw no conclusions and make no decisions, but suspend judgement until further data is available.
|| 2. If the data falls into the rejection region of H1, accept H2; otherwise accept H1. Note that accepting a hypothesis does not mean that you believe in it, but only that you act as if it were true.
|-
| 3. Use this procedure only if little is known about the problem at hand, and only to draw provisional conclusions in the context of an attempt to understand the experimental situation.
|| 3. The usefulness of the procedure is limited, among other situations, to those where you have a disjunction of hypotheses (e.g., either μ1 = 8 or μ2 = 10 is true) and where you can make meaningful cost-benefit trade-offs for choosing alpha and beta.
|}
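The contrast between the two columns can be made concrete with a minimal sketch. The data, the choice of a one-sample ''t''-test, and the 5% level below are illustrative assumptions rather than anything prescribed by either formulation; the point is only that the same computation feeds two different styles of reporting.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative data) contrasting Fisher-style reporting with a
# Neyman-Pearson-style decision, using SciPy's one-sample t-test.
from scipy import stats

data = [5.1, 4.9, 5.6, 5.2, 4.8, 5.4, 5.3, 5.0]   # hypothetical sample
mu0 = 5.0                                          # hypothesized mean

t_stat, p_value = stats.ttest_1samp(data, popmean=mu0)

# Fisher: report the exact significance level; if the evidence is weak,
# draw no conclusion and suspend judgement.
print(f"Fisher-style report: t = {t_stat:.3f}, p = {p_value:.3f}")

# Neyman-Pearson: alpha (and, through the sample size, beta) is fixed before
# the experiment, and the data force one of two actions.
alpha = 0.05
if p_value < alpha:
    print("Neyman-Pearson decision: act as if H2 (mu != 5) were true")
else:
    print("Neyman-Pearson decision: act as if H1 (mu = 5) were true")
</syntaxhighlight>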
 
<!--{{Quote|text=We are inclined to think that no test based upon a theory of probability can by itself provide any valuable evidence of the truth or falsehood of a hypothesis. But we may look at the purpose of tests from another viewpoint. Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not often be wrong.
 
|sign= Neyman (1933)| source = <ref name="Neyman 289–337"/> }}-->
 
===Early Choices of Null Hypothesis===
[[Paul Meehl]] has argued that the [[epistemological]] importance of the choice of null hypothesis has gone largely unacknowledged. When the null hypothesis is predicted by theory, a more precise experiment will be a more severe test of the underlying theory. When the null hypothesis defaults to "no difference" or "no effect", a more precise experiment is a less severe test of the theory that motivated performing the experiment.<ref>{{cite journal| last=Meehl| first=P| title=Appraising and Amending Theories: The Strategy of Lakatosian Defense and Two Principles That Warrant It|journal=Psychological Inquiry|year=1990| volume=1| issue=2| pages=108–141| url=http://rhowell.ba.ttu.edu/meehl1.pdf}}</ref> An examination of the origins of the latter practice may therefore be useful:
 
'''1778:''' [[Pierre Laplace]] compares the birthrates of boys and girls in multiple European cities. He states: "it is natural to conclude that these possibilities are very nearly in the same ratio". Thus Laplace's null hypothesis was that the birthrates of boys and girls should be equal, given "conventional wisdom".<ref name="Laplace 1778"/>
 
'''1900:''' [[Karl Pearson]] develops the [[chi squared test]] to determine "whether a given form of frequency curve will effectively describe the samples drawn from a given population." Thus the null hypothesis is that a population is described by some distribution predicted by theory. He uses as an example the numbers of fives and sixes in the [[Walter Frank Raphael Weldon|Weldon dice throw data]].<ref name="Pearson 1900">{{cite journal| last=Pearson| first=K| title= On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling|year=1900| journal= Philosophical Magazine Series| volume=5| issue=50| pages=157–175| url=http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf | doi=10.1080/14786440009463897}}</ref>
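A minimal sketch of Pearson's goodness-of-fit test, using a simpler stand-in for the dice example (hypothetical counts of the six faces of a single die rather than Weldon's actual data), is:

<syntaxhighlight lang="python">
# Pearson chi-squared goodness-of-fit test: are the (hypothetical) face counts
# consistent with the uniform distribution predicted by theory?
from scipy import stats

observed = [103, 98, 109, 95, 106, 89]    # hypothetical counts of faces 1..6
n = sum(observed)
expected = [n / 6] * 6                    # fair-die prediction

chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")   # a large p gives no reason to reject the theory
</syntaxhighlight>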
 
'''1904:''' [[Karl Pearson]] develops the concept of "[[contingency table|contingency]]" in order to determine whether outcomes are [[statistical independence|independent]] of a given categorical factor. Here the null hypothesis is by default that two things are unrelated (e.g. scar formation and death rates from smallpox).<ref name="Pearson 1904">{{cite journal| last=Pearson| first=K| title= On the Theory of Contingency and Its Relation to Association and Normal Correlation|year=1904| journal= Drapers' Company Research Memoirs Biometric Series| volume=1| pages=1–35| url=https://ia700408.us.archive.org/18/items/cu31924003064833/cu31924003064833.pdf}}</ref> The null hypothesis in this case is no longer predicted by theory or conventional wisdom, but is instead the [[principle of indifference]] that led [[Ronald Fisher|Fisher]] and others to dismiss the use of "inverse probabilities".<ref>{{cite journal| last=Zabell| first=S| title= R. A. Fisher on the History of Inverse Probability|year=1989| journal= Statistical Science| volume=4| issue=3| pages=247–256| jstor=2245634}}</ref>
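A minimal sketch of the corresponding test of independence on a hypothetical 2×2 contingency table (the counts are invented for illustration, not Pearson's smallpox data) is:

<syntaxhighlight lang="python">
# Chi-squared test of independence: is the row factor (e.g. scarring) unrelated
# to the column factor (e.g. outcome)?  Counts are purely illustrative.
from scipy import stats

table = [[30, 70],     # row 1: scarring present   (hypothetical)
         [55, 45]]     # row 2: scarring absent    (hypothetical)

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# The default null hypothesis is that the two factors are independent.
</syntaxhighlight>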
 
==Null hypothesis statistical significance testing vs hypothesis testing==
An example of Neyman-Pearson hypothesis testing can be made by a change to the radioactive suitcase example. If the "suitcase" is actually a shielded container for the transportation of radioactive material, then a test might be used to select among three hypotheses: no radioactive source present, one present, two (all) present. The test could be required for safety, with actions required in each case. The [[Neyman-Pearson lemma]] of hypothesis testing says that a good criterion for the selection of hypotheses is the ratio of their probabilities (a likelihood ratio). A simple method of solution is to select the hypothesis with the highest probability for the Geiger counts observed. The typical result matches intuition: few counts imply no source, many counts imply two sources and intermediate counts imply one source.
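The maximum-likelihood selection described above can be sketched as follows; the background rate and the per-source count rate are assumptions invented for the illustration:

<syntaxhighlight lang="python">
# Choose among 0, 1 or 2 radioactive sources by picking the hypothesis that
# gives the observed Geiger count the highest Poisson probability.
from scipy import stats

background = 2.0      # assumed mean background counts per interval
per_source = 10.0     # assumed mean counts contributed by each source

def most_likely_sources(observed_counts):
    likelihoods = {k: stats.poisson.pmf(observed_counts, background + k * per_source)
                   for k in (0, 1, 2)}
    return max(likelihoods, key=likelihoods.get)

for counts in (3, 11, 24):
    print(counts, "counts ->", most_likely_sources(counts), "source(s)")
# Few counts point to no source, intermediate counts to one, many counts to two.
</syntaxhighlight>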
 
Neyman-Pearson theory can accommodate both prior probabilities and the costs of actions resulting from decisions.<ref name="Ash">{{cite book | last = Ash | first = Robert | title = Basic probability theory | publisher = Wiley | location = New York | year = 1970 | isbn = 978-0471034506 }}Section 8.2</ref> The former allows each test to consider the results of earlier tests (unlike Fisher's significance tests). The latter allows the consideration of economic issues (for example) as well as probabilities. A likelihood ratio remains a good criterion for selecting among hypotheses.
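Continuing the hypothetical sketch above, prior probabilities and action costs can enter the decision by choosing the action with the smallest posterior expected cost; all numbers here are illustrative assumptions rather than anything prescribed by the theory:

<syntaxhighlight lang="python">
# Decision with priors and costs: pick the declared number of sources that
# minimizes the expected cost given the observed count.  Illustrative numbers only.
from scipy import stats

background, per_source = 2.0, 10.0
prior = {0: 0.90, 1: 0.08, 2: 0.02}                    # assumed prior probabilities
cost = {0: {0: 0, 1: 50, 2: 200},                      # cost[declared][true number of sources]
        1: {0: 5, 1: 0, 2: 50},
        2: {0: 10, 1: 5, 2: 0}}

def best_action(observed_counts):
    joint = {k: prior[k] * stats.poisson.pmf(observed_counts, background + k * per_source)
             for k in prior}
    total = sum(joint.values())
    posterior = {k: v / total for k, v in joint.items()}
    expected_cost = {a: sum(posterior[k] * cost[a][k] for k in posterior) for a in cost}
    return min(expected_cost, key=expected_cost.get)

print(best_action(11))   # with these numbers, an intermediate count still points to one source
</syntaxhighlight>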
 
The two forms of hypothesis testing are based on different problem formulations. The original test is analogous to a true/false question; the Neyman-Pearson test is more like multiple choice. In the view of [[John Tukey|Tukey]]<ref name="Tukey60" /> the former produces a conclusion on the basis of only strong evidence while the latter produces a decision on the basis of available evidence. While the two tests seem quite different both mathematically and philosophically, later developments led to the opposite claim. Consider many tiny radioactive sources. The hypotheses become 0, 1, 2, 3, ... grains of radioactive sand. There is little distinction between none or some radiation (Fisher) and 0 grains of radioactive sand versus all of the alternatives (Neyman-Pearson). The major Neyman-Pearson paper of 1933<ref name="Neyman 289–337" /> also considered composite hypotheses (ones whose distribution includes an unknown parameter). An example proved the optimality of the (Student's) t-test, "there can be no better test for the hypothesis under consideration" (p 321). Neyman-Pearson theory was proving the optimality of Fisherian methods from its inception.
 
Fisher's significance testing has proven a popular, flexible statistical tool in application, but with little potential for mathematical growth. Neyman-Pearson hypothesis testing is claimed as a pillar of mathematical statistics,<ref>{{cite journal
| last = Stigler | first = Stephen M.
| title = The History of Statistics in 1933
| journal = Statistical Science
| volume = 11 | issue = 3 | pages = 244–252 | date = Aug 1996
| jstor=2246117}}</ref> creating a new paradigm for the field. It also stimulated new applications in [[Statistical process control]], [[detection theory]], [[decision theory]] and [[game theory]]. Both formulations have been successful, but the successes have been of a different character.
 
The dispute over formulations is unresolved. Science primarily uses Fisher's (slightly modified) formulation as taught in introductory statistics. Statisticians study Neyman-Pearson theory in graduate school. Mathematicians are proud of uniting the formulations. Philosophers consider them separately. Learned opinions deem the formulations variously competitive (Fisher vs Neyman), incompatible<ref name="ftp.isds.duke" /> or complementary.<ref name="Lehmann93" /> The dispute has become more complex since Bayesian inference has achieved respectability.
 
The terminology is inconsistent. Hypothesis testing can mean any mixture of two formulations that both changed with time. Any discussion of significance testing vs hypothesis testing is doubly vulnerable to confusion.
 
Fisher thought that hypothesis testing was a useful strategy for performing industrial quality control; however, he strongly disagreed that hypothesis testing could be useful for scientists.<ref name="Fisher 1955 69–78"/>
Hypothesis testing provides a means of finding test statistics used in significance testing.<ref name="Lehmann93" /> The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in [[sample size determination]]. The two methods remain philosophically distinct.<ref name=Lenhard/> They usually (but ''not always'') produce the same mathematical answer. The preferred answer is context dependent.<ref name="Lehmann93">{{cite journal|last=Lehmann|first=E. L.|title=The Fisher, Neyman-Pearson Theories of Testing Hypotheses: One Theory or Two?|journal=Journal of the American Statistical Association|volume=88|issue=424|pages=1242–1249|date=December 1993}}</ref> While the existing merger of Fisher and Neyman-Pearson theories has been heavily criticized, modifying the merger to achieve Bayesian goals has been considered.<ref>{{cite journal|last=Berger|first=James O.|title=Could Fisher, Jeffreys and Neyman Have Agreed on Testing?|journal=Statistical Science|volume=18|issue=1|pages=1–32|year=2003}}</ref>
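The role of power in sample size determination can be sketched for a one-sided ''z''-test of a mean; the effect size, standard deviation and target power below are illustrative assumptions:

<syntaxhighlight lang="python">
# Power of a one-sided z-test and the sample size needed for a target power,
# using the standard normal-approximation formulas.  Illustrative parameters.
import math
from scipy import stats

def power(n, effect=0.5, sigma=1.0, alpha=0.05):
    """Probability of rejecting the null when the true shift is `effect`."""
    z_crit = stats.norm.ppf(1 - alpha)
    return 1 - stats.norm.cdf(z_crit - effect * math.sqrt(n) / sigma)

def required_n(target_power=0.8, effect=0.5, sigma=1.0, alpha=0.05):
    """Smallest n giving at least the target power (closed-form approximation)."""
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(target_power)
    return math.ceil(((z_a + z_b) * sigma / effect) ** 2)

print(f"power at n = 20: {power(20):.2f}")       # lowering alpha lowers this value
print(f"n for 80% power: {required_n()}")
</syntaxhighlight>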
 
==Criticism==
Criticism of statistical hypothesis testing fills volumes<ref name=morrison>{{cite book|origyear=1970|year=2006|title=The Significance Test Controversy|editor=Morrison, Denton; Henkel, Ramon|publisher=AldineTransaction|isbn=0-202-30879-0}}</ref><ref>{{cite book|last=Oakes|first=Michael|title=Statistical Inference: A Commentary for the Social and Behavioural Sciences|publisher=Wiley|location=Chichester New York|year=1986|isbn=0471104434}}</ref><ref name=chow>{{cite book|first=Siu L.|last=Chow|year=1997|title=Statistical Significance: Rationale, Validity and Utility|isbn=0-7619-5205-5}}</ref><ref name=harlow>{{cite book|year=1997|title=What If There Were No Significance Tests?|editor=Harlow, Lisa Lavoie; Stanley A. Mulaik; James H. Steiger|publisher=Lawrence Erlbaum Associates|isbn=978-0-8058-2634-0}}</ref><ref name=kline>{{cite book|last=Kline|first=Rex|title=Beyond Significance Testing: Reforming Data Analysis Methods in Behavioral Research|publisher=American Psychological Association|location=Washington, DC|year=2004|isbn=9781591471189 }}</ref><ref name=mccloskey>{{cite book|last=McCloskey|first=Deirdre N.|coauthors=Stephen T. Ziliak|year=2008|title=The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives|publisher=University of Michigan Press|isbn=0-472-05007-9}}</ref> citing 300–400 primary references. Much of the criticism can
be summarized by the following issues:
* The interpretation of a p-value depends upon the stopping rule and the definition of multiple comparisons. The former often changes during the course of a study and the latter is unavoidably ambiguous (i.e. "p values depend on both the (data) observed and on the other possible (data) that might have been observed but weren't").<ref>{{cite journal|last=Cornfield|first=Jerome|title=Recent Methodological Contributions to Clinical Trials| journal=American Journal of Epidemiology|volume=104|issue=4|pages=408–421|year=1976|url=http://www.epidemiology.ch/history/PDF%20bg/Cornfield%20J%201976%20recent%20methodological%20contributions.pdf}}</ref>
* Confusion resulting (in part) from combining the methods of Fisher and Neyman-Pearson which are conceptually distinct.<ref name="Tukey60">{{cite journal|last=Tukey|first=John W.|title=Conclusions vs decisions|journal=Technometrics|volume=26|issue=4|pages=423–433|year=1960}} "Until we go through the accounts of testing hypotheses, separating [Neyman-Pearson] decision elements from [Fisher] conclusion elements, the intimate mixture of disparate elements will be a continual source of confusion." ... "There is a place for both "doing one's best" and "saying only what is certain," but it is important to know, in each instance, both which one is being done, and which one ought to be done."</ref>
* Emphasis on statistical significance to the exclusion of estimation and confirmation by repeated experiments.<ref>{{cite journal|last=Yates|first=Frank|title=The Influence of Statistical Methods for Research Workers on the Development of the Science of Statistics|journal=Journal of the American Statistical Association|volume=46|pages=19–34|year=1951}} "The emphasis given to formal tests of significance throughout [R.A. Fisher's] Statistical Methods ... has caused scientific research workers to pay undue attention to the results of the tests of significance they perform on their data, particularly data derived from experiments, and too little to the estimates of the magnitude of the effects they are investigating." ... "The emphasis on tests of significance and the consideration of the results of each experiment in isolation, have had the unfortunate consequence that scientific workers have often regarded the execution of a test of significance on an experiment as the ultimate objective."</ref>
* Rigidly requiring statistical significance as a criterion for publication, resulting in [[publication bias]].<ref>{{cite journal|last1=Begg|first1=Colin B.|last2=Berlin|first2=Jesse A.|title=Publication bias: a problem in interpreting medical data|journal=Journal of the Royal Statistical Society, Series A|pages=419–463|year=1988}}</ref> Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused.
* When used to detect whether a difference exists between groups, a paradox arises. As improvements are made to experimental design (e.g., increased precision of measurement and sample size), the test becomes more lenient. Unless one accepts the absurd assumption that all sources of noise in the data cancel out completely, the chance of finding statistical significance in either direction approaches 100%.<ref>{{cite journal|last=Meehl|first=Paul E.|title=Theory-Testing in Psychology and Physics: A Methodological Paradox|journal=Philosophy of Science|volume=34|issue=2|pages=103–115|year=1967|url=http://mres.gmu.edu/pmwiki/uploads/Main/Meehl1967.pdf}} Thirty years later, Meehl acknowledged statistical significance theory to be mathematically sound while continuing to question the default choice of null hypothesis, blaming instead the "social scientists’ poor understanding of the logical relation between theory and fact" in "The Problem Is Epistemology, Not Statistics: Replace Significance Tests by Confidence Intervals and Quantify Accuracy of Risky Numerical Predictions" (Chapter 14 in Harlow (1997)).</ref>
*Layers of philosophical concerns. The probability of statistical significance is a function of decisions made by experimenters/analysts.<ref name=bakan66 /> If the decisions are based on convention they are termed arbitrary or mindless<ref name="Gigerenzer 587–606" /> while those not so based may be termed subjective. To minimize type II errors, large samples are recommended. In psychology practically all null hypotheses are claimed to be false for sufficiently large samples so "...it is usually nonsensical to perform an experiment with the ''sole'' aim of rejecting the null hypothesis.".<ref>{{cite journal
| last = Nunnally
| first = Jum
| title = The place of statistics in psychology
| journal = Educational and Psychological Measurement
| volume = 20
| number = 4
| pages = 641–650
| year = 1960 }}</ref> "Statistically significant findings are often misleading" in psychology.<ref>{{cite journal
| last = Lykken
| first = David T.
| title = What's wrong with psychology, anyway?
| journal = Thinking Clearly About Psychology
| volume = 1
| pages = 3–39
| year = 1991}}</ref> Statistical significance does not imply practical significance and [[correlation does not imply causation]]. Casting doubt on the null hypothesis is thus far from directly supporting the research hypothesis.
"[I]t does not tell us what we want to know".<ref name=cohen94/> Lists of dozens of complaints are available.<ref name=kline/><ref name="nickerson">{{cite journal|author=Nickerson, Raymond S.|title=Null Hypothesis Significance Tests: A Review of an Old and Continuing Controversy|journal=Psychological Methods|volume=5|issue=2|pages=241–301|year=2000|doi=10.1037/1082-989X.5.2.241|pmid=10937333}}</ref>
 
Critics and supporters are largely in factual agreement regarding the characteristics of NHST: While it can provide critical information, it is ''inadequate as the sole tool for statistical analysis''. ''Successfully rejecting the null hypothesis may offer no support for the research hypothesis.'' The continuing controversy concerns the selection of the best statistical practices for the near-term future given the (often poor) existing practices. Critics would prefer to ban NHST completely, forcing a complete departure from those practices, while supporters suggest a less absolute change.
 
Controversy over significance testing, and its effects on publication bias in particular, has produced several results. The American Psychological Association has strengthened its statistical reporting requirements after review,<ref name=wilkinson>{{cite journal|author=Wilkinson, Leland|title=Statistical Methods in Psychology Journals; Guidelines and Explanations|journal=American Psychologist|volume=54|issue=8|pages=594–604|year=1999|doi=10.1037/0003-066X.54.8.594}} "Hypothesis tests. It is hard to imagine a situation in which a dichotomous accept-reject decision is better than reporting an actual p value or, better still, a confidence interval." (p 599). The committee used the cautionary term "forbearance" in describing its decision against a ban of hypothesis testing in psychology reporting. (p 603)</ref> medical journal publishers have recognized the obligation to publish some results that are not statistically significant to combat publication bias<ref>{{cite web|url=http://www.icmje.org/publishing_1negative.html|title=ICMJE: Obligation to Publish Negative Studies|accessdate=3 September 2012|quote=Editors should seriously consider for publication any carefully done study of an important question, relevant to their readers, whether the results for the primary or any additional outcome are statistically significant. Failure to submit or publish findings because of lack of statistical significance is an important cause of publication bias.}}</ref> and a journal (''Journal of Articles in Support of the Null Hypothesis'') has been created to publish such results exclusively.<ref name=JASNH>''Journal of Articles in Support of the Null Hypothesis'' website: [http://www.jasnh.com/ JASNH homepage]. Volume 1 number 1 was published in 2002, and all articles are on psychology-related subjects.</ref> Textbooks have added some cautions<ref>{{cite book|title=Statistical Methods for Psychology|last=Howell|first=David|year=2002|publisher=Duxbury|edition=5|isbn=0-534-37770-X|page=94}}</ref> and increased coverage of the tools necessary to estimate the size of the sample required to produce significant results. Major organizations have not abandoned use of significance tests although some have discussed doing so.<ref name=wilkinson/>
 
===Alternatives to significance testing===
The numerous criticisms of significance testing do not lead to a single alternative or even to a unified set of alternatives. A unifying position of critics is that statistics should not lead to an accept-reject conclusion or decision regarding a particular hypothesis, but to a probability or to an estimated value with a [[confidence interval]]. It is unlikely that the controversy surrounding significance testing will be resolved in the near future. Its supposed flaws and unpopularity do not eliminate the need for an objective and transparent means of reaching conclusions regarding studies that produce statistical results. Critics have not unified around an alternative. Other forms of reporting confidence or uncertainty could probably grow in popularity. One strong critic of significance testing suggested a list of reporting alternatives:<ref name=Armstrong1>{{cite journal|author=Armstrong, J. Scott|title=Significance tests harm progress in forecasting|journal=International Journal of Forecasting|volume=23|pages=321–327|year=2007|url=http://repository.upenn.edu/cgi/viewcontent.cgi?article=1104&context=marketing_papers|doi=10.1016/j.ijforecast.2007.03.004|issue=2}}</ref> effect sizes for importance, prediction intervals for confidence, replications and extensions for replicability, meta-analyses for generality. None of these suggested alternatives produces a conclusion/decision. Lehmann said that hypothesis testing theory can be presented in terms of conclusions/decisions, probabilities, or confidence intervals. "The distinction between the ... approaches is largely one of reporting and interpretation."<ref name=Lehmann97>{{cite journal|author=E. L. Lehmann|title=Testing Statistical Hypotheses: The Story of a Book|journal=Statistical Science|volume=12|issue=1|pages=48–52|year=1997|doi=10.1214/ss/1029963261}}</ref>
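Two of the suggested reporting alternatives, an effect size and a confidence interval, can be sketched on hypothetical two-group data (the numbers below are invented for illustration):

<syntaxhighlight lang="python">
# Cohen's d (effect size for importance) and a 95% confidence interval for the
# mean difference, computed on hypothetical measurements from two groups.
import numpy as np
from scipy import stats

group_a = np.array([5.1, 4.9, 5.6, 5.2, 4.8, 5.4])   # hypothetical data
group_b = np.array([4.6, 4.8, 4.5, 5.0, 4.4, 4.7])

diff = group_a.mean() - group_b.mean()
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd

df = len(group_a) + len(group_b) - 2
se = pooled_sd * np.sqrt(1 / len(group_a) + 1 / len(group_b))
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

print(f"Cohen's d = {cohens_d:.2f}, 95% CI for the difference: [{ci[0]:.2f}, {ci[1]:.2f}]")
</syntaxhighlight>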
 
On one "alternative" there is no disagreement: Fisher himself said,<ref name=fisher /> "In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result." Cohen, an influential critic of significance testing, concurred,<ref name=cohen94>{{cite journal|author=Jacob Cohen|title=The Earth Is Round (p < .05)|journal=American Psychologist|volume=49|issue=12|pages=997–1003|date=December 1994|doi=10.1037/0003-066X.49.12.997}} This paper lead to the review of statistical practices by the APA. Cohen was a member of the Task Force that did the review.</ref> "... don't look for a magic alternative to NHST ''[null hypothesis significance testing]'' ... It doesn't exist." "... given the problems of statistical induction, we must finally rely, as have the older sciences, on replication." The "alternative" to significance testing is repeated testing. The easiest way to decrease statistical uncertainty is by obtaining more data, whether by increased sample size or by repeated tests. Nickerson claimed to have never seen the publication of a literally replicated experiment in psychology.<ref name=nickerson /> An indirect approach to replication is [[meta-analysis]].
 
[[Bayesian inference]] is one alternative to significance testing.{{citation needed|date=January 2013}} For example, Bayesian [[parameter estimation]] can provide rich information about the data from which researchers can draw inferences, while using uncertain [[priors]] that exert only minimal influence on the results when enough data is available. Psychologist John K. Kruschke has suggested Bayesian estimation as an alternative to the [[t-test]].<ref>{{cite journal|last=Kruschke|first=J K|title=Bayesian Estimation Supersedes the T Test|journal=Journal of Experimental Psychology: General|date=July 9, 2012 |volume=N/A|issue=N/A|pages=N/A|doi=10.1037/a0029146}}</ref> Alternatively, two competing models/hypotheses can be compared using [[Bayes factors]].<ref>{{cite journal|last=Kass|first=R E
|title=Bayes factors and model uncertainty|year=1993|url=http://www.stat.washington.edu/research/reports/1993/tr254.pdf}}Department of Statistics, University of Washington Technical Paper</ref> Bayesian methods could be criticized for requiring information that is seldom available in the cases where significance testing is most heavily used.{{citation needed|date=November 2012}}
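A minimal sketch of a Bayes-factor comparison, for a binomial proportion with a point null (θ = 0.5) against an alternative that places a uniform prior on θ, is given below; the data are hypothetical:

<syntaxhighlight lang="python">
# Bayes factor for H1 (theta uniform on [0, 1]) against H0 (theta = 0.5),
# computed from the marginal likelihoods of hypothetical binomial data.
from scipy import stats
from scipy.integrate import quad

k, n = 61, 100                                  # hypothetical successes and trials

marginal_null = stats.binom.pmf(k, n, 0.5)      # likelihood under the point null
marginal_alt, _ = quad(lambda theta: stats.binom.pmf(k, n, theta), 0, 1)

bayes_factor_10 = marginal_alt / marginal_null  # > 1 favours the alternative
print(f"BF10 = {bayes_factor_10:.2f}")
</syntaxhighlight>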
 
Advocates of a Bayesian approach sometimes claim that the goal of a researcher is most often to [[objectivity (science)|objectively]] assess the [[probability]] that a [[hypothesis]] is true based on the data they have collected.{{citation needed|date=January 2013}} Neither [[Ronald Fisher|Fisher]]'s significance testing nor [[Neyman–Pearson lemma|Neyman-Pearson]] hypothesis testing can provide this information, and neither claims to. The probability that a hypothesis is true can only be derived from the use of [[Bayes' Theorem]], which was unsatisfactory to both the Fisher and Neyman-Pearson camps due to the explicit use of [[subjectivity]] in the form of the [[prior probability]].<ref name="Neyman 289–337"/><ref>{{cite journal|last=Aldrich|first=J|title=R. A. Fisher on Bayes and Bayes' theorem|journal=Bayesian Analysis|year= 2008 |volume=3|issue=1|pages=161–170|url=http://ba.stat.cmu.edu/journal/2008/vol03/issue01/aldrich.pdf|doi=10.1214/08-BA306}}</ref> Fisher's strategy is to sidestep this with the [[p-value]] (an objective ''index'' based on the data alone) followed by ''inductive inference'', while Neyman-Pearson devised their approach of ''inductive behaviour''.
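The dependence on a prior can be seen in a one-line application of Bayes' theorem; every number below is an illustrative assumption:

<syntaxhighlight lang="python">
# Posterior probability of a hypothesis from an assumed prior and the
# likelihood of the observed data under each hypothesis.
prior_h = 0.10                  # assumed prior probability that the hypothesis is true
p_data_given_h = 0.80           # assumed P(data | hypothesis true)
p_data_given_not_h = 0.30       # assumed P(data | hypothesis false)

posterior_h = (p_data_given_h * prior_h) / (
    p_data_given_h * prior_h + p_data_given_not_h * (1 - prior_h))
print(f"P(hypothesis | data) = {posterior_h:.2f}")   # about 0.23 with these assumptions
</syntaxhighlight>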
 
==Philosophy==
Hypothesis testing and philosophy intersect. [[Inferential statistics]],
which includes hypothesis testing, is applied probability. Both
probability and its application are intertwined with philosophy.
Philosopher [[David Hume]] wrote, "All knowledge degenerates into
probability." Competing practical definitions of
[[Probability#Interpretations|probability]] reflect philosophical
differences. The most common application of hypothesis testing is in
the scientific interpretation of experimental data, which is naturally
studied by the [[philosophy of science]].
 
Fisher and Neyman opposed the subjectivity of probability.
Their views contributed to the objective definitions. The core of
their historical disagreement was philosophical.
 
Many of the philosophical criticisms of hypothesis testing are
discussed by statisticians in other contexts, particularly
[[correlation does not imply causation]] and the [[design of experiments]].
Hypothesis testing is of continuing interest to philosophers.<ref name=Lenhard/><ref name="doi10.1093/bjps/axl003">
{{cite doi|10.1093/bjps/axl003}}</ref>
 
==Education==
{{main|Statistics education}}
Statistics is increasingly being taught in schools with hypothesis testing being one of the elements taught.<ref>[http://www.corestandards.org/the-standards/mathematics/hs-statistics-and-probability/introduction/ Mathematics > High School: Statistics & Probability > Introduction] Common Core State Standards Initiative (relates to USA students)</ref><ref>[http://www.collegeboard.com/student/testing/ap/sub_stats.html College Board Tests > AP: Subjects > Statistics] The College Board (relates to USA students)</ref> Many conclusions reported in the popular press (political opinion polls to medical studies) are based on statistics. An informed public should understand the limitations of statistical conclusions<ref name=Huff8>{{cite book|last=Huff|first=Darrell|title=How to lie with statistics|publisher=Norton|location=New York|year=1993|isbn=0-393-31072-8|page=8}}'Statistical methods and statistical terms are necessary in reporting the mass data of social and economic trends, business conditions, "opinion" polls, the census. But without writers who use the words with honesty and readers who know what they mean, the result can only be semantic nonsense.'</ref><ref name=S&C>{{cite book|last1=Snedecor|first1=George W.|last2=Cochran|first2=William G.|title=Statistical Methods|publisher=Iowa State University Press|location=Ames, Iowa|year=1967|edition=6|page=3}} "...the basic ideas in statistics assist us in thinking clearly about
the problem, provide some guidance about the conditions that must be satisfied if sound inferences are to be made, and enable us to detect many inferences that have no good logical foundation."</ref>{{citation needed|date=April 2012}} and many college fields of study require a course in statistics for the same reason.<ref name=Huff8/><ref name=S&C/>{{citation needed|date=April 2012}} An introductory college statistics class places much emphasis on hypothesis testing – perhaps half of the course. Such fields as literature and divinity now include findings based on statistical analysis (see the [[Bible Analyzer]]). An introductory statistics class teaches hypothesis testing as a cookbook process. Hypothesis testing is also taught at the postgraduate level. Statisticians learn how to create good statistical test procedures (like ''z'', Student's ''t'', ''F'' and chi-squared). Statistical hypothesis testing is considered a mature area within statistics,<ref name=Lehmann97/> but a limited amount of development continues.
 
The cookbook method of teaching introductory statistics leaves no time for history, philosophy or controversy. Hypothesis testing has been taught as a received, unified method. Surveys showed that graduates of the class were filled with philosophical misconceptions (on all aspects of statistical inference) that persisted among instructors.<ref>{{cite journal|last1=Sotos|first1=Ana Elisa Castro|last2=Vanhoof|first2=Stijn|last3=Noortgate|first3=Wim Van den|last4=Onghena|first4=Patrick|title=Students' Misconceptions of Statistical Inference: A Review of the Empirical Evidence from Research on Statistics Education|journal=Educational Research Review|volume=2|pages=98–113|year=2007}}</ref> While the problem was addressed more than a decade ago,<ref>{{cite journal|last=Moore|first=David S.|title=New Pedagogy and New Content: The Case of Statistics|journal=International Statistical Review|volume=65|pages=123–165|year=1997}}</ref> and calls for educational reform continue,<ref>{{cite doi|10.1177/0273475306288399}} [http://escholarshare.drake.edu/bitstream/handle/2092/413/WhyWeDon't.pdf Preprint]</ref> students still graduate from statistics classes holding fundamental misconceptions about hypothesis testing.<ref>{{cite journal|last1=Sotos|first1=Ana Elisa Castro|last2=Vanhoof|first2=Stijn|last3=Noortgate|first3=Wim Van den|last4=Onghena|first4=Patrick|title=How Confident Are Students in Their Misconceptions about Hypothesis Tests?|journal=Journal of Statistics Education|volume=17|number=2|year=2009}}</ref> Ideas for improving the teaching of hypothesis testing include encouraging students to search for statistical errors in published papers, teaching the history of statistics and emphasizing the controversy in a generally dry subject.<ref name="Gigerenzer 2004 391–408">{{cite journal|last=Gigerenzer|first=G|title=The Null Ritual What You Always Wanted to Know About Significant Testing but Were Afraid to Ask|journal=The SAGE Handbook of Quantitative Methodology for the Social Sciences|year=2004|pages=391–408|url=http://library.mpib-berlin.mpg.de/ft/gg/GG_Null_2004.pdf|doi=10.4135/9781412986311}}</ref>
 
==See also==
{{Portal|Statistics}}
{{Commons category|Statistical hypothesis testing}}
{{Div col|3}}
* [[Behrens–Fisher problem]]
* [[Bootstrapping (statistics)]]
* [[Checking if a coin is fair]]
* [[Comparing means]] test decision tree
* [[Complete spatial randomness]]
* [[Counternull]]
* [[Falsifiability]]
* [[Fisher's method]] for combining [[Statistical independence|independent]] [[Statistical significance|tests of significance]]
* [[Granger causality]]
* [[Look-elsewhere effect]]
* [[Modifiable areal unit problem]]
* [[Omnibus test]]
{{Div col end}}
 
==References==
{{Reflist|30em}}
 
==Further reading==
* Lehmann E.L. (1992) "Introduction to Neyman and Pearson (1933) On the Problem of the Most Efficient Tests of Statistical Hypotheses". In: ''Breakthroughs in Statistics, Volume 1'', (Eds Kotz, S., Johnson, N.L.), Springer-Verlag. ISBN 0-387-94037-5 (followed by reprinting of the paper)
* {{cite journal|doi=10.1098/rsta.1933.0009|last1=Neyman|first1=J.|last2=Pearson|first2=E.S.|year=1933|title=On the Problem of the Most Efficient Tests of Statistical Hypotheses|journal=[[Philosophical Transactions of the Royal Society A]]|volume=231|issue=694–706| pages=289–337}}
 
==External links==
{{Wikiversity|at=Introduction to Statistical Analysis/Unit 5 Content}}
* {{springer|title=Statistical hypotheses, verification of|id=p/s087400}}
* {{Cite web|title=Hypothesis Testing |last=Wilson González |first=Georgina |coauthors=Kay Sankaran |work=Environmental Sampling & Monitoring Primer |url=http://www.webapps.cee.vt.edu/ewr/environmental/teach/smprimer/hypotest/ht.html |publisher=Virginia Tech |date=September 10, 1997 }}
* [http://www.cs.ucsd.edu/users/goguen/courses/275f00/stat.html Bayesian critique of classical hypothesis testing]
* [http://www.npwrc.usgs.gov/resource/methods/statsig/stathyp.htm Critique of classical hypothesis testing highlighting long-standing qualms of statisticians]
* Dallal GE (2007) [http://www.tufts.edu/~gdallal/LHSP.HTM The Little Handbook of Statistical Practice] (A good tutorial)
* [http://core.ecu.edu/psyc/wuenschk/StatHelp/NHST-SHIT.htm References for arguments for and against hypothesis testing]
* [http://www.wiwi.uni-muenster.de/ioeb/en/organisation/pfaff/stat_overview_table.html Statistical Tests Overview:] How to choose the correct statistical test
* [http://wasser.heliohost.org/?l=en An Interactive Online Tool to Encourage Understanding Hypothesis Testing]
* [http://simplifyingstats.com/data/HypothesisTesting.pdf A non mathematical way to understand Hypothesis Testing]
 
{{Statistics|analysis||state=expanded}}
 
{{DEFAULTSORT:Statistical Hypothesis Testing}}
[[Category:Design of experiments]]
[[Category:Hypothesis testing]]
[[Category:Psychometrics]]
[[Category:Statistical inference]]
[[Category:Logic and statistics]]
