In [[statistics]], the '''Holm&ndash;Bonferroni method''' <ref>{{cite journal
|last=Holm |first=S.
|year=1979
|title=A simple sequentially rejective multiple test procedure
|journal=Scandinavian Journal of Statistics
|volume=6 |issue=2 |pages=65&ndash;70
|mr=538597 | jstor = 4615733
}}</ref> is a method used to counteract the problem of [[multiple comparisons]]. It is intended to control the [[familywise error rate]] and offers a simple test uniformly more powerful than the [[Bonferroni correction]]. It is one of the earliest usages of stepwise algorithms in [[Familywise_error_rate#Simultaneous_inference_vs._selective_inference|simultaneous inference]].
 
It is named after [[Sture Holm]] who invented the method in 1978 and [[Carlo Emilio Bonferroni]].
 
==Introduction==
When several hypotheses are tested in the same experiment, the problem of [[Multiple_comparisons#The_problem|multiplicity]] arises: the more hypotheses we check, the higher the probability of witnessing a rare result. With 10 different hypotheses and a [[significance level]] of 0.05, there is more than a 40% chance of one or more [[type I error]]s.
The Holm–Bonferroni method is one of many ways to address this issue. It modifies the rejection criteria so that the overall probability of one or more type I errors (the [[familywise error rate]]) can be controlled at a predetermined level.
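The 40% figure can be checked directly. Assuming the ten tests are independent, the probability of at least one type I error is

```latex
\Pr(\text{at least one type I error}) \;=\; 1 - (1 - 0.05)^{10} \;\approx\; 0.401 .
```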
 
==Formulation==
The algorithm is as follows:
* Let <math>H_{1},...,H_{m}</math> be a family of hypotheses and <math>P_{1},...,P_{m}</math> the corresponding P-values.
* Sort the p-values into increasing order, <math>P_{(1)} \leq \ldots \leq P_{(m)}</math>, and let the associated hypotheses be <math>H_{(1)} \ldots H_{(m)}</math>
* For a given [[significance level]] <math>\alpha</math>, let <math>k</math> be the minimal index such that <math>P_{(k)} > \frac{\alpha}{m+1-k}</math>
* Reject the null hypotheses <math>H_{(1)} \ldots H_{(k-1)}</math> and do not reject <math>H_{(k)} \ldots H_{(m)}</math>
* If <math>k=1</math>, do not reject any of the hypotheses; if no such <math>k</math> exists, reject all hypotheses.
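The step-down procedure above can be sketched in Python (the function name and boolean return convention are illustrative, not part of the method itself):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: True where the corresponding hypothesis is rejected."""
    m = len(p_values)
    # Order the hypotheses by their p-values, smallest first.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):  # rank 0 corresponds to P_(1)
        # Step k (1-indexed) compares P_(k) against alpha / (m + 1 - k).
        if p_values[idx] > alpha / (m - rank):
            break  # stop at the first non-significant p-value; keep the rest
        reject[idx] = True
    return reject
```

For instance, `holm_bonferroni([0.01, 0.04, 0.03, 0.005])` rejects the first and fourth hypotheses, matching the worked example below.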
 
The Holm–Bonferroni method guarantees <math>FWER\leq\alpha</math>, where <math>FWER</math> is the [[familywise error rate]].
 
===Proof that Holm–Bonferroni controls the FWER===
Let <math>H_{(1)}\ldots H_{(m)}</math> be a family of hypotheses, and let <math>P_{(1)}\leq P_{(2)}\leq\ldots\leq P_{(m)}</math> be the sorted p-values. Let <math>I_{0}</math> be the set of indices corresponding to the (unknown) true null hypotheses, having <math>m_{0}</math> members, and define <math>i_{0}=m-m_{0}+1</math>. Also let <math>k</math> be the stopping index of the Holm–Bonferroni method, as described above. If none of the true null hypotheses is rejected, then <math>k</math> is smaller than <math>i_{0}</math>. Therefore, <math>FWER=Pr\left\{ k\geq i_{0}\right\}</math>.
 
Define <math>A=\left\{ P_{(i)}>\frac{\alpha}{m_{0}},\forall i\in I_{0}\right\}</math>. From the [[Bonferroni_inequalities#Bonferroni_inequalities|Bonferroni inequalities]] we get <math>Pr(A)\geq1-\alpha</math>. Since the event <math>\left\{ P_{(i_{0})}>\frac{\alpha}{m_{0}}=\frac{\alpha}{m-i_{0}+1}\right\} \subseteq A</math> occurs whenever <math>k<i_{0}</math> and <math>P_{(i)}\leq\frac{\alpha}{m-i+1}</math> holds for all <math>i<k</math>, we can conclude that <math>FWER=Pr\left\{ k\geq i_{0}\right\} \leq\alpha</math>, as required.
 
== Example ==
Four [[null hypotheses]] are tested with α = 0.05. The four unadjusted p-values are 0.01, 0.04, 0.03 and 0.005. The smallest of these is 0.005. Since this is less than 0.05/4, [[null hypothesis]] four is rejected (meaning some [[alternative hypothesis]] likely explains the data). The next smallest p-value is 0.01, which is smaller than 0.05/3, so null hypothesis one is also rejected. The next smallest p-value is 0.03. This is not smaller than 0.05/2, so this hypothesis is not rejected (no evidence has been seen that would justify preferring an alternative hypothesis at the level α = 0.05). As soon as that happens, the procedure stops, and the remaining hypothesis, with a p-value of 0.04, is therefore also not rejected. Thus, hypotheses one and four are rejected while hypotheses two and three are not. Applying the [[False_discovery_rate#Dependent_tests|approximate false discovery rate]] procedure produces the same result without first ordering the p-values and then applying a different criterion to each test.
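A short script can trace the thresholds used in this example (a sketch; the variable names are illustrative):

```python
# Step-down thresholds for the worked example: alpha = 0.05, m = 4 hypotheses.
alpha, m = 0.05, 4
p_sorted = sorted([0.01, 0.04, 0.03, 0.005])  # [0.005, 0.01, 0.03, 0.04]
for k, p in enumerate(p_sorted, start=1):
    threshold = alpha / (m + 1 - k)
    print(f"k={k}: p={p} vs threshold={threshold:.4f} -> "
          f"{'reject' if p <= threshold else 'stop'}")
# Note: although 0.04 <= 0.05 at k=4, the step-down procedure already stopped
# at k=3 (0.03 > 0.025), so the hypotheses with p = 0.03 and 0.04 are kept.
```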
 
==Extensions==
The Holm–Bonferroni method is an example of a [[closed testing procedure|closed test procedure]].<ref>{{cite journal |last=Marcus |first=R. |last2=Peritz |first2=E. |last3=Gabriel |first3=K. R. |year=1976 |title=On closed testing procedures with special reference to ordered analysis of variance |journal=[[Biometrika]] |volume=63 |issue=3 |pages=655–660 |doi=10.1093/biomet/63.3.655 }}</ref>  As such, it controls the [[familywise error rate]] for all the ''k'' hypotheses at level α in the strong sense.  Each intersection is tested using the simple Bonferroni test.
 
===Adjusted P-value===
The adjusted P-values for Holm–Bonferroni method are:
<math>\widetilde{p}_{(i)}=\max_{j\leq i}\left\{ (m-j+1)p_{(j)}\right\} _{1}</math>, where <math>\{x\}_{1}\equiv \min(x,1)</math> and <math>m</math> is the number of hypotheses.
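As a sketch (the function name is illustrative), the adjusted p-values can be computed with a running maximum over the sorted p-values:

```python
def holm_adjusted(p_values):
    """Holm adjusted p-values: p~_(i) = min(1, max_{j<=i} (m - j + 1) * p_(j))."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):  # rank 0 corresponds to p_(1)
        running_max = max(running_max, (m - rank) * p_values[idx])
        adjusted[idx] = min(running_max, 1.0)  # clip at 1, as in {x}_1
    return adjusted
```

On the earlier example, `holm_adjusted([0.01, 0.04, 0.03, 0.005])` yields 0.03, 0.06, 0.06 and 0.02, so at α = 0.05 only hypotheses one and four are rejected.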
 
===Šidák version===
{{main|Šidák correction}}
It is possible to replace <math>\frac{\alpha}{m},\frac{\alpha}{m-1},...,\frac{\alpha}{1}</math> with <math>1-(1-\alpha)^{1/m},1-(1-\alpha)^{1/(m-1)},...,1-(1-\alpha)^{1}</math>, which provides a more powerful test, but the gain in power is small.
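The two sequences of thresholds can be compared directly (a sketch; the function name is illustrative):

```python
def sidak_thresholds(m, alpha=0.05):
    """Holm-Sidak step-down thresholds: 1 - (1 - alpha)^(1/(m + 1 - k)) for k = 1..m."""
    return [1 - (1 - alpha) ** (1 / (m + 1 - k)) for k in range(1, m + 1)]

# For m = 4, the first Sidak threshold is ~0.01274, marginally above the
# Bonferroni-style 0.05/4 = 0.0125 -- hence the slightly higher power.
```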
 
===Weighted version===
Let ''p''<sub>1</sub>,..., ''p''<sub>''k''</sub> be the unadjusted p-values and let ''w''<sub>1</sub>,..., ''w''<sub>''k''</sub> be a set of corresponding positive weights that add to 1.  Without loss of generality, assume the p-values and the weights are all ordered such that ''p''<sub>1</sub>/''w''<sub>1</sub> ≤ ''p''<sub>2</sub>/''w''<sub>2</sub> ≤ ... ≤ ''p''<sub>''k''</sub>/''w''<sub>''k''</sub>{{citation needed|date=June 2012}}.  The adjusted p-value for the first hypothesis is ''q''<sub>1</sub> = min{1,&nbsp;''p''<sub>1</sub>/''w''<sub>1</sub>}.  Inductively, define the adjusted p-value for hypothesis ''i'' by ''q''<sub>''i''</sub>&nbsp;=&nbsp;min{1,&nbsp;max{''q''<sub>''i''−1</sub>,&nbsp;(''w''<sub>''i''</sub> + ... + ''w''<sub>''k''</sub>)×''p''<sub>''i''</sub>/''w''<sub>''i''</sub>}}.  A hypothesis is rejected at level α if and only if its adjusted p-value is less than α. In the earlier example using equal weights, the adjusted p-values are 0.03, 0.06, 0.06, and 0.02.  This is another way to see that using α = 0.05, only hypotheses one and four are rejected by this procedure.
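The inductive definition above translates into a short loop (a sketch; the function name is illustrative, and the result is returned in the sorted ''p''/''w'' order):

```python
def weighted_holm_adjusted(p_values, weights):
    """Weighted Holm adjusted p-values; weights are assumed positive and summing to 1."""
    # Sort hypotheses by p_i / w_i, as the definition requires.
    pairs = sorted(zip(p_values, weights), key=lambda pw: pw[0] / pw[1])
    adjusted = []
    tail_weight = 1.0  # w_i + ... + w_k for the current i
    prev = 0.0         # q_{i-1}
    for p, w in pairs:
        q = min(1.0, max(prev, tail_weight * p / w))
        adjusted.append(q)
        prev = q
        tail_weight -= w
    return adjusted
```

With equal weights of 1/4, the example's p-values give adjusted values 0.02, 0.03, 0.06, 0.06 (in sorted order), agreeing with the unweighted formula.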
 
==Alternatives and usage==
{{main|Familywise error rate#Controlling procedures}}
The Holm–Bonferroni method is more powerful than the regular Bonferroni correction and can always be used as a substitute for it. That said, it is not the most powerful [[Familywise_error_rate#Simultaneous_inference_vs._selective_inference|simultaneous inference]] test available: many other tests control the familywise error rate, and several of them are more powerful than Holm–Bonferroni. Among these is the [[Familywise_error_rate#Hochberg.27s_step-up_procedure_.281988.29|Hochberg procedure]] (1988).
If we replace the rejection criterion and look for the ''maximal'' index <math>k</math> such that <math>P_{(k)} \leq \frac{\alpha}{m+1-k}</math>, then reject <math>H_{(1)} \ldots H_{(k)}</math>, we obtain the Hochberg method, which is guaranteed to be no less powerful and is in many cases more powerful than Holm–Bonferroni. That said, the Hochberg method requires the hypotheses to be [[Independence (probability theory)|independent]] (it is also valid under some forms of positive dependence), while Holm–Bonferroni can be applied without any such conditions.
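The step-up variant differs from the earlier step-down sketch only in scanning for the ''maximal'' passing index (function name illustrative; valid under independence or certain positive dependence):

```python
def hochberg(p_values, alpha=0.05):
    """Hochberg step-up procedure: reject H_(1)..H_(k) for the maximal k
    with P_(k) <= alpha / (m + 1 - k)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha / (m + 1 - rank):
            max_k = rank  # keep the largest passing index
    reject = [False] * m
    for idx in order[:max_k]:
        reject[idx] = True
    return reject
```

On the p-values of the example above, this step-up rule rejects all four hypotheses, since the largest p-value satisfies 0.04 ≤ 0.05/1, illustrating its extra power over the step-down procedure.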
 
==Bonferroni contribution==
Carlo Emilio Bonferroni did not take part in inventing the method described here. Holm originally called the method the "sequentially rejective Bonferroni test", and it became known as Holm-Bonferroni only after some time. Holm's motives for naming his method after Bonferroni are explained in the original paper:
''"The use of the Boole inequality within multiple inference theory is usually called the Bonferroni technique, and for this reason we will call our test the sequentially rejective Bonferroni test."''
 
==See also ==
* [[Multiple comparisons]]
* [[Bonferroni correction]]
* [[Familywise error rate]]
* [[Closed testing procedure]]
 
==References==
<references/>
 
{{DEFAULTSORT:Holm-Bonferroni Method}}
[[Category:Hypothesis testing]]
[[Category:Statistical tests]]
 
[[de:Alphafehler-Kumulierung]]
[[sv:Bonferroni-Holms metod]]
