{{context|date=April 2013}}
In [[statistics]], '''sufficient dimension reduction (SDR)''' is a paradigm for analyzing data that combines the ideas of [[dimension reduction]] with the concept of [[sufficient statistic|sufficiency]].
 
Dimension reduction has long been a primary goal of [[regression analysis]]. Given a response variable ''y'' and a ''p''-dimensional predictor vector <math>\textbf{x}</math>, regression analysis aims to study the distribution of <math>y|\textbf{x}</math>, the [[conditional distribution]] of <math>y</math> given <math>\textbf{x}</math>. A '''dimension reduction''' is a function <math>R(\textbf{x})</math> that maps <math>\textbf{x}</math> to a subset of <math>\mathbb{R}^k</math>, ''k''&nbsp;<&nbsp;''p'', thereby reducing the [[dimension (vector space)|dimension]] of <math>\textbf{x}</math>.<ref name="Cook & Adragni:2009">Cook & Adragni (2009) [http://rsta.royalsocietypublishing.org/content/367/1906/4385.full ''Sufficient Dimension Reduction and Prediction in Regression''] In: ''Philosophical Transactions of the Royal Society A: Physical, Mathematical and Engineering Sciences'', 367(1906): 4385–4405</ref> For example, <math>R(\textbf{x})</math> may be one or more [[linear combinations]] of <math>\textbf{x}</math>.
 
A dimension reduction <math>R(\textbf{x})</math> is said to be '''sufficient''' if the distribution of <math>y|R(\textbf{x})</math> is the same as that of <math>y|\textbf{x}</math>. In other words, no information about the regression is lost in reducing the dimension of <math>\textbf{x}</math> if the reduction is sufficient.<ref name="Cook & Adragni:2009" />
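
The sufficiency condition can be made concrete with a small simulation. The following is a minimal sketch in Python; the toy model, sample size, and variable names are illustrative assumptions, not taken from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

# Toy model (assumed for illustration): y depends on the 5-dimensional
# predictor x only through the single linear combination b'x, so the
# reduction R(x) = b'x is sufficient by construction.
rng = np.random.default_rng(0)
n, p = 500, 5
b = np.array([1.0, -1.0, 0.0, 0.0, 0.0])

x = rng.normal(size=(n, p))
r = x @ b                                  # the reduction R(x) = b'x
y = np.sin(r) + 0.1 * rng.normal(size=n)   # y | x has the same law as y | R(x)

# Every feature of the conditional distribution of y given x, e.g. the
# conditional mean E[y | x] = sin(b'x), is a function of R(x) alone, so
# plotting y against r loses no regression information.
</syntaxhighlight>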
 
== Graphical motivation ==
In a regression setting, it is often useful to summarize the distribution of <math>y|\textbf{x}</math> graphically. For instance, one may consider a [[scatter plot]] of <math>y</math> versus one or more of the predictors. A scatter plot that contains all available regression information is called a '''sufficient summary plot'''.
 
When <math>\textbf{x}</math> is high-dimensional, particularly when <math>p\geq 3</math>, it becomes increasingly challenging to construct and visually interpret sufficient summary plots without reducing the data. Even three-dimensional scatter plots must be viewed via a computer program, and the third dimension can only be visualized by rotating the coordinate axes. However, if there exists a sufficient dimension reduction <math>R(\textbf{x})</math> with small enough dimension, a sufficient summary plot of <math>y</math> versus <math>R(\textbf{x})</math> may be constructed and visually interpreted with relative ease.
 
Hence sufficient dimension reduction allows for graphical intuition about the distribution of <math>y|\textbf{x}</math>, which might not otherwise be available for high-dimensional data.
 
Most graphical methodology focuses on dimension reduction involving linear combinations of <math>\textbf{x}</math>. The rest of this article deals only with such reductions.
 
== Dimension reduction subspace ==
Suppose <math>R(\textbf{x}) = A^T\textbf{x}</math> is a sufficient dimension reduction, where <math>A</math> is a <math>p\times k</math> [[matrix (mathematics)|matrix]] with [[rank (linear algebra)|rank]] <math>k\leq p</math>. Then the regression information for <math>y|\textbf{x}</math> can be inferred by studying the distribution of <math>y|A^T\textbf{x}</math>, and the plot of <math>y</math> versus <math>A^T\textbf{x}</math> is a sufficient summary plot.
 
[[Without loss of generality]], only the [[vector space|space]] [[linear span|spanned]] by the columns of <math>A</math> need be considered. Let <math>\eta</math> be a [[basis (linear algebra)|basis]] for the column space of <math>A</math>, and let the space spanned by <math>\eta</math> be denoted by <math>\mathcal{S}(\eta)</math>. It follows from the definition of a sufficient dimension reduction that
 
: <math>F_{y|\textbf{x}} = F_{y|\eta^T\textbf{x}},</math>
 
where <math>F</math> denotes the appropriate [[cumulative distribution function|distribution function]]. Another way to express this property is
 
: <math>y\perp\!\!\!\perp\textbf{x}\,|\,\eta^T\textbf{x},</math>
 
or <math>y</math> is [[conditional independence|conditionally independent]] of <math>\textbf{x}</math>, given <math>\eta^T\textbf{x}</math>. Then the subspace <math>\mathcal{S}(\eta)</math> is defined to be a '''dimension reduction subspace (DRS)'''.<ref name="Cook:1998">Cook, R.D. (1998)  ''Regression Graphics: Ideas for Studying Regressions Through Graphics'', Wiley ISBN 0471193658 </ref>
 
=== Structural dimensionality ===
For a regression <math>y|\textbf{x}</math>, the '''structural dimension''', <math>d</math>, is the smallest number of distinct linear combinations of <math>\textbf{x}</math> necessary to preserve the conditional distribution of <math>y|\textbf{x}</math>. In other words, the smallest dimension reduction that is still sufficient maps <math>\textbf{x}</math> to a subset of <math>\mathbb{R}^d</math>. The corresponding DRS will be ''d''-dimensional.<ref name="Cook:1998" />
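
As an illustrative example (not drawn from the cited sources), suppose

: <math>y = (\beta_1^T\textbf{x})^2 + \beta_2^T\textbf{x} + \varepsilon,\text{ where }\varepsilon\perp\!\!\!\perp\textbf{x},</math>

with <math>\beta_1</math> and <math>\beta_2</math> linearly independent. The distribution of <math>y|\textbf{x}</math> then depends on <math>\textbf{x}</math> only through the pair <math>(\beta_1^T\textbf{x}, \beta_2^T\textbf{x})</math>, and in general no single linear combination carries the same information, so the structural dimension is <math>d = 2</math>.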
 
=== Minimum dimension reduction subspace ===
A subspace <math>\mathcal{S}</math> is said to be a '''minimum DRS''' for <math>y|\textbf{x}</math> if it is a DRS and its dimension is less than or equal to that of all other DRSs for <math>y|\textbf{x}</math>. A minimum DRS <math>\mathcal{S}</math> is not necessarily unique, but its dimension is equal to the structural dimension <math>d</math> of <math>y|\textbf{x}</math>, by definition.<ref name="Cook:1998" />
 
If <math>\mathcal{S}</math> has basis <math>\eta</math> and is a minimum DRS, then a plot of ''y'' versus <math>\eta^T\textbf{x}</math> is a '''minimal sufficient summary plot''', and it is (''d''&nbsp;+&nbsp;1)-dimensional.
 
== Central subspace ==
If a subspace <math>\mathcal{S}</math> is a DRS for <math>y|\textbf{x}</math>, and if <math>\mathcal{S}\subseteq\mathcal{S}_{drs}</math> for all other DRSs <math>\mathcal{S}_{drs}</math>, then it is a '''central dimension reduction subspace''', or simply a '''central subspace''', and it is denoted by <math>\mathcal{S}_{y|x}</math>. In other words, a central subspace for <math>y|\textbf{x}</math> exists [[if and only if]] the intersection <math>\cap\mathcal{S}_{drs}</math> of all dimension reduction subspaces is also a dimension reduction subspace, and that intersection is the central subspace <math>\mathcal{S}_{y|x}</math>.<ref name="Cook:1998" />
 
The central subspace <math>\mathcal{S}_{y|x}</math> does not necessarily exist because the intersection <math>\cap\mathcal{S}_{drs}</math> is not necessarily a DRS. However, if <math>\mathcal{S}_{y|x}</math> ''does'' exist, then it is also the unique minimum dimension reduction subspace.<ref name="Cook:1998" />
 
=== Existence of the central subspace ===
While the existence of the central subspace <math>\mathcal{S}_{y|x}</math> is not guaranteed in every regression situation, there are some rather broad conditions under which its existence follows directly. For example, consider the following proposition from Cook (1998):
 
: Let <math>\mathcal{S}_1</math> and <math>\mathcal{S}_2</math> be dimension reduction subspaces for <math>y|\textbf{x}</math>. If <math>\textbf{x}</math> has [[probability density function|density]] <math>f(a) > 0</math> for all <math>a\in\Omega_x</math> and <math>f(a) = 0</math> everywhere else, where <math>\Omega_x</math> is [[convex set|convex]], then the intersection <math>\mathcal{S}_1\cap\mathcal{S}_2</math> is also a dimension reduction subspace.
 
It follows from this proposition that the central subspace <math>\mathcal{S}_{y|x}</math> exists for such <math>\textbf{x}</math>.<ref name="Cook:1998" />
 
== Methods for dimension reduction ==
There are many existing methods for dimension reduction, both graphical and numeric. For example, '''[[sliced inverse regression]] (SIR)''' and '''sliced average variance estimation (SAVE)''' were introduced in the 1990s and continue to be widely used.<ref name="Li:1991">Li, K-C. (1991) [http://www.jstor.org/stable/2290563 ''Sliced Inverse Regression for Dimension Reduction''] In: ''[[Journal of the American Statistical Association]]'', 86(414): 316–327</ref> Although SIR was originally designed to estimate an ''effective dimension reducing subspace'', it is now understood that it estimates only the central subspace, which is generally different.
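
To make the SIR procedure concrete, the following is a minimal Python sketch of its core steps as described by Li (1991): standardize the predictors, slice the observations by the value of <math>y</math>, and extract the leading eigenvectors of the weighted covariance of the slice means. The function name, defaults, and tuning choices are illustrative assumptions, not a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

def sir(x, y, n_slices=10, d=1):
    """Sliced inverse regression: a minimal sketch, assuming the
    predictor sample covariance is positive definite."""
    n, p = x.shape

    # Standardize the predictors: z = Sigma^{-1/2} (x - mean).
    xc = x - x.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    root_inv = evecs @ np.diag(evals ** -0.5) @ evecs.T
    z = xc @ root_inv

    # Slice the data by the order of y and average z within each slice.
    slices = np.array_split(np.argsort(y), n_slices)
    m = np.zeros((p, p))
    for idx in slices:
        mean_h = z[idx].mean(axis=0)
        m += (len(idx) / n) * np.outer(mean_h, mean_h)

    # Leading eigenvectors of the slice-mean covariance, back-transformed
    # to the original scale, estimate a basis for (a subspace of) the
    # central subspace.
    _, v = np.linalg.eigh(m)
    return root_inv @ v[:, ::-1][:, :d]
</syntaxhighlight>

For a single-index model such as the toy example in the introduction, <code>sir(x, y, d=1)</code> returns a vector approximately proportional to the true direction.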
 
More recent methods for dimension reduction include [[likelihood function|likelihood]]-based sufficient dimension reduction,<ref name="Cook & Forzani(2009)">Cook, R.D. and Forzani, L. (2009) ''Likelihood-Based Sufficient Dimension Reduction'' In: [[Journal of the American Statistical Association]], 104(485): 197–208</ref> estimating the central subspace based on the inverse third [[moment (mathematics)|moment]] (or ''k''th moment),<ref name="Yin & Cook:2003">Yin, X. and Cook, R.D. (2003) [http://www.jstor.org/stable/30042023 ''Estimating Central Subspaces via Inverse Third Moments''] In: ''[[Biometrika]]'', 90(1): 113–125</ref> estimating the central solution space,<ref name="Li & Dong:2009">Li, B. and Dong, Y.D. (2009) [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1239369022 ''Dimension Reduction for Nonelliptically Distributed Predictors''] In: ''[[Annals of Statistics]]'', 37(3): 1272–1298</ref> and graphical regression.<ref name="Cook:1998" /> For more details on these and other methods, consult the statistical literature.
 
[[Principal components analysis]] '''(PCA)''' and similar methods for dimension reduction are not based on the sufficiency principle.
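
The distinction matters in practice: PCA ranks directions by predictor variance alone, without reference to <math>y</math>, so it can discard exactly the direction that drives the regression. The following toy contrast is a minimal sketch under assumed simulated data.

<syntaxhighlight lang="python">
import numpy as np

# Assumed toy data: the first coordinate is high-variance noise, while y
# depends only on the low-variance second coordinate.
rng = np.random.default_rng(2)
n = 1000
x = np.column_stack([rng.normal(scale=5.0, size=n),
                     rng.normal(scale=1.0, size=n)])
y = x[:, 1] + 0.1 * rng.normal(size=n)

# First principal component: dominated by the high-variance coordinate,
# roughly (1, 0), which carries no regression information here.
_, _, vt = np.linalg.svd(x - x.mean(axis=0), full_matrices=False)
print("first PC:     ", vt[0])

# A sufficiency-motivated direction (here estimated by OLS) recovers
# roughly (0, 1), the direction y actually depends on.
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)
print("OLS direction:", coef[1:] / np.linalg.norm(coef[1:]))
</syntaxhighlight>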
 
=== Example: linear regression ===
Consider the regression model
 
: <math>y = \alpha + \beta^T\textbf{x} + \varepsilon,\text{ where }\varepsilon\perp\!\!\!\perp\textbf{x}.</math>
 
Note that the distribution of <math>y|\textbf{x}</math> is the same as the distribution of <math>y|\beta^T\textbf{x}</math>. Hence, the span of <math>\beta</math> is a dimension reduction subspace. Also, the span of <math>\beta</math> is 1-dimensional (unless <math>\beta=\textbf{0}</math>), so the structural dimension of this regression is <math>d=1</math>.
 
The [[ordinary least squares|OLS]] estimate <math>\hat{\beta}</math> of <math>\beta</math> is [[consistent estimator|consistent]], and so the span of <math>\hat{\beta}</math> is a consistent estimator of <math>\mathcal{S}_{y|x}</math>. The plot of <math>y</math> versus <math>\hat{\beta}^T\textbf{x}</math> is a sufficient summary plot for this regression.
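
A brief simulation makes this concrete. The sketch below uses assumed simulated data; the coefficient values and sample size are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

# Assumed linear model y = alpha + beta'x + noise, as above.
rng = np.random.default_rng(1)
n, p = 400, 4
beta = np.array([2.0, 0.0, -1.0, 0.5])
x = rng.normal(size=(n, p))
y = 1.0 + x @ beta + rng.normal(size=n)

# OLS fit: append an intercept column and solve the least-squares problem.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_hat = coef[1:]          # its span estimates the central subspace

# Sufficient summary plot: y against the single reduced predictor.
plt.scatter(x @ beta_hat, y, s=8)
plt.xlabel(r"$\hat{\beta}^T x$")
plt.ylabel("y")
plt.show()
</syntaxhighlight>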
 
== See also ==
*[[Dimension reduction]]
*[[Sliced inverse regression]]
*[[Principal component analysis]]
*[[Linear discriminant analysis]]
*[[Curse of dimensionality]]
*[[Multilinear subspace learning]]
 
== Notes ==
{{Reflist}}
 
== References ==
{{refbegin}}
*Cook, R.D. (1998) ''Regression Graphics: Ideas for Studying Regressions through Graphics'', Wiley Series in Probability and Statistics. [http://www.stat.umn.edu/RegGraph/ Regression Graphics].
*Cook, R.D. and Adragni, K.P. (2009) "Sufficient Dimension Reduction and Prediction in Regression", [[Philosophical Transactions of the Royal Society A: Physical, Mathematical and Engineering Sciences]], 367(1906), 4385–4405. [http://rsta.royalsocietypublishing.org/content/367/1906/4385.full Full-text]
*Cook, R.D. and Weisberg, S. (1991) "Sliced Inverse Regression for Dimension Reduction: Comment", [[Journal of the American Statistical Association]], 86(414), 328–332. [http://www.jstor.org/stable/2290564 Jstor]
*Li, K-C. (1991) "Sliced Inverse Regression for Dimension Reduction", [[Journal of the American Statistical Association]], 86(414), 316–327. [http://www.jstor.org/stable/2290563 Jstor]
{{refend}}
 
== External links ==
* [http://www.stat.umn.edu/~dennis/SDR/ Sufficient Dimension Reduction]
 
[[Category:Multivariate statistics]]
[[Category:Dimension reduction]]
