In [[statistics]], '''canonical-correlation analysis''' ('''CCA''') is a way of making sense of [[cross-covariance matrix|cross-covariance matrices]]. If we have two vectors ''X'' = (''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>) and ''Y'' = (''Y''<sub>1</sub>, ..., ''Y''<sub>''m''</sub>)  of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of the ''X''<sub>''i''</sub> and ''Y''<sub>''j''</sub> which have maximum correlation with each other.<ref>{{cite doi|10.1007/978-3-540-72244-1_14}}</ref> T. R. Knapp notes "virtually all of the commonly encountered [[parametric statistics|parametric test]]s of significance can be treated as special cases of canonical-correlation analysis, which is the general procedure for investigating the relationships between two sets of variables."<ref>{{cite doi|10.1037/0033-2909.85.2.410}}</ref> The method was first introduced by [[Harold Hotelling]] in 1936.<ref>{{cite doi|10.1093/biomet/28.3-4.321}}</ref>


==Definition==
Given two [[column vectors]] <math>X = (x_1, \dots, x_n)'</math> and <math>Y = (y_1, \dots, y_m)'</math> of [[random_variable|random variables]] with [[Wikt:finite|finite]] [[second moments]], one may define the [[cross-covariance]] <math>\Sigma _{XY} = \operatorname{cov}(X, Y) </math> to be the <math> n \times m</math> [[matrix (mathematics)|matrix]] whose <math>(i, j)</math> entry is the [[covariance]] <math>\operatorname{cov}(x_i, y_j)</math>. In practice, we would estimate the covariance matrix based on sampled data from <math>X</math> and <math>Y</math> (i.e. from a pair of data matrices).
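
For instance, the sample estimate can be formed directly from centred data matrices (a minimal [[NumPy]] sketch with synthetic data; the names <code>X_data</code> and <code>Y_data</code> are hypothetical, with one observation per row):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
p = 500                                      # number of observations
X_data = rng.normal(size=(p, 3))             # p samples of X (n = 3)
Y_data = X_data[:, :2] + 0.5 * rng.normal(size=(p, 2))  # Y correlated with X (m = 2)

Xc = X_data - X_data.mean(axis=0)            # centre each variable
Yc = Y_data - Y_data.mean(axis=0)
Sxy = Xc.T @ Yc / (p - 1)                    # the n-by-m cross-covariance estimate
Sxx = Xc.T @ Xc / (p - 1)                    # covariance of X (used in later sketches)
Syy = Yc.T @ Yc / (p - 1)                    # covariance of Y (used in later sketches)
</syntaxhighlight>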
 
Canonical-correlation analysis seeks vectors <math>a</math> and <math>b</math> such that the random variables <math>a' X</math> and <math>b' Y</math> maximize the [[correlation]] <math>\rho = \operatorname{corr}(a' X, b' Y)</math>. The random variables <math>U = a' X</math> and <math>V = b' Y</math> are the '''''first pair of canonical variables'''''. Then one seeks vectors maximizing the same correlation subject to the constraint that they are to be uncorrelated with the first pair of canonical variables; this gives the '''''second pair of canonical variables'''''. This procedure may be continued up to <math>\min\{m,n\}</math> times.
 
==Computation==
===Derivation===
Let <math>\Sigma _{XX} = \operatorname{cov}(X, X)</math> and <math>\Sigma _{YY} = \operatorname{cov}(Y, Y)</math>. The parameter to maximize is
 
:<math>
\rho = \frac{a' \Sigma _{XY} b}{\sqrt{a' \Sigma _{XX} a} \sqrt{b' \Sigma _{YY} b}}.
</math>
 
The first step is to perform a [[change of basis]], defining
 
:<math>
c = \Sigma _{XX} ^{1/2} a,
</math>
 
:<math>
d = \Sigma _{YY} ^{1/2} b.
</math>
 
Thus we have
 
:<math>
\rho = \frac{c' \Sigma _{XX} ^{-1/2} \Sigma _{XY} \Sigma _{YY} ^{-1/2} d}{\sqrt{c' c} \sqrt{d' d}}.
</math>
 
By the [[Cauchy-Schwarz inequality]], we have
 
:<math>
\left(c' \Sigma _{XX} ^{-1/2} \Sigma _{XY} \Sigma _{YY} ^{-1/2} \right) d \leq \left(c' \Sigma _{XX} ^{-1/2} \Sigma _{XY} \Sigma _{YY} ^{-1/2} \Sigma _{YY} ^{-1/2} \Sigma _{YX} \Sigma _{XX} ^{-1/2} c \right)^{1/2} \left(d' d \right)^{1/2},
</math>
 
:<math>
\rho \leq \frac{\left(c' \Sigma _{XX} ^{-1/2} \Sigma _{XY} \Sigma _{YY} ^{-1} \Sigma _{YX} \Sigma _{XX} ^{-1/2} c \right)^{1/2}}{\left(c' c \right)^{1/2}}.
</math>
 
There is equality if the vectors <math>d</math> and <math>\Sigma _{YY} ^{-1/2} \Sigma _{YX} \Sigma _{XX} ^{-1/2} c</math> are collinear. In addition, the maximum correlation is attained if <math>c</math> is the [[eigenvector]] with the maximum eigenvalue of the matrix <math>\Sigma _{XX} ^{-1/2} \Sigma _{XY} \Sigma _{YY} ^{-1} \Sigma _{YX} \Sigma _{XX} ^{-1/2}</math> (see [[Rayleigh quotient]]). The subsequent pairs are found by using [[eigenvalues]] of decreasing magnitude. Orthogonality is guaranteed by the symmetry of the correlation matrices.
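
The eigenvector characterisation can be checked numerically (a small sketch, reusing the covariance estimates <code>Sxx</code>, <code>Syy</code>, <code>Sxy</code> from the sketch in the Definition section):

<syntaxhighlight lang="python">
import numpy as np

def inv_sqrt(S):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

M = inv_sqrt(Sxx) @ Sxy @ np.linalg.inv(Syy) @ Sxy.T @ inv_sqrt(Sxx)
w, V = np.linalg.eigh(M)                     # eigenvalues in ascending order
c = V[:, -1]                                 # eigenvector of the largest eigenvalue
# The Rayleigh quotient at c equals the largest eigenvalue, i.e. the squared
# first canonical correlation:
print(c @ M @ c / (c @ c), w[-1])            # the two values agree
</syntaxhighlight>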
 
===Solution===
The solution is therefore:
* <math>c</math> is an eigenvector of <math>\Sigma _{XX} ^{-1/2} \Sigma _{XY} \Sigma _{YY} ^{-1} \Sigma _{YX} \Sigma _{XX} ^{-1/2}</math>
* <math>d</math> is proportional to <math>\Sigma _{YY} ^{-1/2} \Sigma _{YX} \Sigma _{XX} ^{-1/2} c</math>
 
Reciprocally, one also has:
* <math>d</math> is an eigenvector of <math>\Sigma _{YY} ^{-1/2} \Sigma _{YX} \Sigma _{XX} ^{-1} \Sigma _{XY} \Sigma _{YY} ^{-1/2}</math>
* <math>c</math> is proportional to <math>\Sigma _{XX} ^{-1/2} \Sigma _{XY} \Sigma _{YY} ^{-1/2} d</math>
 
Reversing the change of coordinates, we have that
* <math>a</math> is an eigenvector of <math>\Sigma _{XX} ^{-1} \Sigma _{XY} \Sigma _{YY} ^{-1} \Sigma _{YX}</math>
* <math>b</math> is an eigenvector of <math>\Sigma _{YY} ^{-1} \Sigma _{YX} \Sigma _{XX} ^{-1} \Sigma _{XY}</math>
* <math>a</math> is proportional to <math>\Sigma _{XX} ^{-1} \Sigma _{XY} b</math>
* <math>b</math> is proportional to <math>\Sigma _{YY} ^{-1} \Sigma _{YX} a</math>
 
The canonical variables are defined by:
 
:<math>U = c' \Sigma _{XX} ^{-1/2} X = a' X</math>
 
:<math>V = d' \Sigma _{YY} ^{-1/2} Y = b' Y</math>
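
Putting the pieces together (a direct NumPy transcription of the solution above, reusing <code>inv_sqrt</code> from the previous sketch; this is a sketch rather than a robust implementation, since it ignores ill-conditioned covariance blocks):

<syntaxhighlight lang="python">
import numpy as np

def cca_first_pair(Sxx, Syy, Sxy):
    """First pair of canonical vectors (a, b) and their correlation rho,
    following the eigenvector characterisation above."""
    Kx, Ky = inv_sqrt(Sxx), inv_sqrt(Syy)    # the two changes of basis
    M = Kx @ Sxy @ np.linalg.inv(Syy) @ Sxy.T @ Kx
    w, V = np.linalg.eigh(M)
    c = V[:, -1]                             # top eigenvector
    d = Ky @ Sxy.T @ Kx @ c                  # collinear choice from the equality case
    d /= np.linalg.norm(d)
    a, b = Kx @ c, Ky @ d                    # reverse the change of coordinates
    rho = float(a @ Sxy @ b / np.sqrt((a @ Sxx @ a) * (b @ Syy @ b)))
    return a, b, rho
</syntaxhighlight>

Here <code>rho</code> coincides with the square root of the largest eigenvalue of <math>M</math>, as the derivation predicts.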
 
===Implementation===
CCA can be computed using [[singular value decomposition]] on a correlation matrix;<ref>{{cite doi|10.1016/j.jcss.2011.12.025}}</ref> a sketch of this route follows the list below. It is available as a function in<ref>{{cite doi|10.1016/j.jspi.2008.10.011}}</ref>
 
* [[MATLAB]] as [http://www.mathworks.co.uk/help/stats/canoncorr.html canoncorr]
* [[R (programming language)|R]] as [http://stat.ethz.ch/R-manual/R-devel/library/stats/html/cancor.html cancor]  or in [http://factominer.free.fr/ FactoMineR]
* [[SAS language|SAS]] as [http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_cancorr_sect005.htm proc cancorr]
* [[Python (programming language)|Python]] in the library [[scikit-learn]], as [http://scikit-learn.org/stable/modules/cross_decomposition.html cross decomposition]
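
As mentioned above, the SVD route works on the whitened cross-covariance matrix <math>\Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2}</math>: its singular values are the canonical correlations, and its singular vectors give every pair <math>(c, d)</math> at once (a minimal sketch, reusing <code>inv_sqrt</code> and the covariance blocks from the earlier sketches):

<syntaxhighlight lang="python">
import numpy as np

K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)   # whitened cross-covariance
C, rhos, Dt = np.linalg.svd(K)            # rhos holds all canonical correlations
A = inv_sqrt(Sxx) @ C                     # columns a_i (only the first min(m, n) are used)
B = inv_sqrt(Syy) @ Dt.T                  # columns b_i
</syntaxhighlight>

In scikit-learn, <code>sklearn.cross_decomposition.CCA</code> fits comparable quantities via <code>CCA(n_components=k).fit(X_data, Y_data)</code>, though it estimates them iteratively rather than through a single SVD.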
 
==Hypothesis testing==
Each row can be tested for significance with the following method. Since the correlations are sorted, saying that row <math>i</math> is zero implies all further correlations are also zero. Suppose we have <math>p</math> independent observations in a sample and let <math>\widehat{\rho}_i</math> be the estimated correlation for <math>i = 1,\dots, \min\{m,n\}</math>. For the <math>i</math>th row, the test statistic is:
 
:<math>\chi ^2 = - \left( p - 1 - \frac{1}{2}(m + n + 1)\right) \ln \prod _ {j = i} ^{\min\{m,n\}} (1 - \widehat{\rho}_j^2),</math>
 
which is asymptotically distributed as a [[chi-squared distribution|chi-squared]] with <math>(m - i + 1)(n - i + 1)</math> [[degrees of freedom (statistics)|degrees of freedom]] for large <math>p</math>.<ref>{{Cite book
| author = [[Kanti V. Mardia]], J. T. Kent and J. M. Bibby
| title = Multivariate Analysis
| year = 1979
| publisher = [[Academic Press]]
}}</ref> Since all the correlations from <math> \min\{m,n\}</math> to <math>p</math> are logically zero (and estimated that way as well), the product of the terms after this point is irrelevant.
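
For concreteness, the statistic can be computed for every row as follows (a minimal sketch, assuming <code>rho_hat</code> holds the <math>\min\{m,n\}</math> estimated canonical correlations, e.g. the <code>rhos</code> array from the SVD sketch above):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

def cca_significance(rho_hat, p, m, n):
    """Chi-squared statistic and p-value for each row i, testing the
    hypothesis that the i-th and all subsequent correlations are zero."""
    stats, pvals = [], []
    for i in range(1, min(m, n) + 1):            # 1-based row index, as in the text
        lam = np.prod(1.0 - rho_hat[i - 1:] ** 2)
        stat = -(p - 1 - 0.5 * (m + n + 1)) * np.log(lam)
        df = (m - i + 1) * (n - i + 1)           # degrees of freedom for row i
        stats.append(stat)
        pvals.append(chi2.sf(stat, df))          # survival function of chi-squared
    return np.array(stats), np.array(pvals)

# e.g. with the synthetic data above: cca_significance(rhos, p=500, m=2, n=3)
</syntaxhighlight>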
 
==Practical uses==
A typical use for canonical correlation in the experimental context is to take two sets of variables and see what the two sets have in common. For example, in psychological testing, one could take two well-established multidimensional [[personality tests]] such as the [[Minnesota Multiphasic Personality Inventory]] (MMPI-2) and the [[Neuroticism Extraversion Openness Personality Inventory|NEO]]. By seeing how the MMPI-2 factors relate to the NEO factors, one could gain insight into what dimensions were common between the tests and how much variance was shared. For example, one might find that an [[Extraversion and introversion|extraversion]] or [[neuroticism]] dimension accounted for a substantial amount of shared variance between the two tests.
 
One can also use canonical-correlation analysis to produce a model equation which relates two sets of variables, for example a set of performance measures and a set of explanatory variables, or a set of outputs and a set of inputs. Constraints can be imposed on such a model to ensure it reflects theoretical requirements or intuitively obvious conditions. This type of model is known as a maximum correlation model.<ref>{{cite doi|10.1111/1467-9884.00195}}</ref>
 
Visualization of the results of canonical correlation is usually through bar plots of the coefficients of the two sets of variables for the pairs of canonical variates showing significant correlation. Some authors suggest that they are best visualized by plotting them as heliographs, a circular format with ray-like bars, with each half representing the two sets of variables.<ref>{{cite doi|10.1007/11783183_11}}</ref>
 
==Examples==
Let <math>X = x_1</math> with zero [[expected value]], i.e., <math>\operatorname{E}(X)=0</math>. If <math>Y = X</math>, i.e., <math>X</math> and <math>Y</math> are perfectly correlated, then, e.g., <math>a=1</math> and <math>b=1</math>, so that the first (and only in this example) pair of canonical variables is <math>U = X</math> and <math>V = Y =X</math>. If <math>Y = -X</math>, i.e., <math>X</math> and <math>Y</math> are perfectly anticorrelated, then, e.g., <math>a=1</math> and <math>b=-1</math>, so that the first (and only in this example) pair of canonical variables is <math>U = X</math> and <math>V = -Y =X</math>. We notice that in both cases <math>U =V</math>, which illustrates that the canonical-correlation analysis treats correlated and anticorrelated variables similarly.
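
Both cases are easy to check numerically (a toy sketch with synthetic samples):

<syntaxhighlight lang="python">
import numpy as np

x = np.random.default_rng(1).normal(size=1000)
u = 1.0 * x                      # U = aX with a = 1
v = -1.0 * (-x)                  # V = bY with Y = -X and b = -1, so V = X = U
print(np.corrcoef(u, v)[0, 1])   # 1.0: anticorrelated variables are treated alike
</syntaxhighlight>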
 
==Connection to principal angles==
Assuming that <math>X = (x_1, \dots, x_n)'</math> and <math>Y = (y_1, \dots, y_m)'</math> have zero [[expected value]]s, i.e., <math>\operatorname{E}(X)=\operatorname{E}(Y)=0</math>, their [[covariance]] matrices <math>\Sigma _{XX} =\operatorname{Cov}(X,X) = \operatorname{E}[X X']</math> and <math>\Sigma _{YY} =\operatorname{Cov}(Y,Y) = \operatorname{E}[Y Y']</math> can be viewed as [[Gram matrix|Gram matrices]] in an [[inner product]] for the entries of <math>X</math> and <math>Y</math>, respectively. In this interpretation, the random variables (the entries <math>x_i</math> of <math>X</math> and <math>y_j</math> of <math>Y</math>) are treated as elements of a vector space with an inner product given by the [[covariance]] <math>\operatorname{cov}(x_i, y_j)</math>; see [[Covariance#Relationship_to_inner_products]].
 
The definition of the canonical variables <math>U</math> and <math>V</math> is then equivalent to the definition of [[principal angles|principal vectors]] for the pair of subspaces spanned by the entries of <math>X</math> and <math>Y</math> with respect to this [[inner product]]. The canonical correlations <math>\operatorname{corr}(U,V)</math> are then equal to the [[cosine]]s of the [[principal angles]].
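
Numerically, the principal angles between the column spans of the centred data matrices can be obtained with <code>scipy.linalg.subspace_angles</code>, and their cosines reproduce the sample canonical correlations (a sketch reusing <code>Xc</code>, <code>Yc</code>, and <code>rhos</code> from the earlier sketches):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import subspace_angles

angles = subspace_angles(Xc, Yc)         # principal angles between the two spans
print(np.sort(np.cos(angles))[::-1])     # their cosines, largest first
print(rhos)                              # the canonical correlations agree
</syntaxhighlight>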
 
==See also==
*[[Generalized Canonical Correlation]]
*[[Multilinear subspace learning]]
*[[RV coefficient]]
*[[Principal angles]]
*[[Principal component analysis]]
*[[Regularized canonical correlation analysis]]
*[[Singular value decomposition]]
 
==References==
{{Reflist}}
 
==External links==
*[http://www.qmrg.org.uk/files/2008/12/3-understanding-canonical-correlation-analysis1.pdf Understanding canonical correlation analysis] (Concepts and Techniques in Modern Geography)
* {{cite doi|10.1162/0899766042321814}}
*[http://mpra.ub.uni-muenchen.de/12796/ A note on the ordinal canonical-correlation analysis of two sets of ranking scores] (also provides a FORTRAN program), in Journal of Quantitative Economics 7(2), 2009, pp. 173–199
*[http://ssrn.com/abstract=1331886 Representation-Constrained Canonical Correlation Analysis: A Hybridization of Canonical Correlation and Principal Component Analyses] (also provides a FORTRAN program), in Journal of Applied Economic Sciences 4(1), 2009, pp. 115–124
 
[[Category:Covariance and correlation]]
[[Category:Multivariate statistics]]
