[[Image:Concept chart for EM.png|thumb|alt=concept chart of EigenMoment algorithm|Signal space is transformed into moment space, i.e. Geometric Moments, then it is transformed into noise space in which axes with lowest rate of noise are retained and finally transformed into feature space]]
'''EigenMoments'''<ref>Pew-Thian Yap, Raveendran Paramesran, Eigenmoments, Pattern Recognition, Volume 40, Issue 4, April 2007, Pages 1234-1244, ISSN 0031-3203, doi:10.1016/j.patcog.2006.07.003.</ref> are a set of [[orthogonal]], noise-robust [[Moment (mathematics)|moments]] that are invariant to rotation, scaling and translation and sensitive to the distribution of the signal. They are used in [[signal processing]] and [[computer vision]] as descriptors of a signal or image, and the resulting descriptors can later be used for [[Classification in machine learning|classification]] purposes.
 
They are obtained by performing [[orthogonalization]], via [[Eigenvalues and eigenvectors|eigen analysis]], on [[Image moment|geometric moments]].<ref name="hu">M. K. Hu, "Visual Pattern Recognition by Moment Invariants", IRE Trans. Info. Theory, vol. IT-8, pp.179&ndash;187, 1962</ref>
 
== Framework summary ==
EigenMoments are computed by performing eigen analysis on the moment space of an image, maximizing the [[Signal to Noise Ratio|signal-to-noise ratio]] of the resulting feature space in the form of a [[Rayleigh quotient]].
 
This approach has several benefits in image processing applications:
# The dependency of the moments in the moment space on the distribution of the images being transformed ensures decorrelation of the final feature space after eigen analysis of the moment space.
# The ability of EigenMoments to take the distribution of the image into account makes them more versatile and adaptable for different genres of images.
# The generated moment kernels are orthogonal, and therefore analysis in the moment space becomes easier. Transformation with orthogonal moment kernels into the moment space is analogous to projecting the image onto a number of orthogonal axes.
# Noisy components can be removed. This makes EigenMoments robust for classification applications.
# Optimal information compaction can be obtained, and therefore only a small number of moments is needed to characterize the images.
 
== Problem formulation ==
Assume that a signal vector <math> s \in \mathcal{R}^n </math> is drawn from a distribution with correlation <math> C \in \mathcal{R}^{n \times n} </math>, i.e. <math>C=E[ss^T]</math>, where <math>E[\cdot]</math> denotes the expected value.
 
The dimension of the signal space, <math>n</math>, is often too large for practical applications such as pattern classification, so the signal space needs to be transformed into a space of lower dimensionality.
 
This is performed by a two-step linear transformation:
 
<math>q=W^T X^T s,</math>
 
where <math>q=[q_1,...,q_k]^T \in \mathcal{R}^k</math> is the transformed signal, <math>X=[x_1,...,x_n]^T \in \mathcal{R}^{n \times m}</math> a fixed transformation matrix which transforms the signal into the moment space, and <math>W=[w_1,...,w_k] \in \mathcal{R}^{m \times k}</math> the transformation matrix to be determined by maximizing the [[Signal-to-noise ratio|SNR]] of the feature space occupied by <math>q</math>. For the case of Geometric Moments, <math>X</math> would be the monomials. If <math>m=k=n</math>, a full-rank transformation would result; however, usually we have <math>m \leq n</math> and <math>k \leq m</math>. This is especially the case when <math>n</math> is large.
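For illustration, a minimal numerical sketch of this two-step transformation could look as follows in Python; the sampled signal, the choice of monomials for <math>X</math>, the dimensions and the random placeholder <math>W</math> are assumptions made purely for the example, since the actual <math>W</math> is derived in the remainder of this section.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical discretisation: sample a 1D signal s(x) on n points in [-1, 1].
n, m, k = 100, 6, 4                       # signal, moment and feature dimensions (illustrative)
x = np.linspace(-1.0, 1.0, n)
s = np.exp(-4.0 * x**2)                   # an arbitrary example signal

# X maps the signal into the moment space; here its columns are the monomials 1, x, ..., x^(m-1),
# so X^T s gives the geometric moments of s.
X = np.vander(x, m, increasing=True)      # shape (n, m)
M = X.T @ s                               # geometric moments of s

# W (m x k) is determined later by maximising the SNR; here a random orthonormal placeholder.
W = np.linalg.qr(np.random.randn(m, k))[0]
q = W.T @ X.T @ s                         # q = W^T X^T s, the transformed (feature-space) signal
print(q.shape)                            # (k,)
</syntaxhighlight>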
 
We seek the <math>W</math> that maximizes the [[Signal-to-noise ratio|SNR]] of the feature space,
 
<math> SNR_{transform} = \frac{w^TX^TCXw}{w^TX^TNXw},</math>
 
where N is the correlation matrix of the noise signal. The problem can thus be formulated as
 
<math>\{w_1,...,w_k\}=\underset{w}{\operatorname{arg\,max}}\, \frac{w^TX^TCXw}{w^TX^TNXw}</math>
 
subject to constraints:
 
<math>w_i^T X^T NX w_j=\delta_{ij},</math> where <math>\delta_{ij}</math> is the [[Kronecker delta]].
 
It can be observed that this maximization is a [[Rayleigh quotient]]: letting <math>A=X^TCX</math> and <math>B=X^TNX</math>, the problem can be written as
 
<math>\{w_1,...,w_k\}=\underset{w}{\operatorname{arg\,max}} \frac{w^TAw}{w^TBw}</math>, <math>w_i^TBw_j=\delta_{ij}.</math>
 
=== Rayleigh quotient ===
Optimization of the [[Rayleigh quotient]]<ref>T. De Bie, N. Cristianini, R. Rosipal, Eigenproblems in pattern recognition, in: E. Bayro-Corrochano (Ed.), Handbook of Computational Geometry for Pattern Recognition, Computer Vision, Neurocomputing and Robotics, Springer, Heidelberg, 2004.</ref><ref>G. Strang, Linear Algebra and Its Applications, second ed., Academic Press, New York, 1980.</ref> has the form:
 
<math> \max_w R(w)= \max_w \frac{w^{T}Aw}{w^{T}Bw} </math>
 
where <math>A</math> and <math>B</math> are both [[Symmetry|symmetric]], and <math>B</math> is [[Positive-definite matrix|positive definite]] and therefore [[Invertible matrix|invertible]].
Scaling <math>w</math> does not change the value of the objective function, so an additional scalar constraint <math>w^{T}Bw=1</math> can be imposed on <math>w</math> without losing any solution when the objective function is optimized.
 
This constrained optimization problem can be solved using a [[Lagrangian multiplier|Lagrange multiplier]]:
 
<math> \max_w {w^{T}Aw}</math> subject to <math>{w^{T}Bw}=1</math>
 
<math> \max_w \mathcal{L}(w) = \max_w (w^{T}Aw-\lambda w^{T}Bw)</math>
 
Equating the first derivative to zero, we obtain
 
<math>Aw=\lambda Bw</math>
 
which is an instance of the [[Eigendecomposition_of_a_matrix#Generalized_eigenvalue_problem|generalized eigenvalue problem]] (GEP). The GEP has the form:
 
<math> Aw=\lambda Bw </math>
 
For any pair <math>(w,\lambda)</math> that solves the above equation, <math>w</math> is called a [[Eigendecomposition_of_a_matrix#Generalized_eigenvalue_problem|generalized eigenvector]] and <math> \lambda </math> a [[Eigendecomposition_of_a_matrix#Generalized_eigenvalue_problem|generalized eigenvalue]].
 
Finding the <math>w</math> and <math>\lambda</math> that satisfy this equation produces the result that optimizes the [[Rayleigh quotient]].
 
One way of maximizing the [[Rayleigh quotient]] is by solving the [[Eigendecomposition_of_a_matrix#Generalized_eigenvalue_problem|generalized eigenvalue problem]]. [[Dimension reduction]] can then be performed by simply choosing the first <math>k</math> components <math>w_i</math>, <math>i=1,...,k</math>, with the highest values of <math>R(w)</math> out of the <math>m</math> components, and discarding the rest. This transformation can be interpreted as [[Rotation|rotating]] and [[Scaling (geometry)|scaling]] the moment space, transforming it into a feature space with maximized [[Signal-to-noise ratio|SNR]]; the first <math>k</math> components are therefore the components with the <math>k</math> highest [[Signal-to-noise ratio|SNR]] values.
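A minimal sketch of this approach, assuming <math>A</math> and <math>B</math> have already been formed as above and using SciPy's generalized symmetric eigensolver <code>scipy.linalg.eigh</code> (the function name below is hypothetical):
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh

def eigenmoment_transform(A, B, k):
    """Solve A w = lambda B w and keep the k eigenvectors with the largest
    Rayleigh quotient R(w); eigh normalises them so that w_i^T B w_j = delta_ij."""
    eigvals, eigvecs = eigh(A, B)          # generalized symmetric eigenproblem, ascending eigenvalues
    order = np.argsort(eigvals)[::-1]      # largest eigenvalues (= largest SNR) first
    return eigvecs[:, order[:k]]           # columns w_1, ..., w_k
</syntaxhighlight>
The returned columns satisfy <math>w_i^TBw_j=\delta_{ij}</math> by construction, so the constraint above does not need to be enforced separately.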
 
Another way to arrive at this solution is to use [[Diagonalizable_matrix#Simultaneous_diagonalization|simultaneous diagonalization]] instead of solving the [[Eigendecomposition_of_a_matrix#Generalized_eigenvalue_problem|generalized eigenvalue problem]] directly.
 
=== Simultaneous diagonalization ===
 
# Let <math>A=X^TCX</math> and <math>B=X^TNX</math> as defined earlier. We can write <math>W</math> as the product of two transformation matrices:
#: <math>W=W_1W_2.</math>
# <math>W_1</math> can be found by first diagonalizing <math>B</math>:
#: <math>P^TBP=D_B,</math>
#: where <math>D_B</math> is a diagonal matrix with its entries sorted in increasing order. Since <math>B</math> is positive definite, <math>D_B>0</math>. We can discard the large [[Eigenvalues and eigenvectors|eigenvalues]] and retain those close to zero, since they correspond to directions in which the energy of the noise is close to zero; the [[Eigenvalues and eigenvectors|eigenvectors]] associated with large eigenvalues can be discarded at this stage.
#: Let <math>\hat P</math> be the first <math>k</math> columns of <math>P</math>; then <math>\hat{P}^TB\hat P=\hat{D_B}</math>, where <math>\hat{D_B}</math> is the <math>k \times k</math> principal submatrix of <math>D_B</math>.
# Let
#: <math>W_1=\hat{P} \hat{D_B}^{-1/2},</math>
#: and hence
#: <math>W_1^T B W_1=(\hat P \hat{D_B}^{-1/2})^TB(\hat P \hat{D_B}^{-1/2})=I.</math>
#: <math>W_1</math> whitens <math>B</math> and reduces the dimensionality from <math>m</math> to <math>k</math>. The transformed space occupied by <math>q'=W_1^TX^Ts</math> is called the noise space.
# Then, we [[Diagonalizable matrix#Diagonalization|diagonalize]] <math>W_1^T A W_1</math>:
#: <math>W_2^T W_1^T A W_1 W_2 = D_A,</math>
#: where <math>W_2^T W_2 =I</math>. <math>D_A</math> is the matrix with the [[Eigenvalues and eigenvectors|eigenvalues]] of <math>W_1^T A W_1</math> on its diagonal. We may retain all the [[Eigenvalues and eigenvectors|eigenvalues]] and their corresponding [[Eigenvalues and eigenvectors|eigenvectors]], since most of the noise has already been discarded in the previous step.
# Finally, the transformation is given by
#: <math>W=W_1W_2,</math>
#: where <math>W</math> [[Diagonalizable matrix#Diagonalization|diagonalizes]] both the numerator and denominator of the [[Signal-to-noise ratio|SNR]]: <math>W^TAW=D_A</math> and <math>W^TBW=I</math>. The transformation of the signal <math>s</math> is then <math>q=W^TX^Ts=W_2^TW_1^TX^Ts</math>.
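These steps can be sketched directly in code; a minimal illustration assuming a symmetric <math>A</math>, a positive-definite <math>B</math>, and strictly positive retained eigenvalues (the function name is hypothetical):
<syntaxhighlight lang="python">
import numpy as np

def simultaneous_diagonalization(A, B, k):
    # Step 2: diagonalize B; eigh returns its eigenvalues in increasing order.
    d_B, P = np.linalg.eigh(B)
    P_hat, d_hat = P[:, :k], d_B[:k]       # retain the k directions with the least noise energy

    # Step 3: W1 whitens B and reduces the dimensionality from m to k
    # (assumes the retained eigenvalues are strictly positive).
    W1 = P_hat / np.sqrt(d_hat)            # same as P_hat @ diag(d_hat)^(-1/2)

    # Step 4: diagonalize W1^T A W1 in the noise space.
    d_A, W2 = np.linalg.eigh(W1.T @ A @ W1)

    # Step 5: the combined transform diagonalizes both numerator and denominator of the SNR.
    W = W1 @ W2
    return W, d_A
</syntaxhighlight>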
 
=== Information loss ===
To quantify the information lost when some of the eigenvalues and eigenvectors are discarded, the following analysis can be performed:
 
<math> \begin{array}{lll}
\eta &=& 1- \frac{trace(W_1^TAW_1)}{trace(D_B^{-1/2}P^TAPD_B^{-1/2})}\\
    &=& 1- \frac{trace(\hat{D_B}^{-1/2}\hat{P}^TA\hat{P}\hat{D_B}^{-1/2})}{trace(D_B^{-1/2}P^TAPD_B^{-1/2})}
   
\end{array}
</math>
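This quantity can be evaluated numerically; a small sketch continuing the hypothetical names used in the sketch above:
<syntaxhighlight lang="python">
import numpy as np

def information_loss(A, B, W1):
    """Fraction of the trace energy of A lost by keeping only the retained noise-space directions."""
    d_B, P = np.linalg.eigh(B)
    full = P / np.sqrt(d_B)                # P D_B^(-1/2), i.e. the whitening applied to all m directions
    return 1.0 - np.trace(W1.T @ A @ W1) / np.trace(full.T @ A @ full)
</syntaxhighlight>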
 
== Eigenmoments ==
 
Eigenmoments are derived by applying the above framework to Geometric Moments. They can be derived for both 1D and 2D signals.
 
=== 1D signal ===
 
If we let <math>X=[1,x,x^2,...,x^{m-1}]</math>, i.e. the [[monomials]], then the transformation <math>X^T</math> yields the Geometric Moments of the signal <math>s=[s(x)]</math>, denoted by the vector <math>M=X^Ts</math>.
 
In practice it is difficult to estimate the correlation of the signal due to an insufficient number of samples, so parametric models are used instead.
 
One such model can be defined as:
 
<math>r(x_1,x_2)=r(0,0)e^{-c(x_1-x_2)^2}</math>,
 
[[Image:Signal Correlation model.png|400px|thumb|alt=Model for correlation in signal|Plot of the parametric model which predicts correlations in the input signal. <math>r(x_1,x_2)=r(0,0)e^{-c(x_1-x_2)^2}</math>]]
 
where <math>r(0,0)=E[tr(ss^T)]</math>. This correlation model can be replaced by other models; however, it covers general natural images well.
 
Since <math>r(0,0)</math> does not affect the maximization, it can be dropped.
 
<math>A=X^TCX=\int_{-1}^{1}\int_{-1}^{1}[x_1^j x_2^i e^{-c(x_1-x_2)^2}]_{i,j=0}^{i,j=m-1}dx_1dx_2</math>
 
The correlation of the noise can be modelled as <math>\sigma_n^2\delta(x_1,x_2)</math>, where <math>\sigma_n^2</math> is the energy of the noise. Again, <math>\sigma_n^2</math> can be dropped because this constant does not have any effect on the maximization problem.
 
<math>B=X^TNX=\int_{-1}^{1}\int_{-1}^{1}[x_1^j x_2^i\delta(x_1,x_2)]_{i,j=0}^{i,j=m-1}dx_1dx_2</math>
<math>B=X^TNX=\int_{-1}^{1}[x_1^{j+i}]_{i,j=0}^{i,j=m-1}dx_1=X^TX</math>
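As a sketch, both matrices can be computed numerically, assuming SciPy for the double integral; <math>c</math> and <math>m</math> below take the values used in the example that follows, and <math>B</math> uses the closed form <math>\int_{-1}^{1}x^{i+j}dx</math>:
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import dblquad

c, m = 0.5, 6

# A_{i,j} = int int x1^j x2^i exp(-c (x1 - x2)^2) dx1 dx2 over [-1, 1]^2 (numerical quadrature;
# the integrand's kernel is symmetric in x1 and x2, so A is symmetric).
A = np.array([[dblquad(lambda x1, x2: x1**j * x2**i * np.exp(-c * (x1 - x2)**2),
                       -1, 1, -1, 1)[0]
               for j in range(m)]
              for i in range(m)])

# B = X^T X in closed form: B_{i,j} = int_{-1}^{1} x^(i+j) dx = 2/(i+j+1) if i+j is even, else 0.
i_idx, j_idx = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
B = np.where((i_idx + j_idx) % 2 == 0, 2.0 / (i_idx + j_idx + 1), 0.0)
</syntaxhighlight>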
 
Using the computed <math>A</math> and <math>B</math> and applying the algorithm discussed in the previous section, we find <math>W</math> and a set of transformed [[monomials]] <math>\Phi=[\phi_1,...,\phi_k]=XW</math>, which are the moment kernels of EM. The moment kernels of EM decorrelate the image correlation,
 
<math>\Phi^TC\Phi=(XW)^TC(XW)=D_C</math>,
 
and are orthogonal:
 
<math> \begin{array}{lll}\Phi^T\Phi& = & (XW)^T(XW) \\
& = & W^TX^TXW\\
& = & W^TX^TNXW\\
& = & W^TBW\\
& = & I\\
\end{array}
</math>
 
==== Example computation ====
 
Taking <math>c=0.5</math>, the dimension of moment space as <math>m=6</math> and the dimension of feature space as <math>k=4</math>, we will have:
 
<math>W=
\left( \begin{array}{cccc}
0.0      & 0        & -0.7745 & -0.8960 \\
2.8669    & -4.4622  & 0.0    & 0.0    \\
0.0      & 0.0      & 7.9272  & 2.4523  \\
-4.0225  & 20.6505  & 0.0    & 0.0    \\
0.0      & 0.0      & -9.2789 & -0.1239 \\
-0.5092  & -18.4582 & 0.0    & 0.0      \end{array} \right)
</math>
 
and
 
<math> \begin{array}{lll}
\phi_1&=& 2.8669x - 4.0225x^3 - 0.5092x^5 \\
\phi_2&=&-4.4622x + 20.6505x^3 - 18.4582x^5 \\
\phi_3&=&-0.7745  + 7.9272x^2 - 9.2789x^4 \\
\phi_4&=&-0.8960  + 2.4523x^2 - 0.1239x^4 \\
\end{array}
</math>
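These kernels can be checked against the orthogonality property <math>\Phi^T\Phi=I</math> derived above; a quick numerical sketch using the tabulated coefficients (small deviations from the identity are expected because the coefficients are rounded):
<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import polynomial as poly

# Coefficients of phi_1..phi_4 in increasing powers of x, taken from the expressions above.
phis = [
    [0, 2.8669, 0, -4.0225, 0, -0.5092],
    [0, -4.4622, 0, 20.6505, 0, -18.4582],
    [-0.7745, 0, 7.9272, 0, -9.2789, 0],
    [-0.8960, 0, 2.4523, 0, -0.1239, 0],
]

def inner(p, q):
    """Exact polynomial integral int_{-1}^{1} p(x) q(x) dx."""
    antideriv = poly.polyint(poly.polymul(p, q))
    return poly.polyval(1.0, antideriv) - poly.polyval(-1.0, antideriv)

G = np.array([[inner(p, q) for q in phis] for p in phis])
print(np.round(G, 3))   # approximately the 4x4 identity matrix
</syntaxhighlight>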
 
=== 2D signal ===
 
The derivation for a 2D signal is the same as that for a 1D signal, except that conventional [[Image moment|Geometric Moments]] are directly employed to obtain the set of 2D EigenMoments.
 
The definition of the [[Image moment|Geometric Moment]] of order <math>(p+q)</math> for a 2D image signal <math>f(x,y)</math> is:
 
<math>m_{pq}=\int_{-1}^1\int_{-1}^1 x^py^qf(x,y)dxdy</math>.
 
These moments can be arranged in a matrix <math>M=\{m_{j,i}\}_{i,j=0}^{i,j=m-1}</math>. The set of 2D EigenMoments is then:
 
<math>\Omega=W^TMW</math>,
 
where <math>\Omega=\{\Omega_{j,i}\}_{i,j=0}^{i,j=k-1}</math> is a matrix that contains the set of EigenMoments.
 
<math>\Omega_{j,i}=\sum_{r=0}^{m-1}\sum_{s=0}^{m-1}w_{r,j}w_{s,i}m_{r,s}</math>.
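A sketch of this computation for a sampled image, assuming the kernel matrix <math>W</math> has already been obtained as in the 1D case, the image is sampled on a regular grid over <math>[-1,1]^2</math>, and the discrete sum approximates the integral (the function name is hypothetical):
<syntaxhighlight lang="python">
import numpy as np

def eigenmoments_2d(image, W):
    """Omega = W^T M W for an image with image[iy, ix] = f(x[ix], y[iy]) sampled over [-1, 1]^2."""
    ny, nx = image.shape
    m = W.shape[0]
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]

    Vx = np.vander(x, m, increasing=True)   # monomials 1, x, ..., x^(m-1) along the x axis
    Vy = np.vander(y, m, increasing=True)   # monomials along the y axis
    M = Vx.T @ image.T @ Vy * dx * dy       # M[p, q] ~ m_{pq} = int int x^p y^q f(x, y) dx dy
    return W.T @ M @ W                      # the k x k matrix of 2D EigenMoments
</syntaxhighlight>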
 
=== EigenMoment invariants (EMI)  ===
In order to obtain a set of moment invariants we can use [[Image_moment#Rotation_invariant_moments|normalized Geometric Moments]] <math>\hat M</math> instead of <math>M</math>.
 
[[Image_moment#Rotation_invariant_moments|Normalized Geometric Moments]] are invariant to rotation, scaling and translation, and are defined by:
 
<math>\begin{array}{lll}
\hat m_{pq} & = & \alpha^{p+q+2}\int_{-1}^{1}\int_{-1}^{1}[(x-x^c)\cos(\theta)+(y-y^c)\sin(\theta)]^p\\
            &   & \times [-(x-x^c)\sin(\theta)+(y-y^c)\cos(\theta)]^q\\
            &   & \times f(x,y)\,dx\,dy,\\
\end{array}
</math>
 
where <math> (x^c,y^c) = (m_{10}/m_{00},m_{01}/m_{00}) </math> is the centroid of the image <math>f(x,y)</math>, and
 
<math>\begin{array}{lll}
\alpha&=&[m_{00}^{S}/m_{00}]^{1/2}\\
\theta&=&\frac{1}{2}\tan^{-1}\left(\frac{2m_{11}}{m_{20}-m_{02}}\right)
\end{array}
</math>.
 
<math>m_{00}^{S}</math> in this equation is a scaling factor depending on the image; it is usually set to 1 for binary images.
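A sketch of computing these normalization parameters (centroid, scale <math>\alpha</math> and orientation <math>\theta</math>) from the raw moments of a sampled image, with <math>m_{00}^{S}=1</math> as for binary images; <code>arctan2</code> is used here instead of <math>\tan^{-1}</math> only to avoid division by zero when <math>m_{20}=m_{02}</math>, and the function name is hypothetical:
<syntaxhighlight lang="python">
import numpy as np

def normalization_parameters(image, m00_S=1.0):
    """Centroid (xc, yc), scale alpha and orientation theta from raw geometric moments."""
    ny, nx = image.shape
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]
    Xg, Yg = np.meshgrid(x, y)               # Xg[iy, ix] = x[ix], Yg[iy, ix] = y[iy]

    def moment(p, q):                        # raw geometric moment m_{pq}
        return np.sum(Xg**p * Yg**q * image) * dx * dy

    m00 = moment(0, 0)
    xc, yc = moment(1, 0) / m00, moment(0, 1) / m00
    alpha = np.sqrt(m00_S / m00)
    theta = 0.5 * np.arctan2(2.0 * moment(1, 1), moment(2, 0) - moment(0, 2))
    return xc, yc, alpha, theta
</syntaxhighlight>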
 
== See also ==
* [[Computer vision]]
* [[Signal processing]]
* [[Image moment]]
 
== References ==
{{Reflist}}
 
== External links ==
* [http://www.btabibian.com/labbook/eigenmoments implementation of EigenMoments in Matlab]
 
[[Category:Signal processing]]
[[Category:Computer vision]]
[[Category:Articles created via the Article Wizard]]
