# Matrix normal distribution

In statistics, the matrix normal distribution is a probability distribution that generalizes the multivariate normal distribution to matrix-valued random variables.

## Definition

The probability density function for the random matrix X (n × p) that follows the matrix normal distribution ${\displaystyle {\mathcal {MN}}_{n,p}({\mathbf {M} },{\mathbf {U} },{\mathbf {V} })}$ has the form:

${\displaystyle p({\mathbf {X} }|{\mathbf {M} },{\mathbf {U} },{\mathbf {V} })={\frac {\exp \left(-{\frac {1}{2}}\,{\mathrm {tr} }\left[{\mathbf {V} }^{-1}({\mathbf {X} }-{\mathbf {M} })^{T}{\mathbf {U} }^{-1}({\mathbf {X} }-{\mathbf {M} })\right]\right)}{(2\pi )^{np/2}|{\mathbf {V} }|^{n/2}|{\mathbf {U} }|^{p/2}}}}$

where ${\displaystyle {\mathrm {tr} }}$ denotes the trace, M is n × p, U is an n × n positive-definite matrix (the among-row covariance), and V is a p × p positive-definite matrix (the among-column covariance).
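The density can be evaluated directly from the trace form above. The helper below is a minimal sketch (the function name `matnorm_logpdf` is ours, not from a standard library); it uses linear solves rather than explicit inverses:

```python
import numpy as np

def matnorm_logpdf(X, M, U, V):
    """Log-density of MN_{n,p}(M, U, V) via the trace formula."""
    n, p = X.shape
    D = X - M
    # Quadratic form: tr[V^{-1} D^T U^{-1} D], computed with solves
    quad = np.trace(np.linalg.solve(V, D.T) @ np.linalg.solve(U, D))
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_V = np.linalg.slogdet(V)
    # Normalizing constant: (2*pi)^{np/2} |V|^{n/2} |U|^{p/2}
    return -0.5 * (quad + n * p * np.log(2 * np.pi)
                   + n * logdet_V + p * logdet_U)
```

With M = 0 and identity covariances, this reduces to the standard normal log-density of the np entries.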

The matrix normal is related to the multivariate normal distribution in the following way:

${\displaystyle {\mathbf {X} }\sim {\mathcal {MN}}_{n\times p}({\mathbf {M} },{\mathbf {U} },{\mathbf {V} }),}$

if and only if

${\displaystyle {\mathrm {vec} }({\mathbf {X} })\sim {\mathcal {N}}_{np}({\mathrm {vec} }({\mathbf {M} }),{\mathbf {V} }\otimes {\mathbf {U} })}$
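This equivalence can be checked numerically by comparing the matrix normal log-density with the multivariate normal log-density of the column-stacked vector. A minimal sketch, assuming SciPy is available (here vec stacks columns, i.e. `order="F"`):

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(0)
n, p = 3, 2
M = rng.standard_normal((n, p))
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)  # PD row covariance
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)  # PD column covariance
X = rng.standard_normal((n, p))

# Matrix normal log-density of X ...
lp_matrix = matrix_normal(mean=M, rowcov=U, colcov=V).logpdf(X)
# ... equals the multivariate normal log-density of vec(X)
lp_vec = multivariate_normal(mean=M.flatten(order="F"),
                             cov=np.kron(V, U)).logpdf(X.flatten(order="F"))
assert np.isclose(lp_matrix, lp_vec)
```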

### Proof

The equivalence between the above matrix normal and multivariate normal density functions can be shown using several properties of the trace and Kronecker product, as follows. We start with the argument of the exponent of the matrix normal PDF:

${\displaystyle {\begin{aligned}&\;\;\;\;-{\frac {1}{2}}{\text{tr}}\left[\mathbf {V} ^{-1}(\mathbf {X} -\mathbf {M} )^{T}\mathbf {U} ^{-1}(\mathbf {X} -\mathbf {M} )\right]\\&=-{\frac {1}{2}}{\text{vec}}\left(\mathbf {X} -\mathbf {M} \right)^{T}{\text{vec}}\left(\mathbf {U} ^{-1}(\mathbf {X} -\mathbf {M} )\mathbf {V} ^{-1}\right)\\&=-{\frac {1}{2}}{\text{vec}}\left(\mathbf {X} -\mathbf {M} \right)^{T}\left(\mathbf {V} ^{-1}\otimes \mathbf {U} ^{-1}\right){\text{vec}}\left(\mathbf {X} -\mathbf {M} \right)\\&=-{\frac {1}{2}}\left[{\text{vec}}(\mathbf {X} )-{\text{vec}}(\mathbf {M} )\right]^{T}\left(\mathbf {V} \otimes \mathbf {U} \right)^{-1}\left[{\text{vec}}(\mathbf {X} )-{\text{vec}}(\mathbf {M} )\right]\end{aligned}}}$

which is the argument of the exponent of the multivariate normal PDF. The proof is completed by using the determinant property: ${\displaystyle |\mathbf {V} \otimes \mathbf {U} |=|\mathbf {V} |^{n}|\mathbf {U} |^{p}.}$
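The determinant property used in the last step can be verified numerically for any pair of square matrices. A short sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 3
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)  # n x n PD
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)  # p x p PD

# |V kron U| = |V|^n * |U|^p
lhs = np.linalg.det(np.kron(V, U))
rhs = np.linalg.det(V) ** n * np.linalg.det(U) ** p
assert np.isclose(lhs, rhs, rtol=1e-8)
```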

## Properties

If ${\displaystyle \mathbf {X} \sim {\mathcal {MN}}_{n\times p}(\mathbf {M} ,\mathbf {U} ,\mathbf {V} )}$, then we have the following properties:[1]

### Expected values

The mean, or expected value, is:

${\displaystyle E[\mathbf {X} ]=\mathbf {M} }$

and we have the following second-order expectations:[2]

${\displaystyle E[(\mathbf {X} -\mathbf {M} )(\mathbf {X} -\mathbf {M} )^{T}]=\mathbf {U} \operatorname {tr} (\mathbf {V} )}$
${\displaystyle E[(\mathbf {X} -\mathbf {M} )^{T}(\mathbf {X} -\mathbf {M} )]=\mathbf {V} \operatorname {tr} (\mathbf {U} )}$

More generally, for appropriately dimensioned matrices A (p × p), B (n × n) and C (p × n):

${\displaystyle {\begin{aligned}E[\mathbf {X} \mathbf {A} \mathbf {X} ^{T}]&=\mathbf {U} \operatorname {tr} (\mathbf {A} ^{T}\mathbf {V} )+\mathbf {MAM} ^{T}\\E[\mathbf {X} ^{T}\mathbf {B} \mathbf {X} ]&=\mathbf {V} \operatorname {tr} (\mathbf {U} \mathbf {B} ^{T})+\mathbf {M} ^{T}\mathbf {BM} \\E[\mathbf {X} \mathbf {C} \mathbf {X} ]&=\mathbf {U} \mathbf {C} ^{T}\mathbf {V} +\mathbf {MCM} \end{aligned}}}$
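The two centered second-moment identities can be checked deterministically (no sampling) from the vec characterization, since Cov(vec X) = V ⊗ U stores every entry covariance Cov(X[i,k], X[j,l]) = V[k,l]·U[i,j]. A minimal sketch (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 4, 3
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)

# Reshape V kron U so that S[k, i, l, j] = Cov(X[i,k], X[j,l]) = V[k,l] * U[i,j]
S = np.kron(V, U).reshape(p, n, p, n)

# E[(X-M)(X-M)^T]_{ij} = sum_k Cov(X[i,k], X[j,k]) = U[i,j] * tr(V)
row_moment = np.einsum("kikj->ij", S)
assert np.allclose(row_moment, U * np.trace(V))

# E[(X-M)^T(X-M)]_{kl} = sum_i Cov(X[i,k], X[i,l]) = V[k,l] * tr(U)
col_moment = np.einsum("kili->kl", S)
assert np.allclose(col_moment, V * np.trace(U))
```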

### Transformation

Transpose transform:

${\displaystyle \mathbf {X} ^{T}\sim {\mathcal {MN}}_{p\times n}(\mathbf {M} ^{T},\mathbf {V} ,\mathbf {U} )}$

Linear transform: let D (r × n) be of full rank r ≤ n and C (p × s) be of full rank s ≤ p; then:

${\displaystyle \mathbf {DXC} \sim {\mathcal {MN}}_{r\times s}(\mathbf {DMC} ,\mathbf {DUD} ^{T},\mathbf {C} ^{T}\mathbf {VC} )}$
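The transformed row and column covariances follow from vec(DXC) = (Cᵀ ⊗ D) vec(X) together with the Kronecker mixed-product rule, which can be checked numerically. A short sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, r, s = 4, 3, 2, 2
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)
Bm = rng.standard_normal((p, p)); V = Bm @ Bm.T + p * np.eye(p)
D = rng.standard_normal((r, n))   # full rank r <= n (a.s. for random D)
C = rng.standard_normal((p, s))   # full rank s <= p

# Cov(vec(DXC)) = (C^T kron D)(V kron U)(C kron D^T)
#              = (C^T V C) kron (D U D^T)   [mixed-product rule]
T = np.kron(C.T, D)
lhs = T @ np.kron(V, U) @ T.T
rhs = np.kron(C.T @ V @ C, D @ U @ D.T)
assert np.allclose(lhs, rhs)
```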

## Example

Consider a sample of n independent p-dimensional random vectors, identically distributed according to a multivariate normal distribution:

${\displaystyle {\mathbf {Y} }_{i}\sim {\mathcal {N}}_{p}({\boldsymbol {\mu }},{\boldsymbol {\Sigma }}){\text{ with }}i\in \{1,\ldots ,n\}}$.

When defining the n × p matrix ${\displaystyle {\mathbf {X} }}$ for which the ith row is ${\displaystyle {\mathbf {Y} }_{i}^{T}}$, we obtain:

${\displaystyle {\mathbf {X} }\sim {\mathcal {MN}}_{n\times p}({\mathbf {M} },{\mathbf {U} },{\mathbf {V} })}$

where each row of ${\displaystyle {\mathbf {M} }}$ is ${\displaystyle {\boldsymbol {\mu }}^{T}}$, ${\displaystyle {\mathbf {U} }=\mathbf {I} _{n}}$ (the rows are independent), and ${\displaystyle {\mathbf {V} }={\boldsymbol {\Sigma }}}$.
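In this case the joint density of the iid rows factorizes into a matrix normal density with U = Iₙ and V = Σ, which can be checked with SciPy's `matrix_normal`. A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(4)
n, p = 5, 3
mu = rng.standard_normal(p)
A = rng.standard_normal((p, p)); Sigma = A @ A.T + p * np.eye(p)

# n iid rows Y_i ~ N_p(mu, Sigma), stacked into an n x p matrix
Y = rng.multivariate_normal(mu, Sigma, size=n)

# Sum of the per-row log-densities ...
lp_rows = sum(multivariate_normal(mu, Sigma).logpdf(Y[i]) for i in range(n))
# ... equals the matrix normal log-density with U = I_n, V = Sigma
lp_mat = matrix_normal(mean=np.tile(mu, (n, 1)), rowcov=np.eye(n),
                       colcov=Sigma).logpdf(Y)
assert np.isclose(lp_rows, lp_mat)
```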

## Relation to other distributions

Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions, including the Wishart distribution, Inverse Wishart distribution and matrix t-distribution, but uses different notation from that employed here.
