# Studentized residual

In statistics, a studentized residual is the quotient resulting from the division of a residual by an estimate of its standard deviation. Typically the standard deviations of residuals in a sample vary greatly from one data point to another even when the errors all have the same standard deviation, particularly in regression analysis; thus it does not make sense to compare residuals at different data points without first studentizing. It is a form of a Student's t-statistic, with the estimate of error varying between points.

This is an important technique in the detection of outliers. It is named in honor of William Sealy Gosset, who wrote under the pseudonym Student; dividing by an estimate of scale is called studentizing, in analogy with standardizing and normalizing: see Studentization.

## Motivation

The key reason for studentizing is that, in regression analysis of a multivariate distribution, the variances of the residuals at different input variable values may differ, even if the variances of the errors at these different input variable values are equal. The issue is the difference between errors and residuals in statistics, particularly the behavior of residuals in regressions.

Consider the simple linear regression model

$Y=\alpha _{0}+\alpha _{1}X+\varepsilon .\,$

Given a random sample $(X_{i},Y_{i})$, $i=1,\ldots ,n$, each pair $(X_{i},Y_{i})$ satisfies

$Y_{i}=\alpha _{0}+\alpha _{1}X_{i}+\varepsilon _{i},\,$

where the errors $\varepsilon _{i}$ are independent and all have the same variance $\sigma ^{2}$. The residuals are not the true, unobservable errors, but rather estimates, based on the observable data, of the errors. When the method of least squares is used to estimate $\alpha _{0}$ and $\alpha _{1}$, the residuals ${\widehat {\varepsilon }}$, unlike the errors $\varepsilon$, cannot be independent, since they satisfy the two constraints

$\sum _{i=1}^{n}{\widehat {\varepsilon }}_{i}=0$

and

$\sum _{i=1}^{n}{\widehat {\varepsilon }}_{i}x_{i}=0.$

(Here $\varepsilon _{i}$ is the ith error, and ${\widehat {\varepsilon }}_{i}$ is the ith residual.)
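These two constraints are easy to verify numerically. The following sketch (NumPy assumed; the data and names are illustrative) fits a simple linear regression by least squares and confirms that the residuals satisfy both sums:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 1.5 + 2.0 * x + rng.normal(size=x.size)

# Ordinary least squares fit of y = a0 + a1*x.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

print(np.isclose(resid.sum(), 0.0))        # True: residuals sum to zero
print(np.isclose((resid * x).sum(), 0.0))  # True: residuals orthogonal to x
```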

Moreover, and most importantly, the residuals, unlike the errors, do not all have the same variance: the variance decreases as the corresponding x-value gets farther from the average x-value. This is a feature of the regression better fitting values at the ends of the domain, not the data itself, and is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence. This can also be seen because the residuals at endpoints depend greatly on the slope of a fitted line, while the residuals at the middle are relatively insensitive to the slope. The fact that the variances of the residuals differ, even though the variances of the true errors are all equal to each other, is the principal reason for the need for studentization.

It is not simply a matter of the population parameters (mean and standard deviation) being unknown – it is that regressions yield different residual distributions at different data points, unlike point estimators of univariate distributions, which share a common distribution for residuals.

## How to studentize

For this simple model, the design matrix is

$X=\left[{\begin{matrix}1&x_{1}\\\vdots &\vdots \\1&x_{n}\end{matrix}}\right]$

and the hat matrix H is the matrix of the orthogonal projection onto the column space of the design matrix:

$H=X(X^{T}X)^{-1}X^{T}.\,$

The "leverage" $h_{ii}$ is the ith diagonal entry of the hat matrix. The variance of the ith residual is

$\operatorname {var} ({\widehat {\varepsilon }}_{i})=\sigma ^{2}(1-h_{ii}).$

When the design matrix X has only two columns (as in the example above), this is equal to

$\operatorname {var} ({\widehat {\varepsilon }}_{i})=\sigma ^{2}\left(1-{\frac {1}{n}}-{\frac {(x_{i}-{\bar {x}})^{2}}{\sum _{j=1}^{n}(x_{j}-{\bar {x}})^{2}}}\right).$

The corresponding studentized residual is then

${{\widehat {\varepsilon }}_{i} \over {\widehat {\sigma }}{\sqrt {1-h_{ii}\ }}},$

where ${\widehat {\sigma }}$ is an appropriate estimate of σ (see below).
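As a concrete illustration, the following sketch (NumPy assumed; the function name is illustrative) computes internally studentized residuals for the two-column design above via the hat matrix, using the usual variance estimate defined in the next section:

```python
import numpy as np

def internally_studentized(x, y):
    """Internally studentized residuals of the simple linear fit
    y = a0 + a1*x, computed via the hat matrix."""
    X = np.column_stack([np.ones_like(x), x])   # design matrix
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
    h = np.diag(H)                              # leverages h_ii
    resid = y - H @ y                           # residuals
    n, m = X.shape                              # m = 2 parameters here
    sigma2 = (resid ** 2).sum() / (n - m)       # usual estimate of sigma^2
    return resid / np.sqrt(sigma2 * (1.0 - h))
```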

## Internal and external studentization

The usual estimate of $\sigma ^{2}$ is

${\widehat {\sigma }}^{2}={1 \over n-m}\sum _{j=1}^{n}{\widehat {\varepsilon }}_{j}^{\,2},$

where m is the number of parameters in the model (2 in our example). It is desirable, however, to exclude the ith observation from the process of estimating the variance when considering whether the ith case may be an outlier. Consequently, one may use the estimate

${\widehat {\sigma }}_{(i)}^{2}={1 \over n-m-1}\sum _{\begin{smallmatrix}j=1\\j\neq i\end{smallmatrix}}^{n}{\widehat {\varepsilon }}_{j}^{\,2},$

based on all but the ith case. If the latter estimate is used, excluding the ith case, then the residual is said to be externally studentized; if the former is used, including the ith case, then it is internally studentized.
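Both variants can be written compactly. The sketch below (NumPy assumed; names are illustrative) implements the two variance estimates above literally, switching between them with a flag:

```python
import numpy as np

def studentized(x, y, external=False):
    """Internally or externally studentized residuals of the simple
    linear fit y = a0 + a1*x."""
    X = np.column_stack([np.ones_like(x), x])
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    resid = y - H @ y
    n, m = X.shape
    rss = (resid ** 2).sum()
    if external:
        # sigma_(i)^2 sums the squared residuals of all cases except i.
        sigma2 = (rss - resid ** 2) / (n - m - 1)
    else:
        sigma2 = rss / (n - m)
    return resid / np.sqrt(sigma2 * (1.0 - h))
```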

If the errors are independent and normally distributed with expected value 0 and variance $\sigma ^{2}$, then the probability distribution of the ith externally studentized residual is a Student's t-distribution with n − m − 1 degrees of freedom, and can range from $-\infty$ to $+\infty$.

On the other hand, the internally studentized residuals are in the range $0\,\pm \,{\sqrt {\mathrm {r.d.f.} }}$ , where r.d.f. is the number of residual degrees of freedom, namely n − m. If "i.s.r." represents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then

$\mathrm {i.s.r.} ^{2}=\mathrm {r.d.f.} {t^{2} \over t^{2}+\mathrm {r.d.f.} -1},$

where t is a random variable distributed as Student's t-distribution with r.d.f. − 1 degrees of freedom. In fact, this implies that $\mathrm {i.s.r.} ^{2}/\mathrm {r.d.f.}$ follows the beta distribution $B(1/2,(\mathrm {r.d.f.} -1)/2)$. When r.d.f. = 3, the internally studentized residuals are uniformly distributed between $-{\sqrt {3}}$ and $+{\sqrt {3}}$.
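The uniform case r.d.f. = 3 is easy to check by simulation. The sketch below (NumPy assumed; the five-point design is illustrative) fits a straight line so that n − m = 3 and examines the empirical distribution of one internally studentized residual:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, reps = 5, 2, 100_000          # n - m = 3 residual degrees of freedom
x = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

isr0 = np.empty(reps)
for k in range(reps):
    y = rng.normal(size=n)          # i.i.d. Gaussian errors; the residuals
    resid = y - H @ y               # are unaffected by the true intercept/slope
    sigma2 = (resid ** 2).sum() / (n - m)
    isr0[k] = resid[0] / np.sqrt(sigma2 * (1.0 - h[0]))

# Uniform on (-sqrt(3), +sqrt(3)) means mean ~ 0, variance ~ 1,
# and e.g. P(isr < sqrt(3)/2) ~ 0.75.
print(isr0.mean(), isr0.var())
print((isr0 < np.sqrt(3) / 2).mean())
```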

If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals does not apply. In this case, the i.s.r.'s are all either +1 or −1, with a 50% chance for each.

The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all the i.s.r.'s of a particular experiment is 1. For instance, the internally studentized residuals when fitting a straight line going through (0, 0) to the points (1, 4), (2, −1), (2, −1) are ${\sqrt {2}},\ -{\sqrt {5}}/5,\ -{\sqrt {5}}/5$ , and the standard deviation of these is not 1.
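This example can be reproduced directly (a NumPy sketch; the numbers are those in the text):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([4.0, -1.0, -1.0])

X = x[:, None]                          # one-column design: line through (0, 0)
H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix (the fitted slope is 0 here)
h = np.diag(H)
resid = y - H @ y
sigma2 = (resid ** 2).sum() / (len(x) - 1)   # n - m with m = 1 parameter
isr = resid / np.sqrt(sigma2 * (1.0 - h))

print(isr)               # ~ [ 1.4142, -0.4472, -0.4472]
print(isr.std(ddof=1))   # ~ 1.07, not 1
```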