# Leverage (statistics)

In statistics, leverage is a term used in connection with regression analysis and, in particular, in analyses aimed at identifying observations that lie far from the corresponding average predictor values. Leverage points do not necessarily have a large effect on the outcome of fitting regression models: high leverage indicates the potential to influence the fit, not that the fit has actually been distorted.

Leverage points are those observations, if any, made at extreme or outlying values of the independent variables such that the lack of neighboring observations means that the fitted regression model will pass close to that particular observation.[1]

Modern computer packages for statistical analysis include, as part of their facilities for regression analysis, various quantitative measures for identifying influential observations: among these measures is partial leverage, a measure of how a variable contributes to the leverage of a datum.

## Definition

The leverage score for the ${\displaystyle i^{th}}$ data unit is defined as:

${\displaystyle h_{ii}=[H]_{ii},}$

the ${\displaystyle i^{th}}$ diagonal element of the projection (hat) matrix ${\displaystyle H=X(X'X)^{-1}X'}$, where ${\displaystyle X}$ is the design matrix. Equivalently, ${\displaystyle h_{ii}=x_{i}'(X'X)^{-1}x_{i}}$, where ${\displaystyle x_{i}}$ is the ${\displaystyle i^{th}}$ row of ${\displaystyle X}$.
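The diagonal of the hat matrix ${\displaystyle H=X(X'X)^{-1}X'}$ gives each observation's leverage, so the scores can be computed directly with linear algebra. A minimal NumPy sketch, using made-up illustrative data (one outlying predictor value at ${\displaystyle x=10}$):

```python
import numpy as np

# Illustrative data: n = 5 observations, intercept plus one predictor.
# (The values are made up for demonstration; x = 10 is the outlier.)
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 10.0])])

# Hat matrix H = X (X'X)^{-1} X'; its diagonal holds the leverages h_ii.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

print(leverage)  # the outlying x = 10 receives the largest leverage
```

As expected, the observation farthest from the mean of the predictor gets the largest leverage score.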

## Properties

The leverage ${\displaystyle h_{ii}}$ always satisfies ${\displaystyle 0\leq h_{ii}\leq 1}$.

### Proof

First, note that ${\displaystyle H^{2}=X(X'X)^{-1}X'X(X'X)^{-1}X'=XI(X'X)^{-1}X'=H}$, i.e. ${\displaystyle H}$ is idempotent. Also, observe that ${\displaystyle H}$ is symmetric. So, equating the ${\displaystyle i^{th}}$ diagonal elements of ${\displaystyle H}$ and ${\displaystyle H^{2}}$, we have

${\displaystyle h_{ii}=h_{ii}^{2}+\sum _{j\neq i}h_{ij}^{2}\geq 0}$

and

${\displaystyle h_{ii}\geq h_{ii}^{2}\implies h_{ii}\leq 1.}$
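These properties, together with the standard trace identity ${\displaystyle \operatorname {tr} (H)=p}$ (so the leverages sum to the number of parameters), can be checked numerically. A small sketch with a randomly generated design matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))  # random full-rank design matrix

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

# H is idempotent: H @ H == H.
assert np.allclose(H @ H, H)
# Every leverage lies in [0, 1], as proved above.
assert np.all(h >= -1e-12) and np.all(h <= 1 + 1e-12)
# trace(H) = p, so the leverages sum to the number of parameters.
assert np.isclose(h.sum(), p)
```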

If we are in an ordinary least squares setting with fixed ${\displaystyle X}$ and homoscedastic, uncorrelated errors, ${\displaystyle \operatorname {var} (\epsilon )=\sigma ^{2}I}$, then the ${\displaystyle i^{th}}$ regression residual ${\displaystyle e_{i}=y_{i}-{\hat {y}}_{i}}$ has variance

${\displaystyle \operatorname {var} (e_{i})=(1-h_{ii})\sigma ^{2}.}$

In other words, if the errors ${\displaystyle \epsilon }$ are homoscedastic, an observation's leverage score determines the degree of noise in the model's misprediction of that observation.

### Proof

First, note that ${\displaystyle I-H}$ is idempotent and symmetric, and that ${\displaystyle \operatorname {var} (Y)=\sigma ^{2}I}$ under the model assumptions. This gives ${\displaystyle \operatorname {var} (e)=\operatorname {var} ((I-H)Y)=(I-H)\operatorname {var} (Y)(I-H)'=\sigma ^{2}(I-H)^{2}=\sigma ^{2}(I-H)}$. Taking the ${\displaystyle i^{th}}$ diagonal element yields ${\displaystyle \operatorname {var} (e_{i})=(1-h_{ii})\sigma ^{2}}$.
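The residual-variance result can be verified by Monte Carlo simulation: holding the design matrix fixed, repeatedly drawing homoscedastic noise, and comparing the empirical residual variances with ${\displaystyle (1-h_{ii})\sigma ^{2}}$. A sketch with made-up parameters (${\displaystyle n=8}$, ${\displaystyle p=2}$, ${\displaystyle \sigma =1.5}$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 8, 2, 1.5
X = rng.normal(size=(n, p))          # fixed design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

beta = np.array([2.0, -1.0])
reps = 200_000
# Simulate many datasets with homoscedastic noise, var(eps) = sigma^2 I.
E = rng.normal(scale=sigma, size=(reps, n))
Y = X @ beta + E                     # each row is one simulated dataset
residuals = Y @ (np.eye(n) - H)      # e = (I - H) y; I - H is symmetric

empirical_var = residuals.var(axis=0)
theoretical_var = sigma**2 * (1 - h)
print(np.max(np.abs(empirical_var - theoretical_var)))  # expected to be small
```

With 200,000 replications the empirical variances should agree with the theoretical ones to within a few hundredths, which illustrates that low-leverage observations carry most of the residual noise.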