# Bicubic interpolation

*Figure: Bicubic interpolation on the square ${\displaystyle [0,3]\times [0,3]}$, consisting of 9 unit squares patched together, as per MATLAB's implementation. Colour indicates function value; the black dots are the locations of the prescribed data being interpolated. Note that the colour samples are not radially symmetric.*
*Figure: Bilinear interpolation on the same dataset. Derivatives of the surface are not continuous across the square boundaries.*
*Figure: Nearest-neighbor interpolation on the same dataset. The information content of all three examples is identical.*

In mathematics, bicubic interpolation is an extension of cubic interpolation to interpolating data points on a two-dimensional regular grid. The interpolated surface is smoother than the corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using Lagrange polynomials, cubic splines, or the cubic convolution algorithm.

In image processing, bicubic interpolation is often chosen over bilinear or nearest-neighbor interpolation in image resampling, when speed is not an issue. In contrast to bilinear interpolation, which only takes 4 pixels (2×2) into account, bicubic interpolation considers 16 pixels (4×4). Images resampled with bicubic interpolation are smoother and have fewer interpolation artifacts.

Suppose the function values ${\displaystyle f}$ and the derivatives ${\displaystyle f_{x}}$, ${\displaystyle f_{y}}$ and ${\displaystyle f_{xy}}$ are known at the four corners ${\displaystyle (0,0)}$, ${\displaystyle (1,0)}$, ${\displaystyle (0,1)}$, and ${\displaystyle (1,1)}$ of the unit square. The interpolated surface can then be written

${\displaystyle p(x,y)=\sum _{i=0}^{3}\sum _{j=0}^{3}a_{ij}x^{i}y^{j}.}$
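Evaluating this double sum is straightforward; the function name and the 4×4 coefficient layout in the sketch below are illustrative assumptions, not part of any standard API:

```python
def eval_bicubic(a, x, y):
    """Evaluate p(x, y) = sum_ij a[i][j] * x**i * y**j,
    where a is a 4x4 array and a[i][j] multiplies x**i * y**j."""
    return sum(a[i][j] * x**i * y**j
               for i in range(4) for j in range(4))
```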

The interpolation problem consists of determining the 16 coefficients ${\displaystyle a_{ij}}$. Matching ${\displaystyle p(x,y)}$ with the function values yields four equations:

${\displaystyle f(0,0)=p(0,0)=a_{00}}$
${\displaystyle f(1,0)=p(1,0)=a_{00}+a_{10}+a_{20}+a_{30}}$
${\displaystyle f(0,1)=p(0,1)=a_{00}+a_{01}+a_{02}+a_{03}}$
${\displaystyle f(1,1)=p(1,1)=\textstyle \sum _{i=0}^{3}\sum _{j=0}^{3}a_{ij}}$

Likewise, eight equations for the derivatives in the ${\displaystyle x}$-direction and the ${\displaystyle y}$-direction:

${\displaystyle f_{x}(0,0)=p_{x}(0,0)=a_{10}}$
${\displaystyle f_{x}(1,0)=p_{x}(1,0)=a_{10}+2a_{20}+3a_{30}}$
${\displaystyle f_{x}(0,1)=p_{x}(0,1)=a_{10}+a_{11}+a_{12}+a_{13}}$
${\displaystyle f_{x}(1,1)=p_{x}(1,1)=\textstyle \sum _{i=1}^{3}\sum _{j=0}^{3}a_{ij}i}$
${\displaystyle f_{y}(0,0)=p_{y}(0,0)=a_{01}}$
${\displaystyle f_{y}(1,0)=p_{y}(1,0)=a_{01}+a_{11}+a_{21}+a_{31}}$
${\displaystyle f_{y}(0,1)=p_{y}(0,1)=a_{01}+2a_{02}+3a_{03}}$
${\displaystyle f_{y}(1,1)=p_{y}(1,1)=\textstyle \sum _{i=0}^{3}\sum _{j=1}^{3}a_{ij}j}$

And four equations for the cross derivative ${\displaystyle f_{xy}}$:

${\displaystyle f_{xy}(0,0)=p_{xy}(0,0)=a_{11}}$
${\displaystyle f_{xy}(1,0)=p_{xy}(1,0)=a_{11}+2a_{21}+3a_{31}}$
${\displaystyle f_{xy}(0,1)=p_{xy}(0,1)=a_{11}+2a_{12}+3a_{13}}$
${\displaystyle f_{xy}(1,1)=p_{xy}(1,1)=\textstyle \sum _{i=1}^{3}\sum _{j=1}^{3}a_{ij}ij}$

where the expressions above use the following identities:

${\displaystyle p_{x}(x,y)=\textstyle \sum _{i=1}^{3}\sum _{j=0}^{3}a_{ij}ix^{i-1}y^{j}}$
${\displaystyle p_{y}(x,y)=\textstyle \sum _{i=0}^{3}\sum _{j=1}^{3}a_{ij}x^{i}jy^{j-1}}$
${\displaystyle p_{xy}(x,y)=\textstyle \sum _{i=1}^{3}\sum _{j=1}^{3}a_{ij}ix^{i-1}jy^{j-1}}$.

This procedure yields a surface ${\displaystyle p(x,y)}$ on the unit square ${\displaystyle [0,1]\times [0,1]}$ that is continuous and has continuous derivatives. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries.

If the derivatives are unknown, they are typically approximated from the function values at points neighbouring the corners of the unit square, e.g. using finite differences.
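For samples on a unit-spaced grid, the usual finite-difference choice is central differences. A minimal NumPy sketch (the function name and zero-padded boundaries are illustrative assumptions):

```python
import numpy as np

def grid_derivatives(f):
    """Approximate f_x, f_y, f_xy at interior grid nodes by central
    differences, for samples f[j, i] = f(x_i, y_j) on a unit-spaced grid.
    Boundary rows/columns are left as zero for simplicity."""
    fx = np.zeros_like(f)
    fy = np.zeros_like(f)
    fxy = np.zeros_like(f)
    fx[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / 2.0          # d/dx
    fy[1:-1, :] = (f[2:, :] - f[:-2, :]) / 2.0          # d/dy
    fxy[1:-1, 1:-1] = (f[2:, 2:] - f[2:, :-2]           # d2/dxdy
                       - f[:-2, 2:] + f[:-2, :-2]) / 4.0
    return fx, fy, fxy
```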

Grouping the unknown parameters ${\displaystyle a_{ij}}$ in a vector,

${\displaystyle \alpha =\left[{\begin{smallmatrix}a_{00}&a_{10}&a_{20}&a_{30}&a_{01}&a_{11}&a_{21}&a_{31}&a_{02}&a_{12}&a_{22}&a_{32}&a_{03}&a_{13}&a_{23}&a_{33}\end{smallmatrix}}\right]^{T}}$

and letting

${\displaystyle x=\left[{\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&f_{x}(0,0)&f_{x}(1,0)&f_{x}(0,1)&f_{x}(1,1)&f_{y}(0,0)&f_{y}(1,0)&f_{y}(0,1)&f_{y}(1,1)&f_{xy}(0,0)&f_{xy}(1,0)&f_{xy}(0,1)&f_{xy}(1,1)\end{smallmatrix}}\right]^{T}}$,

the problem can be reformulated as a linear equation ${\displaystyle A\alpha =x}$, where ${\displaystyle A}$ is a fixed, invertible 16×16 matrix obtained by evaluating the monomials ${\displaystyle x^{i}y^{j}}$ and their derivatives at the four corners. The coefficients are then given by ${\displaystyle \alpha =A^{-1}x}$.
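Rather than writing out the 16×16 matrix, it can be generated programmatically: each row expresses one corner constraint in terms of the coefficients. This sketch assumes the orderings of ${\displaystyle \alpha }$ and ${\displaystyle x}$ given above (the helper names are illustrative):

```python
import numpy as np

def monomial_deriv(i, t, d):
    """Value at t of the d-th derivative of t**i, for d in {0, 1}."""
    if d == 0:
        return float(t) ** i
    return i * float(t) ** (i - 1) if i >= 1 else 0.0

def bicubic_matrix():
    """Build A row by row. Rows follow the vector x above: f, f_x, f_y,
    f_xy at the corners (0,0), (1,0), (0,1), (1,1). Columns follow
    alpha's ordering a00, a10, a20, a30, a01, ... i.e. column 4*j + i
    holds the coefficient a_ij of x**i * y**j."""
    corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
    ops = [(0, 0), (1, 0), (0, 1), (1, 1)]  # (x-deriv order, y-deriv order)
    A = np.zeros((16, 16))
    constraints = [(op, c) for op in ops for c in corners]
    for r, ((dx, dy), (x, y)) in enumerate(constraints):
        for j in range(4):
            for i in range(4):
                A[r, 4 * j + i] = (monomial_deriv(i, x, dx)
                                   * monomial_deriv(j, y, dy))
    return A
```

Given the vector `x_vec` of values and derivatives, `np.linalg.solve(bicubic_matrix(), x_vec)` recovers the coefficients without forming the inverse explicitly.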

## Bicubic convolution algorithm

Bicubic spline interpolation requires the solution of the linear system described above for each grid cell. An interpolator with similar properties can be obtained by applying a convolution with the following kernel in both dimensions:

${\displaystyle W(x)={\begin{cases}(a+2)|x|^{3}-(a+3)|x|^{2}+1&{\text{for }}|x|\leq 1\\a|x|^{3}-5a|x|^{2}+8a|x|-4a&{\text{for }}1<|x|<2\\0&{\text{otherwise}}\end{cases}}}$

where ${\displaystyle a}$ is usually set to ${\displaystyle -0.5}$ or ${\displaystyle -0.75}$. Note that ${\displaystyle W(0)=1}$ and ${\displaystyle W(n)=0}$ for all nonzero integers ${\displaystyle n}$.
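The piecewise kernel translates directly into code; a small sketch with ${\displaystyle a}$ as a parameter:

```python
def W(x, a=-0.5):
    """Cubic convolution kernel; a = -0.5 is the common choice."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0
```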

This approach was proposed by Keys, who showed that ${\displaystyle a=-0.5}$ (which corresponds to the cubic Hermite spline) produces third-order convergence with respect to the original function.[1]

Using matrix notation for the common case ${\displaystyle a=-0.5}$, the equation can be expressed more compactly:

${\displaystyle p(t)={\tfrac {1}{2}}{\begin{bmatrix}1&t&t^{2}&t^{3}\\\end{bmatrix}}{\begin{bmatrix}0&2&0&0\\-1&0&1&0\\2&-5&4&-1\\-1&3&-3&1\\\end{bmatrix}}{\begin{bmatrix}a_{-1}\\a_{0}\\a_{1}\\a_{2}\\\end{bmatrix}}}$

for ${\displaystyle t}$ between 0 and 1, in one dimension. For two dimensions, the interpolation is applied first in the ${\displaystyle x}$-direction and then in ${\displaystyle y}$:

${\displaystyle \textstyle b_{-1}=p(t_{x},a_{(-1,-1)},a_{(0,-1)},a_{(1,-1)},a_{(2,-1)})}$
${\displaystyle \textstyle b_{0}=p(t_{x},a_{(-1,0)},a_{(0,0)},a_{(1,0)},a_{(2,0)})}$
${\displaystyle \textstyle b_{1}=p(t_{x},a_{(-1,1)},a_{(0,1)},a_{(1,1)},a_{(2,1)})}$
${\displaystyle \textstyle b_{2}=p(t_{x},a_{(-1,2)},a_{(0,2)},a_{(1,2)},a_{(2,2)})}$
${\displaystyle \textstyle p(x,y)=p(t_{y},b_{-1},b_{0},b_{1},b_{2})}$
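The two passes can be combined for ${\displaystyle a=-0.5}$; the polynomial below is the matrix product above written out, and the function names and grid layout are illustrative assumptions:

```python
def cubic_interp(t, fm1, f0, f1, f2):
    """p(t) for a = -0.5, expanded from the 4x4 matrix form above."""
    return 0.5 * (2 * f0
                  + t * (f1 - fm1)
                  + t**2 * (2 * fm1 - 5 * f0 + 4 * f1 - f2)
                  + t**3 * (3 * f0 - fm1 - 3 * f1 + f2))

def bicubic_interp(grid, tx, ty):
    """grid[j][i] = a_(i-1, j-1): a 4x4 neighborhood of samples, with the
    target point lying between grid[1][1] and grid[2][2]. Interpolate each
    row in x to get b_{-1}..b_2, then interpolate those in y."""
    b = [cubic_interp(tx, *grid[j]) for j in range(4)]
    return cubic_interp(ty, *b)
```

Because the kernel interpolates exactly (it is 1 at 0 and 0 at other integers), `bicubic_interp(grid, 0, 0)` simply returns the sample `grid[1][1]`.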

## Use in computer graphics

*Figure: Bicubic interpolation causes overshoot, which increases acutance.*

The bicubic algorithm is frequently used for scaling images and video for display (see bitmap resampling). It preserves fine detail better than the common bilinear algorithm.

However, due to the negative lobes of the kernel, it causes overshoot (haloing). This can cause clipping and is an artifact (see also ringing artifacts), but it increases acutance (apparent sharpness), which can be desirable.