# Talk:Sherman–Morrison formula

## Invertibility?

Is there an easy condition that asserts that the inverse exists? Haseldon 08:59, 18 February 2007 (UTC)

I'm pretty sure it exists if and only if ${\displaystyle 1+vA^{-1}u\neq 0}$ but unfortunately I don't have a proof here. Ocolon 09:50, 18 February 2007 (UTC)
... and ${\displaystyle A^{-1}}$ must also exist for the claim to make sense. As for the proof, I think it suffices to note that the proof given in the article shows that the inverse exists as long as all the expressions are defined; this is the case assuming that ${\displaystyle 1+vA^{-1}u\neq 0}$. Haseldon 12:26, 18 February 2007 (UTC)
I added these assumptions to the text. Haseldon 12:33, 18 February 2007 (UTC)
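
Not from the discussion above, but the claimed condition can be sanity-checked numerically. A small NumPy sketch with hand-picked example values (my own choices, with the first v deliberately chosen so that ${\displaystyle 1+vA^{-1}u=0}$):

```python
# Check numerically that A + u v^T is singular exactly when
# 1 + v^T A^{-1} u == 0.  All values below are arbitrary examples.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
Ainv = np.linalg.inv(A)
u = np.array([1.0, 0.0])

# v chosen so that 1 + v^T A^{-1} u = 0: here v^T A^{-1} u = -1.
v_bad = np.array([-2.0, 0.0])
print(1.0 + v_bad @ Ainv @ u)                 # 0.0
print(np.linalg.det(A + np.outer(u, v_bad)))  # 0.0 -> singular

# A generic v with 1 + v^T A^{-1} u != 0 keeps A + u v^T invertible.
v_ok = np.array([1.0, 1.0])
print(1.0 + v_ok @ Ainv @ u)                  # 1.5
print(np.linalg.det(A + np.outer(u, v_ok)))   # 9.0 -> invertible
```

Note that the determinants agree with the identity ${\displaystyle \det(A+uv^{T})=\det(A)(1+v^{T}A^{-1}u)}$, which is one way to see the "if and only if" claim.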

## Name?

Where does the name come from? --RainerBlome 23:54, 27 August 2005 (UTC)

Good question. The article should begin by saying
In mathematics, in particular linear algebra, the Sherman-Morrison formula [PFTV92], named after XXXXX and XXXXX, computes the inverse of ...
for suitable values of XXXXX. Michael Hardy 22:43, 28 August 2005 (UTC)
Fixed, the relevant references are now in the article. --RainerBlome 21:49, 17 September 2007 (UTC)

In a publication that I used, the identity is also called the "Bartlett identity" (literally "Bartletts Identität" in the German original). Can anyone verify this? --RainerBlome 10:26, 17 September 2007 (UTC)

## Before and after my recent edit

### BEFORE:

${\displaystyle {\begin{matrix}XY&=&(A&+&uv)(A^{-1}&-&{A^{-1}uvA^{-1} \over 1+vA^{-1}u})\\&=&AA^{-1}&+&uvA^{-1}&-&{AA^{-1}uvA^{-1}+uvA^{-1}uvA^{-1} \over 1+vA^{-1}u}\\&=&I&+&uvA^{-1}&-&{uvA^{-1}+uvA^{-1}uvA^{-1} \over 1+vA^{-1}u}\\&=&I&+&uvA^{-1}&-&{(1+vA^{-1}u)uvA^{-1} \over 1+vA^{-1}u}\\&=&I&+&uvA^{-1}&-&uvA^{-1}\\&=&I\end{matrix}}}$

### AFTER:

${\displaystyle XY=(A+uv)\left(A^{-1}-{A^{-1}uvA^{-1} \over 1+vA^{-1}u}\right)}$
${\displaystyle =AA^{-1}+uvA^{-1}-{AA^{-1}uvA^{-1}+uvA^{-1}uvA^{-1} \over 1+vA^{-1}u}}$
${\displaystyle =I+uvA^{-1}-{uvA^{-1}+uvA^{-1}uvA^{-1} \over 1+vA^{-1}u}}$
${\displaystyle =I+uvA^{-1}-{(1+vA^{-1}u)uvA^{-1} \over 1+vA^{-1}u}}$
${\displaystyle =I+uvA^{-1}-uvA^{-1}\,}$
${\displaystyle =I.}$

Although it's nice to have the "="s nicely aligned, various other aspects of the alignment in the \matrix version (i.e. the "BEFORE" version) of this display look bad. In particular, the fractions on the left should not all get centered the way they are. Also, the lines are too close together; it makes the fractions hard to read. Finally, the right and left parentheses are not big enough in some cases. Contrast the following:

${\displaystyle ({1 \over 2})+3}$
${\displaystyle \left({1 \over 2}\right)+3}$

(To see what makes the difference, click on "edit this page" and see what I typed here.) Michael Hardy 22:38, 28 August 2005 (UTC)
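
Independent of the formatting question, the algebra in the display above can be spot-checked numerically. A small NumPy sketch (the matrix and vectors are my own arbitrary choices, with v treated as a row vector as in the derivation):

```python
# Verify numerically that
#   Y = A^{-1} - (A^{-1} u v A^{-1}) / (1 + v A^{-1} u)
# satisfies (A + u v) Y = I, matching the derivation above.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])   # arbitrary invertible example
u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 0.0, 1.0])    # plays the role of the row vector v

Ainv = np.linalg.inv(A)
denom = 1.0 + v @ Ainv @ u       # = 2.6 here, so the formula applies
Y = Ainv - np.outer(Ainv @ u, v @ Ainv) / denom
X = A + np.outer(u, v)

print(np.allclose(X @ Y, np.eye(3)))   # True
```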

## Numerical Complexity

Assuming that ${\displaystyle A^{-1}}$ is an ${\displaystyle n\times n}$ matrix, for arbitrary update vectors u and v, the numerical complexity is

${\displaystyle (n+1)V(n)+nV(n)+nM+n^{2}M+n^{2}S+1A+1D}$,

where ${\displaystyle V(n)}$ denotes an inner product of two length-${\displaystyle n}$ vectors, ${\displaystyle M}$ a scalar multiplication, ${\displaystyle S}$ a subtraction, ${\displaystyle A}$ an addition, and ${\displaystyle D}$ a division.

For a single-nonzero-element vector u and an arbitrary vector v (only one row of ${\displaystyle A}$ is updated), the numerical complexity is

${\displaystyle (n+1)V(n)+nM+n^{2}M+n^{2}S+A+D}$,

For a single-nonzero-element vector v and an arbitrary vector u (only one column of ${\displaystyle A}$ is updated), the numerical complexity is

${\displaystyle (n+1)V(n)+nM+n^{2}M+n^{2}S+A+D}$,

For single-nonzero-element vectors v and u (only one element of ${\displaystyle A}$ is updated), the numerical complexity is

${\displaystyle n^{2}M+nM+n^{2}S+A+D}$,

In each case, O(n^2) multiplications are needed. However, the single-row-update and single-column-update cases require twice as many multiplications as the single-element-update case, and the general case requires thrice as many. This agrees with the note on relative algorithm speeds given at [1]. Can someone provide a better reference for this? —Preceding unsigned comment added by RainerBlome (talkcontribs) 11:39, 17 September 2007 (UTC)
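
To illustrate the ${\displaystyle O(n^{2})}$ count, here is a small NumPy sketch of the update itself (my own illustration, not from the reference above; the per-line comments give rough operation counts for the general case):

```python
# A Sherman–Morrison update costs O(n^2) scalar operations, versus
# O(n^3) for recomputing the inverse from scratch.
import numpy as np

def sm_update(Ainv, u, v):
    """Return (A + u v^T)^{-1} given Ainv = A^{-1}, in O(n^2) operations."""
    Au = Ainv @ u             # n inner products of length n
    vA = v @ Ainv             # n inner products of length n
    denom = 1.0 + v @ Au      # one more inner product, 1 addition
    return Ainv - np.outer(Au, vA) / denom   # n^2 mult., n^2 subtractions, 1 division

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
u = np.array([1.0, 0.0])      # single nonzero entry: only one row of A changes
v = np.array([0.5, 0.5])

updated = sm_update(np.linalg.inv(A), u, v)
print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))   # True
```

In the single-nonzero-element cases, the corresponding matrix-vector product collapses to scaling one row or column, which is where the savings in the counts above come from.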

## Format of references

Jmath666, why did you remove the publisher from the refs? --RainerBlome 19:56, 30 September 2007 (UTC)

The standard practice in scientific literature is to list the journal title but not the journal publisher. The publisher is usually listed for books, but not for journals. A quick look at some math book or paper will confirm this. Jmath666 20:14, 30 September 2007 (UTC)

Furthermore, the MR links in this article look like dead ends to me, and at most serve to confirm that the citations themselves are correct. Other online resources, for example JSTOR, are better suited to confirming citations. What do you think? --RainerBlome 10:09, 3 October 2007 (UTC)

Should it be noted that ${\displaystyle uv^{T}}$ must be an ${\displaystyle n\times n}$ matrix consistent with ${\displaystyle A^{-1}}$, or are we to assume that qualification is present? Basically, I am asking whether the sizes of u and v relative to A must be given. —Preceding unsigned comment added by I5kfun (talkcontribs) 04:48, 5 February 2009 (UTC)
Does ${\displaystyle v^{T}(A^{-1}u)=(v^{T}A^{-1})u}$? As operators the first one is the only sensible one, but since we are dealing with matrices, the second makes sense also. If they are not equal, which is correct? Watson Ladd (talk) 14:43, 24 December 2009 (UTC)
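
For what it's worth, the two parenthesizations agree, since matrix multiplication is associative; both equal the scalar ${\displaystyle v^{T}A^{-1}u}$. A quick numerical check (arbitrary example values of my own):

```python
# Check that v^T (A^{-1} u) == (v^T A^{-1}) u; associativity of the
# matrix product makes both equal the scalar v^T A^{-1} u.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])     # arbitrary invertible example
u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])

Ainv = np.linalg.inv(A)
left = v @ (Ainv @ u)     # v^T applied to the vector A^{-1} u
right = (v @ Ainv) @ u    # the row vector v^T A^{-1} applied to u
print(np.isclose(left, right))   # True
```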