{{no footnotes|date=September 2013|talk=yes}}
 
In [[mathematics]], more specifically in [[numerical linear algebra]], the '''biconjugate gradient method''' is an [[algorithm]] to solve [[system of linear equations|systems of linear equations]]
 
:<math>A x= b.\,</math>
 
Unlike the [[conjugate gradient method]], this algorithm does not require the [[matrix (mathematics)|matrix]] <math>A</math> to be [[self-adjoint]], but instead one needs to perform multiplications by the [[conjugate transpose]] {{math|<var>A</var><sup>*</sup>}}.
 
==The algorithm==
 
# Choose initial guess <math>x_0\,</math>, two other vectors <math>x_0^*</math> and <math>b^*\,</math> and a [[preconditioner]] <math>M\,</math>
# <math>r_0 \leftarrow b-A\, x_0\,</math>
# <math>r_0^* \leftarrow b^*-x_0^*\, A </math>
# <math>p_0 \leftarrow M^{-1} r_0\,</math>
# <math>p_0^* \leftarrow r_0^*M^{-1}\,</math>
# for <math>k=0, 1, \ldots</math> do
## <math>\alpha_k \leftarrow {r_k^* M^{-1} r_k \over p_k^* A p_k}\,</math>
## <math>x_{k+1} \leftarrow x_k + \alpha_k \cdot p_k\,</math>
## <math>x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k}\cdot p_k^*\,</math>
## <math>r_{k+1} \leftarrow r_k - \alpha_k \cdot A p_k\,</math>
## <math>r_{k+1}^* \leftarrow r_k^*- \overline{\alpha_k} \cdot p_k^*\, A </math>
## <math>\beta_k \leftarrow {r_{k+1}^* M^{-1} r_{k+1} \over r_k^* M^{-1} r_k}\,</math>
## <math>p_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k \cdot p_k\,</math>
## <math>p_{k+1}^* \leftarrow r_{k+1}^*M^{-1}  + \overline{\beta_k}\cdot p_k^*\,</math>
 
In the above formulation, the computed <math>r_k\,</math> and <math>r_k^*</math> satisfy
 
:<math>r_k = b - A x_k,\,</math>
:<math>r_k^* = b^* - x_k^*\, A </math>
 
and thus are the respective [[Residual (numerical analysis)|residual]]s corresponding to <math>x_k\,</math> and <math>x_k^*</math>, as approximate solutions to the systems
 
:<math>A x = b,\,</math>
:<math>x^*\, A = b^*\,;</math>
 
Here <math>x^*</math> is a row vector, the [[Hermitian adjoint|adjoint]] (conjugate transpose) of the column vector <math>x</math>, and <math>\overline{\alpha}</math> is the [[complex conjugate]] of the scalar <math>\alpha</math>.
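
For concreteness, the iteration above can be written out directly. The following is a minimal NumPy sketch rather than a robust implementation: the function name <code>bicg</code>, the stopping rule, and the use of an explicit inverse <code>M_inv</code> of the preconditioner (in practice one would solve with <math>M</math> instead) are illustrative choices, not part of the algorithm's specification.

<syntaxhighlight lang="python">
import numpy as np

def bicg(A, b, b_star, x0, x0_star, M_inv, tol=1e-10, max_iter=1000):
    """Minimal sketch of the preconditioned biconjugate gradient method.

    Starred arguments hold the row-vector data of the dual system
    x* A = b*; 1-D NumPy arrays act as rows when they multiply a
    matrix from the left, so no explicit transposes are needed.
    """
    x = x0.astype(complex)
    x_star = x0_star.astype(complex)
    r = b - A @ x                       # r_0 = b - A x_0
    r_star = b_star - x_star @ A        # r_0* = b* - x_0* A
    p = M_inv @ r                       # p_0 = M^{-1} r_0
    p_star = r_star @ M_inv             # p_0* = r_0* M^{-1}

    for _ in range(max_iter):
        rho = r_star @ M_inv @ r               # r_k* M^{-1} r_k
        alpha = rho / (p_star @ A @ p)         # alpha_k
        x = x + alpha * p
        x_star = x_star + np.conj(alpha) * p_star
        r = r - alpha * (A @ p)
        r_star = r_star - np.conj(alpha) * (p_star @ A)
        if np.linalg.norm(r) < tol:            # illustrative stopping rule
            break
        beta = (r_star @ M_inv @ r) / rho      # beta_k
        p = M_inv @ r + beta * p
        p_star = r_star @ M_inv + np.conj(beta) * p_star
    return x, x_star
</syntaxhighlight>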
 
=== Unpreconditioned version of the algorithm ===
# Choose initial guess <math>x_0\,</math>, two other vectors <math>\hat{x}_0</math> and <math>\hat{b}\,</math>
# <math>r_0 \leftarrow b-A\, x_0\,</math>
# <math>\hat{r}_0 \leftarrow \hat{b} - \hat{x}_0 A </math>
# <math>p_0 \leftarrow r_0\,</math>
# <math>\hat{p}_0 \leftarrow \hat{r}_0\,</math>
# for <math>k=0, 1, \ldots</math> do
## <math>\alpha_k \leftarrow {\hat{r}_k r_k \over \hat{p}_k A p_k}\,</math>
## <math>x_{k+1} \leftarrow x_k + \alpha_k \cdot p_k\,</math>
## <math>\hat{x}_{k+1} \leftarrow \hat{x}_k + \alpha_k \cdot \hat{p}_k\,</math>
## <math>r_{k+1} \leftarrow r_k - \alpha_k \cdot A p_k\,</math>
## <math>\hat{r}_{k+1} \leftarrow \hat{r}_k- \alpha_k \cdot \hat{p}_k A </math>
## <math>\beta_k \leftarrow {\hat{r}_{k+1} r_{k+1} \over \hat{r}_k r_k}\,</math>
## <math>p_{k+1} \leftarrow r_{k+1} + \beta_k \cdot p_k\,</math>
## <math>\hat{p}_{k+1} \leftarrow \hat{r}_{k+1}  + \beta_k \cdot \hat{p}_k\,</math>
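
With the same row-vector convention (hatted quantities act as rows, so a product such as <math>\hat{p}_k A</math> becomes <code>p_hat @ A</code> on 1-D arrays), the unpreconditioned iteration reduces to the short sketch below. The choice <math>\hat{r}_0 = r_0</math>, the omission of the shadow iterate <math>\hat{x}_k</math> (it is not needed to obtain <math>x</math>), and the small diagonally dominant test matrix are illustrative assumptions; the method only requires that the initial shadow residual not be orthogonal to <math>r_0</math>.

<syntaxhighlight lang="python">
import numpy as np

def bicg_unpreconditioned(A, b, x0, tol=1e-10, max_iter=1000):
    """Minimal sketch of unpreconditioned BiCG for a real matrix A."""
    x = x0.copy()
    r = b - A @ x
    r_hat = r.copy()                   # illustrative choice: shadow residual = r_0
    p, p_hat = r.copy(), r_hat.copy()
    for _ in range(max_iter):
        rho = r_hat @ r                          # \hat{r}_k r_k
        alpha = rho / (p_hat @ A @ p)            # alpha_k
        x = x + alpha * p
        r = r - alpha * (A @ p)
        r_hat = r_hat - alpha * (p_hat @ A)      # row form of the shadow update
        if np.linalg.norm(r) < tol:
            break
        beta = (r_hat @ r) / rho                 # beta_k
        p = r + beta * p
        p_hat = r_hat + beta * p_hat
    return x

# Quick check against the primal system on arbitrary test data:
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
b = rng.standard_normal(4)
assert np.allclose(A @ bicg_unpreconditioned(A, b, np.zeros(4)), b)
</syntaxhighlight>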
 
==Discussion==
The biconjugate gradient method is [[numerical stability|numerically unstable]]{{Citation needed|date=September 2009}} (compare to the [[biconjugate gradient stabilized method]]), but very important from a theoretical point of view. Define the iteration steps by
 
:<math>x_k:=x_j+ P_k A^{-1}\left(b - A x_j \right),</math>
:<math>x_k^*:= x_j^*+\left(b^*- x_j^* A \right) P_k A^{-1},</math>
 
where <math>j<k</math>, using the related [[projection (linear algebra)|projection]]
 
:<math>P_k:= \mathbf{u}_k \left(\mathbf{v}_k^* A \mathbf{u}_k \right)^{-1} \mathbf{v}_k^* A,</math>
 
with
 
:<math>\mathbf{u}_k=\left[u_0, u_1, \dots, u_{k-1} \right],</math>
:<math>\mathbf{v}_k=\left[v_0, v_1, \dots, v_{k-1} \right].</math>
 
These related projections may be iterated themselves as
 
:<math>P_{k+1}= P_k+ \left( 1-P_k\right) u_k \otimes {v_k^* A\left(1-P_k \right) \over v_k^* A\left(1-P_k \right) u_k}.</math>
 
A relation to [[Quasi-Newton method]]s is given by <math>P_k= A_k^{-1} A</math> and <math>x_{k+1}= x_k- A_{k+1}^{-1}\left(A x_k -b \right)</math>, where
:<math>A_{k+1}^{-1}= A_k^{-1}+ \left( 1-A_k^{-1}A\right) u_k \otimes {v_k^* \left(1-A A_k^{-1} \right) \over v_k^* A\left(1-A_k^{-1}A \right) u_k}.</math>
 
The new directions
 
:<math>p_k = \left(1-P_k \right) u_k,</math>
:<math>p_k^* = v_k^* A \left(1- P_k \right) A^{-1}</math>
 
are then orthogonal to the residuals:
 
:<math>v_i^* r_k= p_i^* r_k=0,</math>
:<math>r_k^* u_j = r_k^* p_j= 0,</math>
 
which themselves satisfy
 
:<math>r_k= A \left( 1- P_k \right) A^{-1} r_j,</math>
:<math>r_k^*= r_j^* \left( 1- P_k \right)</math>
 
where <math>i,j<k</math>.
 
The biconjugate gradient method now makes a special choice and uses the setting
:<math>u_k = M^{-1} r_k,\,</math>
:<math>v_k^* = r_k^* \, M^{-1}.\,</math>
 
With this particular choice, explicit evaluations of <math>P_k</math> and {{math|<var>A</var><sup>&minus;1</sup>}} are avoided, and the algorithm takes the form stated above.
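
The projection identities above are straightforward to test numerically. The sketch below builds <math>P_k</math> explicitly from random blocks <math>\mathbf{u}_k</math>, <math>\mathbf{v}_k</math> (illustrative test data; real case, so <math>{}^*</math> is an ordinary transpose), confirms that it is a projection, and checks the rank-one update formula.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
A = rng.standard_normal((n, n))
U = rng.standard_normal((n, k))      # columns u_0, ..., u_{k-1}
V = rng.standard_normal((n, k))      # columns v_0, ..., v_{k-1}

# P_k = u_k (v_k* A u_k)^{-1} v_k* A
P = U @ np.linalg.solve(V.T @ A @ U, V.T @ A)
assert np.allclose(P @ P, P)         # P_k is a projection

# The rank-one update with one extra pair (u_k, v_k) ...
u = rng.standard_normal(n)
v = rng.standard_normal(n)
row = v @ A @ (np.eye(n) - P)        # v_k* A (1 - P_k)
P_next = P + np.outer((np.eye(n) - P) @ u, row) / (row @ u)

# ... agrees with rebuilding the projection from the enlarged blocks.
U2 = np.column_stack([U, u])
V2 = np.column_stack([V, v])
assert np.allclose(P_next, U2 @ np.linalg.solve(V2.T @ A @ U2, V2.T @ A))
</syntaxhighlight>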
 
==Properties==
 
* If <math>A= A^*\,</math> is [[Conjugate transpose|self-adjoint]], <math>x_0^*= x_0</math> and <math>b^*=b</math>, then <math>r_k= r_k^*</math>, <math>p_k= p_k^*</math>, and the [[conjugate gradient method]] produces the same sequence <math>x_k= x_k^*</math> at half the computational cost.
 
* The sequences produced by the algorithm are [[Biorthogonal system|biorthogonal]], i.e., <math>p_i^*Ap_j=r_i^*M^{-1}r_j=0</math> for <math>i \neq j</math>; a numerical check of this property is sketched after the list.
 
* If <math>P_{j'}\,</math> is a polynomial with <math>\mathrm{deg}\left(P_{j'}\right)+j<k</math>, then <math>r_k^*P_{j'}\left(M^{-1}A\right)u_j=0</math>. The algorithm thus produces projections onto the [[Krylov subspace]].
 
* If <math>P_{i'}\,</math> is a polynomial with <math>i+\mathrm{deg}\left(P_{i'}\right)<k</math>, then <math>v_i^*P_{i'}\left(AM^{-1}\right)r_k=0</math>.
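
The biorthogonality property can be checked by storing all iterates of the unpreconditioned recurrences (<math>M=I</math>, real arithmetic) and testing the pairwise products; the small diagonally dominant test system below is an illustrative assumption.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

r = b.copy(); r_hat = b.copy()       # x_0 = 0 and shadow residual \hat{r}_0 = r_0
p, p_hat = r.copy(), r_hat.copy()
ps, ps_hat, rs, rs_hat = [p], [p_hat], [r], [r_hat]
for _ in range(n - 1):
    rho = r_hat @ r
    alpha = rho / (p_hat @ A @ p)
    r = r - alpha * (A @ p)
    r_hat = r_hat - alpha * (p_hat @ A)
    beta = (r_hat @ r) / rho
    p = r + beta * p
    p_hat = r_hat + beta * p_hat
    ps.append(p); ps_hat.append(p_hat); rs.append(r); rs_hat.append(r_hat)

# p_i* A p_j = 0 and r_i* M^{-1} r_j = 0 for i != j (here M = I)
for i in range(n):
    for j in range(n):
        if i != j:
            assert abs(ps_hat[i] @ A @ ps[j]) < 1e-8
            assert abs(rs_hat[i] @ rs[j]) < 1e-8
</syntaxhighlight>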
 
==See also==
* [[Biconjugate gradient stabilized method]]
* [[Conjugate gradient method]]
 
==References==
* {{cite journal|first=R.|last=Fletcher|year=1976|title=Conjugate gradient methods for indefinite systems|journal=Numerical Analysis|volume=506|series=Lecture Notes in Mathematics|publisher=Springer Berlin / Heidelberg|issn=1617-9692|isbn=978-3-540-07610-0|pages=73&ndash;89|url=http://www.springerlink.com/content/974t1l33m84217um/|doi=10.1007/BFb0080109|editor1-last=Watson|editor1-first=G. Alistair}}
* {{Cite book |last1=Press|first1=WH|last2=Teukolsky|first2=SA|last3=Vetterling|first3=WT|last4=Flannery|first4=BP|year=2007|title=Numerical Recipes: The Art of Scientific Computing|edition=3rd|publisher=Cambridge University Press|publication-place=New York|isbn=978-0-521-88068-8|chapter=Section 2.7.6|chapter-url=http://apps.nrbook.com/empanel/index.html?pg=87}}
 
{{Numerical linear algebra}}
 
[[Category:Numerical linear algebra]]
[[Category:Gradient methods]]
