The '''Gauss–Newton algorithm''' is a method used to solve [[non-linear least squares]] problems. It can be seen as a modification of [[newton's method in optimization|Newton's method]] for finding a [[maxima and minima|minimum]] of a [[function (mathematics)|function]]. Unlike Newton's method, the Gauss–Newton algorithm can ''only'' be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.  

Non-linear least squares problems arise for instance in [[non-linear regression]], where parameters in a model are sought such that the model is in good agreement with available observations.

The method is named after the mathematicians [[Carl Friedrich Gauss]] and [[Isaac Newton]].

== Description ==
Given ''m'' functions '''r''' = (''r''<sub>1</sub>, …, ''r''<sub>''m''</sub>) of ''n'' variables '''''&beta;'''''&nbsp;=&nbsp;(''&beta;''<sub>1</sub>, …, ''&beta;''<sub>''n''</sub>), with ''m''&nbsp;≥&nbsp;''n'', the Gauss–Newton algorithm [[iterative method|iteratively]] finds the minimum of the sum of squares<ref name=ab>Björck (1996)</ref>

:<math> S(\boldsymbol \beta)= \sum_{i=1}^m r_i^2(\boldsymbol \beta).</math>

Starting with an initial guess <math>\boldsymbol \beta^{(0)}</math> for the minimum, the method proceeds by the iterations

:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{ J_r} ^\top \mathbf{r}(\boldsymbol \beta^{(s)}) </math>

where
:<math> \left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i (\boldsymbol \beta^{(s)})}{\partial \beta_j}</math>


is the (''i'', ''j'') entry of the [[Jacobian matrix]] <math>\mathbf{J_r}</math> of '''r''' at <math>\boldsymbol \beta^{(s)}</math>, and the symbol <math>^\top</math> denotes the [[matrix transpose]].

If ''m''&nbsp;=&nbsp;''n'', the iteration simplifies to


:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left( \mathbf{J_r} \right)^{-1} \mathbf{r}(\boldsymbol \beta^{(s)}) </math>

which is a direct generalization of [[Newton's method]] in one dimension.
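
For readers who prefer code to formulas, the iteration can be written down in a few lines of Python with NumPy. This is only an illustrative sketch of the update given above (the function names <code>residuals</code> and <code>jacobian</code>, the fixed iteration count and the tolerance are choices made here, not part of the algorithm); it solves the linear system for the increment rather than forming the matrix inverse explicitly.

<syntaxhighlight lang="python">
import numpy as np

def gauss_newton(residuals, jacobian, beta0, n_iter=20, tol=1e-10):
    """Sketch of the Gauss-Newton iteration beta <- beta - (J^T J)^(-1) J^T r.

    residuals(beta) must return the m-vector r(beta); jacobian(beta) must
    return the m-by-n Jacobian J_r evaluated at beta.
    """
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = residuals(beta)                       # m-vector of residuals
        J = jacobian(beta)                        # m-by-n Jacobian matrix
        # Increment from the normal equations: (J^T J) delta = -J^T r.
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        beta = beta + delta
        if np.linalg.norm(delta) < tol:           # stop once the step is negligible
            break
    return beta
</syntaxhighlight>

In the case <math>m=n</math> discussed above, the linear solve reduces to <math>\mathbf{J_r}\Delta = -\mathbf{r}</math>, so the same sketch performs the simplified iteration as well.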

In data fitting, where the goal is to find the parameters '''''&beta;''''' such that  a given model function ''y''&nbsp;=&nbsp;''f''(''x'', '''''&beta;''''') best fits some data points (''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>), the functions ''r''<sub>''i''</sub> are the [[residual (statistics)|residuals]]

: <math>r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).</math>
 
Then, the Gauss–Newton method can be expressed in terms of the Jacobian <math>\mathbf{J_f}</math> of the function ''f''. Since <math>\mathbf{J_r} = -\mathbf{J_f}</math>, the minus sign in the update turns into a plus:
 
:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \left(\mathbf{J_f}^\top \mathbf{J_f} \right)^{-1} \mathbf{J_f}^\top \mathbf{r}(\boldsymbol \beta^{(s)}). </math>
 
==Notes==
 
The assumption ''m''&nbsp;≥&nbsp;''n'' in the algorithm statement is necessary, as otherwise the matrix '''J<sub>r</sub>'''<sup>T</sup>'''J<sub>r</sub>''' is not invertible and the normal equations cannot be solved (at least uniquely).
 
The Gauss–Newton algorithm can be derived by [[linear approximation|linearly approximating]] the vector of functions ''r''<sub>''i''</sub>. Using [[Taylor's theorem]], we can write at every iteration:
 
: <math>\mathbf{r}(\boldsymbol \beta)\approx \mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta</math>
 
with <math>\Delta=\boldsymbol \beta-\boldsymbol \beta^s.</math> Finding the &Delta; that minimizes the sum of squares of the right-hand side, i.e.,
: <math>\min_{\Delta}\left\|\mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta\right\|_2^2,</math>
is a [[linear least squares (mathematics)|linear least squares]] problem, which can be solved explicitly, yielding the normal equations in the algorithm.
 
The normal equations are ''n'' simultaneous linear equations in the unknown increments &Delta;. They may be solved in one step, using [[Cholesky decomposition]], or, better, the [[QR factorization]] of '''J'''<sub>'''r'''</sub>. For large systems, an [[iterative method]], such as the [[conjugate gradient]] method, may be more efficient. If there is a linear dependence between columns of '''J'''<sub>'''r'''</sub>, the iterations will fail, as '''J<sub>r</sub>'''<sup>T</sup>'''J<sub>r</sub>''' becomes singular.
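
As an illustration of the two direct approaches (a sketch only; the function name is made up here and rank-deficiency handling is omitted), the increment can be computed either from the explicitly formed normal equations via Cholesky, or from a QR factorization of <math>\mathbf{J_r}</math>, which avoids squaring the condition number:

<syntaxhighlight lang="python">
import numpy as np

def gauss_newton_increment(J, r, method="qr"):
    """Compute the increment delta solving J^T J delta = -J^T r (a sketch)."""
    if method == "cholesky":
        A = J.T @ J                               # form the normal-equations matrix
        L = np.linalg.cholesky(A)                 # A = L L^T
        y = np.linalg.solve(L, -J.T @ r)
        return np.linalg.solve(L.T, y)
    # QR route: J = Q R (reduced), so the normal equations reduce to R delta = -Q^T r.
    Q, R = np.linalg.qr(J)
    return np.linalg.solve(R, -Q.T @ r)
</syntaxhighlight>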
 
==Example==
[[File:Gauss Newton illustration.png|thumb|right|280px|Calculated curve obtained with <math>\hat\beta_1=0.362</math> and <math>\hat\beta_2=0.556</math> (in blue) versus the observed data (in red).]]
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and the model's predictions.
 
In a biology experiment studying the relation between substrate concentration [''S''] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
:{|class="wikitable" style="text-align: center;"
|''i'' || 1 || 2 || 3 || 4 || 5 || 6 || 7
|-
| [S] || 0.038 || 0.194 || 0.425 || 0.626 || 1.253 ||  2.500 || 3.740
|-
| rate || 0.050 || 0.127 || 0.094  ||  0.2122 ||  0.2729 ||  0.2665 || 0.3317
|}
 
It is desired to find a curve (model function) of the form
 
:<math>\text{rate}=\frac{V_\text{max}[S]}{K_M+[S]}</math>
 
that best fits the data in the least-squares sense, with the parameters <math>V_\text{max}</math> and <math>K_M</math> to be determined.
 
Denote by <math>x_i</math> and <math>y_i</math> the value of <math>[S]</math> and the rate from the table, <math>i=1, \dots, 7.</math> Let <math>\beta_1=V_\text{max}</math> and <math>\beta_2=K_M.</math> We will find <math>\beta_1</math> and <math>\beta_2</math> such that the sum of squares of the residuals
: <math>r_i = y_i - \frac{\beta_1x_i}{\beta_2+x_i}</math>  &nbsp; (<math>i=1,\dots, 7</math>)
is minimized.
 
The Jacobian <math>\mathbf{J_r}</math> of the vector of residuals <math>r_i</math> with respect to the unknowns <math>\beta_j</math> is a <math>7\times 2</math> matrix with the <math>i</math>-th row having the entries
 
:<math>\frac{\partial r_i}{\partial \beta_1}= -\frac{x_i}{\beta_2+x_i},\  \frac{\partial r_i}{\partial \beta_2}= \frac{\beta_1x_i}{\left(\beta_2+x_i\right)^2}.</math>
 
Starting with the initial estimates <math>\beta_1=0.9</math> and <math>\beta_2=0.2</math>, after five iterations of the Gauss–Newton algorithm the optimal values <math>\hat\beta_1=0.362</math> and <math>\hat\beta_2=0.556</math> are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters versus the observed data.
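
The numbers quoted above can be reproduced with a short NumPy script. This is a bare-bones sketch (no step control or convergence test) that applies the plain Gauss–Newton update with the analytic Jacobian entries given above:

<syntaxhighlight lang="python">
import numpy as np

# Substrate concentration [S] and observed rate from the table above.
x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])
y = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

beta = np.array([0.9, 0.2])                        # initial estimates (beta_1, beta_2)
for _ in range(5):                                 # five Gauss-Newton iterations
    r = y - beta[0] * x / (beta[1] + x)            # residuals r_i
    J = np.column_stack([-x / (beta[1] + x),                      # dr_i / dbeta_1
                         beta[0] * x / (beta[1] + x) ** 2])       # dr_i / dbeta_2
    beta -= np.linalg.solve(J.T @ J, J.T @ r)      # beta <- beta - (J^T J)^(-1) J^T r

r = y - beta[0] * x / (beta[1] + x)
print(beta, np.sum(r ** 2))    # roughly [0.362  0.556] and 0.00784
</syntaxhighlight>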
 
==Convergence properties==
 
It can be shown<ref>Björck (1996)  p260</ref> that the increment &Delta; is a [[descent direction]] for ''S'', and, if the algorithm converges, then the limit is a [[stationary point]] of ''S''. However, convergence is not guaranteed, not even [[local convergence]] as in [[Newton's method in optimization|Newton's method]].
 
The rate of convergence of the Gauss–Newton algorithm can approach [[rate of convergence|quadratic]].<ref>Björck (1996)  p341, 342</ref> The algorithm may converge slowly or not at all  if the initial guess is far from the minimum or  the matrix <math>\mathbf{J_r^T  J_r}</math> is [[ill-conditioned]].  For example, consider the problem with <math>m=2</math> equations and <math>n=1</math> variable, given by
:<math> \begin{align}
r_1(\beta) &= \beta + 1 \\
r_2(\beta) &= \lambda \beta^2 + \beta - 1.
\end{align} </math>
The optimum is at <math>\beta = 0</math>. If <math>\lambda = 0</math> then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1 then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.<ref>Fletcher (1987) p.113</ref>
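
This behaviour is easy to verify numerically. The following sketch (the starting point 0.1 and the eight iterations are arbitrary choices for illustration) applies the Gauss–Newton update to this two-equation problem; for <math>\lambda = 0.5</math> the iterates approach zero with the error roughly halving at each step, while for <math>\lambda = 2</math> they move away from the optimum <math>\beta = 0</math> even though the starting point is close to it.

<syntaxhighlight lang="python">
import numpy as np

def gauss_newton_step(beta, lam):
    """One Gauss-Newton step for r1 = beta + 1, r2 = lam*beta^2 + beta - 1."""
    r = np.array([beta + 1.0, lam * beta ** 2 + beta - 1.0])
    J = np.array([[1.0], [2.0 * lam * beta + 1.0]])       # 2-by-1 Jacobian
    return beta - (J.T @ r).item() / (J.T @ J).item()     # n = 1, so J^T J is a scalar

for lam in (0.5, 2.0):
    beta = 0.1                                            # start close to the optimum beta = 0
    print("lambda =", lam)
    for _ in range(8):
        beta = gauss_newton_step(beta, lam)
        print(beta)
</syntaxhighlight>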
 
== Derivation from Newton's method ==
 
In what follows, the Gauss–Newton algorithm will be derived from [[Newton's method in optimization|Newton's method]] for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm is at most quadratic. 
 
The recurrence relation for Newton's method for minimizing a function ''S'' of parameters, '''''&beta;''''', is
:<math> \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g \, </math>
where '''g''' denotes the [[gradient|gradient vector]] of ''S'' and '''H''' denotes the [[Hessian matrix]] of ''S''.
Since <math> S = \sum_{i=1}^m r_i^2</math>, the gradient is given by
:<math>g_j=2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial \beta_j}.</math>
Elements of the Hessian are calculated by differentiating the gradient elements, <math>g_j</math>, with respect to <math>\beta_k</math>
:<math>H_{jk}=2\sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}+r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).</math>
 
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by
 
:<math>H_{jk}\approx 2\sum_{i=1}^m J_{ij}J_{ik}</math>
 
where <math>J_{ij}=\frac{\partial r_i}{\partial \beta_j}</math> are entries of the Jacobian '''J<sub>r</sub>'''. The gradient and the approximate Hessian can be written in matrix notation as
 
:<math>\mathbf g=2\mathbf{J_r}^\top \mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J_r}^\top \mathbf{J_r}.\,</math>
 
These expressions are substituted into the recurrence relation above to obtain the operational equations
 
:<math> \boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)}+\Delta;\quad \Delta = -\left( \mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{J_r}^\top \mathbf{r}. </math>
 
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation
:<math>\left|r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}\right|</math>
which must hold in order to justify ignoring the second-order derivative terms, may be valid in two cases, for which convergence is to be expected:<ref>Nocedal (1999) {{Page needed|date=December 2010}}</ref>
#The function values <math>r_i</math> are small in magnitude, at least around the minimum.
#The functions are only "mildly" non-linear, so that <math>\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}</math> is relatively small in magnitude.
 
== Improved versions ==
 
With the Gauss–Newton method the sum of squares ''S'' may not decrease at every iteration. However, since &Delta; is a descent direction, unless <math>S(\boldsymbol \beta^s)</math> is a stationary point, it holds that <math>S(\boldsymbol \beta^s+\alpha\Delta) < S(\boldsymbol \beta^s)</math> for all sufficiently small <math>\alpha>0</math>. Thus, if divergence occurs, one solution is to employ a fraction <math>\alpha</math> of the increment vector &Delta; in the updating formula
:<math> \boldsymbol \beta^{s+1} = \boldsymbol \beta^s+\alpha\  \Delta</math>.
In other words, the increment vector is too long, but it points "downhill", so going just part of the way will decrease the objective function ''S''. An optimal value for <math>\alpha</math> can be found by using a [[line search]] algorithm, that is, the magnitude of <math>\alpha</math> is determined by finding the value that minimizes ''S'', usually using a [[line search|direct search method]] in the interval <math>0<\alpha<1</math>.
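
A minimal sketch of such a step-size control (halving <math>\alpha</math> until the sum of squares decreases is just one simple direct-search rule; the function and parameter names are illustrative):

<syntaxhighlight lang="python">
def damped_update(S, beta, delta, alpha=1.0, shrink=0.5, max_halvings=10):
    """Backtrack along the Gauss-Newton increment delta until S decreases.

    S(beta) is the sum of squared residuals evaluated at beta.
    """
    S0 = S(beta)
    for _ in range(max_halvings):
        candidate = beta + alpha * delta
        if S(candidate) < S0:        # accept the first fraction that decreases S
            return candidate
        alpha *= shrink              # otherwise try a shorter step
    return beta                      # no acceptable fraction found; keep the old iterate
</syntaxhighlight>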
 
In cases where the direction of the shift vector is such that the optimal fraction, <math> \alpha </math>, is close to zero, an alternative method for handling divergence is the use of the [[Levenberg–Marquardt algorithm]], also known as the "[[trust region]] method".<ref name="ab"/> The normal equations are modified in such a way that the increment vector is rotated towards the direction of [[steepest descent]],   
:<math>\left(\mathbf{J}^\top\mathbf{J}+\lambda \mathbf{D}\right)\Delta=-\mathbf{J}^\top \mathbf{r}</math>,
where '''D''' is a positive diagonal matrix. Note that when '''D''' is the identity matrix and <math>\lambda\to+\infty</math>, then <math>\lambda\Delta\to -\mathbf{J}^\top \mathbf{r}</math>, therefore the [[Direction (geometry, geography)|direction]] of &Delta; approaches the direction of the negative gradient <math>-\mathbf{J}^\top \mathbf{r}</math>, i.e. the direction of steepest descent.
 
The so-called Marquardt parameter, <math>\lambda</math>, may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time <math>\lambda</math> is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in ''S''; then retain the value from one iteration to the next, but decrease it when possible, until a cut-off value is reached, below which the Marquardt parameter can be set to zero; the minimization of ''S'' then becomes a standard Gauss–Newton minimization.
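
A sketch of one iteration of this strategy with <math>\mathbf{D} = \mathbf{I}</math> (the factor of 10 for growing and shrinking the Marquardt parameter and the cut-off value are common but arbitrary choices made here, and real implementations add further safeguards):

<syntaxhighlight lang="python">
import numpy as np

def marquardt_update(J, r, beta, S, lam, factor=10.0, cutoff=1e-12, max_tries=50):
    """One damped update solving (J^T J + lam*I) delta = -J^T r (a sketch).

    If the step fails to decrease the sum of squares S, lam is increased;
    after a successful step it is decreased, and set to zero once it falls
    below the cut-off, which recovers the plain Gauss-Newton step.
    """
    n = J.shape[1]
    S0 = S(beta)
    for _ in range(max_tries):
        delta = np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ r)
        if S(beta + delta) < S0:                     # step accepted
            lam = lam / factor
            if lam < cutoff:
                lam = 0.0                            # back to a standard Gauss-Newton step
            return beta + delta, lam
        lam = lam * factor if lam > 0.0 else 1e-4    # damp more strongly and retry
    return beta, lam                                 # give up; keep the current iterate
</syntaxhighlight>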
 
== Related algorithms ==
 
In a [[quasi-Newton method]], such as that due to [[Davidon-Fletcher-Powell (DFP) formula|Davidon, Fletcher and Powell]] or Broyden–Fletcher–Goldfarb–Shanno ([[BFGS method]]), an estimate of the full Hessian <math>\frac{\partial^2 S}{\partial \beta_j \partial\beta_k}</math> is built up numerically using first derivatives <math>\frac{\partial r_i}{\partial\beta_j}</math> only, so that after ''n'' refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to non-linear least-squares problems.
 
Another method for solving minimization problems using only first derivatives is [[gradient descent]]. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
 
==Notes==
<references />
 
==References==
*{{cite book | last = Björck | first = A. | title = Numerical methods for least squares problems | publisher = SIAM | location = Philadelphia | year = 1996 | isbn = 0-89871-360-9 }}
*{{cite book | last1 = Fletcher | first1 = Roger | title = Practical methods of optimization | publisher = [[John Wiley & Sons]] | location = New York | edition = 2nd | year = 1987 | isbn = 978-0-471-91547-8 }}
*{{cite book | last1 = Nocedal | first1 = Jorge | last2 = Wright | first2 = Stephen | title = Numerical optimization | publisher = Springer | location = New York | year = 1999 | isbn = 0-387-98793-2 }}
 
{{Least Squares and Regression Analysis}}
 
{{Optimization algorithms}}
 
{{DEFAULTSORT:Gauss-Newton algorithm}}
[[Category:Optimization algorithms and methods]]
[[Category:Least squares]]
[[Category:Statistical algorithms]]
