'''Interior point methods''' (also referred to as '''barrier methods''') are a certain class of [[algorithm]]s to solve linear and nonlinear [[convex optimization]] problems.

[[File:karmarkar.png|thumb|200px|right|Example solution]]

The interior point method was invented by [[John von Neumann]].<ref>{{Cite book|first1=George B.|last1=Dantzig|first2=Mukund N.|last2=Thapa|year=2003|title=Linear Programming 2: Theory and Extensions|publisher=Springer-Verlag}}</ref> Von Neumann suggested a new method of linear programming, using the homogeneous linear system of Gordan (1873), which was later popularized by [[Karmarkar's algorithm]], developed by [[Narendra Karmarkar]] in 1984 for [[linear programming]]. The method consists of a [[self-concordant]] [[barrier function]] used to encode the [[convex set]]. Contrary to the [[Simplex algorithm|simplex method]], it reaches an optimal solution by traversing the interior of the [[feasible region]].

Any convex optimization problem can be transformed into minimizing (or maximizing) a [[linear function]] over a convex set by converting to the [[epigraph form]].<ref>{{cite book|last=Boyd|first=Stephen|last2=Vandenberghe|first2=Lieven|title=Convex Optimization|publisher=[[Cambridge University Press]]|location=Cambridge|year=2004|pages=143|isbn=0-521-83378-7|mr=2061575}}</ref> The idea of encoding the [[candidate solution|feasible set]] using a barrier and designing barrier methods was studied in the early 1960s by, amongst others, Anthony V. Fiacco and Garth P. McCormick. These ideas were mainly developed for general [[nonlinear programming]], but they were later abandoned due to the presence of more competitive methods for this class of problems (e.g. [[sequential quadratic programming]]).

[[Yurii Nesterov]] and [[Arkadi Nemirovski]] came up with a special class of such barriers that can be used to encode any convex set. They guarantee that the number of [[iteration]]s of the algorithm is bounded by a polynomial in the dimension and accuracy of the solution.<ref>{{Cite journal|mr=2115066|doi=10.1090/S0273-0979-04-01040-7|title=The interior-point revolution in optimization: History, recent developments, and lasting consequences|year=2004|last1=Wright|first1=Margaret H.|journal=Bulletin of the American Mathematical Society|volume=42|pages=39}}</ref>

Karmarkar's breakthrough revitalized the study of interior point methods and barrier problems, showing that it was possible to create an algorithm for linear programming characterized by [[polynomial time|polynomial complexity]] and, moreover, that was competitive with the simplex method. [[Leonid Khachiyan|Khachiyan]]'s earlier [[ellipsoid method]] was itself a polynomial-time algorithm; however, it was too slow to be of practical interest.

The class of primal-dual path-following interior point methods is considered the most successful. [[Mehrotra predictor-corrector method|Mehrotra's predictor-corrector algorithm]] provides the basis for most implementations of this class of methods.{{Citation needed|date=February 2011}}

==Primal-dual interior point method for nonlinear optimization==

The primal-dual method's idea is easy to demonstrate for constrained [[nonlinear optimization]]. For simplicity, consider the all-inequality version of a nonlinear optimization problem:

:minimize <math>f(x)</math> subject to <math>c(x) \ge 0,~~ x \in \mathbb{R}^n,~ c(x) \in \mathbb{R}^m~~~~~~(1)</math>

The logarithmic [[barrier function]] associated with (1) is
:<math>B(x,\mu) = f(x) - \mu \sum_{i=1}^m \ln(c_i(x))~~~~~(2)</math>

Here <math>\mu</math> is a small positive scalar, sometimes called the "barrier parameter". As <math>\mu</math> converges to zero, the minimum of <math>B(x,\mu)</math> should converge to a solution of (1).

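For concreteness, here is a minimal numerical sketch of the barrier (2). The toy objective <math>f</math> and constraints <math>c</math> below are illustrative assumptions, not part of the method itself:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical toy problem (an assumption for illustration):
# minimize f(x) = (x - 2)^2 subject to x >= 0 and 3 - x >= 0.
def f(x):
    return (x - 2.0) ** 2

def c(x):
    return np.array([x, 3.0 - x])  # every component must stay positive

def B(x, mu):
    """Logarithmic barrier (2): f(x) - mu * sum_i log(c_i(x))."""
    cx = c(x)
    if np.any(cx <= 0):
        return np.inf              # the barrier is +infinity outside the interior
    return f(x) - mu * np.sum(np.log(cx))

# As mu shrinks, the unconstrained minimizer of B approaches the solution x* = 2.
xs = np.linspace(1e-3, 3.0 - 1e-3, 10001)
for mu in (1.0, 0.1, 0.01):
    print(mu, xs[np.argmin([B(x, mu) for x in xs])])
</syntaxhighlight>
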
The barrier function [[gradient]] is
:<math>g_b = g - \mu \sum_{i=1}^m \frac{1}{c_i(x)} \nabla c_i(x)~~~~~~(3)</math>

where <math>g</math> is the gradient of the original function <math>f(x)</math>, and <math>\nabla c_i(x)</math> is the gradient of <math>c_i(x)</math>.

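The same quantity in code, continuing the hypothetical toy problem above (the helpers <code>grad_f</code> and <code>jac_c</code> are assumptions for this sketch):

<syntaxhighlight lang="python">
import numpy as np

def grad_f(x):
    # Gradient of the illustrative objective f(x) = (x - 2)^2.
    return np.array([2.0 * (x[0] - 2.0)])

def c(x):
    return np.array([x[0], 3.0 - x[0]])

def jac_c(x):
    # Rows are the gradients of c_i; constant here because the constraints are linear.
    return np.array([[1.0], [-1.0]])

def barrier_gradient(x, mu):
    """Equation (3): g_b = g - mu * sum_i (1 / c_i(x)) * grad c_i(x)."""
    g = grad_f(x)
    cx = c(x)
    A = jac_c(x)                     # m x n Jacobian of the constraints
    return g - mu * (A.T @ (1.0 / cx))

print(barrier_gradient(np.array([1.0]), mu=0.1))  # -> [-2.05]
</syntaxhighlight>
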
In addition to the original ("primal") variable <math>x</math>, we introduce a [[Lagrange multiplier]]-inspired [[Lagrange_multiplier#The_strong_Lagrangian_principle:_Lagrange_duality|dual]] variable <math>\lambda \in \mathbb{R}^m</math> and impose the condition
:<math>c_i(x)\lambda_i = \mu, \quad i = 1, \ldots, m~~~~~~~(4)</math>

Equation (4) is sometimes called the "perturbed complementarity" condition, for its resemblance to "complementary slackness" in the [[KKT conditions]].

We try to find those <math>(x_\mu, \lambda_\mu)</math> for which the gradient of the barrier function is zero.

Substituting (4) into (3), so that <math>\mu / c_i(x) = \lambda_i</math>, and setting the gradient to zero, we get
:<math>g - A^T \lambda = 0~~~~~~(5)</math>

where the matrix <math>A</math> is the [[Jacobian matrix and determinant|Jacobian]] of the constraints <math>c(x)</math>.

The intuition behind (5) is that the gradient of <math>f(x)</math> should lie in the subspace spanned by the constraints' gradients. The "perturbed complementarity" condition (4) with small <math>\mu</math> can be understood as requiring that the solution should either lie near the boundary <math>c_i(x) = 0</math>, or that the projection of the gradient <math>g</math> onto the normal of constraint <math>c_i(x)</math> should be almost zero.

Applying [[Newton method|Newton's method]] to (4) and (5), we get an equation for the <math>(x, \lambda)</math> update <math>(p_x, p_\lambda)</math>:

:<math>\begin{pmatrix}
W & -A^T \\
\Lambda A & C
\end{pmatrix}\begin{pmatrix}
p_x \\
p_\lambda
\end{pmatrix} = \begin{pmatrix}
-g + A^T \lambda \\
\mu 1 - C \lambda
\end{pmatrix}</math>

where <math>W</math> is the [[Hessian matrix]] of the Lagrangian <math>f(x) - \lambda^T c(x)</math> (for linear constraints this reduces to the Hessian of <math>f(x)</math>), <math>\Lambda</math> is a [[diagonal matrix]] of <math>\lambda</math>, and <math>C</math> is the diagonal matrix with <math>C_{ii} = c_i(x)</math>.

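A sketch of one such Newton step with dense linear algebra follows; the problem-specific callbacks (<code>grad_f</code>, <code>hess_lagrangian</code>, <code>c</code>, <code>jac_c</code>) are assumed to be supplied by the caller and are not part of the article's formulation:

<syntaxhighlight lang="python">
import numpy as np

def newton_step(x, lam, mu, grad_f, hess_lagrangian, c, jac_c):
    """One primal-dual Newton step for (4)-(5).

    Solves the block system
        [ W     -A^T ] [ p_x   ]   [ -g + A^T lam ]
        [ L A     C  ] [ p_lam ] = [ mu*1 - C lam ]
    with C = diag(c(x)) and L = diag(lam).
    """
    g = grad_f(x)                    # gradient of f at x
    W = hess_lagrangian(x, lam)      # Hessian of f(x) - lam^T c(x)
    A = jac_c(x)                     # m x n constraint Jacobian
    cx = c(x)
    m, n = A.shape

    K = np.block([[W,                -A.T       ],
                  [np.diag(lam) @ A, np.diag(cx)]])
    rhs = np.concatenate([-g + A.T @ lam,
                          mu * np.ones(m) - cx * lam])
    p = np.linalg.solve(K, rhs)      # production codes exploit the block structure instead
    return p[:n], p[n:]              # (p_x, p_lambda)
</syntaxhighlight>
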
Because of (1) and (4), the condition
:<math>\lambda \ge 0</math>

should be enforced at each step. This can be done by choosing an appropriate step length <math>\alpha</math>:
:<math>(x,\lambda) \rightarrow (x + \alpha p_x, \lambda + \alpha p_\lambda).</math>

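One common way to choose such an <math>\alpha</math> is a "fraction to the boundary" rule; the sketch below, including the damping factor 0.995, is a typical but illustrative choice rather than something prescribed by the method:

<syntaxhighlight lang="python">
import numpy as np

def fraction_to_boundary(z, p, tau=0.995):
    """Largest alpha <= 1 such that z + alpha * p stays strictly positive.

    tau < 1 keeps the iterate a small distance inside the boundary;
    0.995 is a typical but illustrative value.
    """
    neg = p < 0                       # only decreasing components can hit zero
    if not np.any(neg):
        return 1.0
    return min(1.0, tau * np.min(-z[neg] / p[neg]))

# Usage sketch: keep both lambda > 0 and (to first order) c(x) > 0.
# alpha = min(fraction_to_boundary(lam, p_lam),
#             fraction_to_boundary(c(x), jac_c(x) @ p_x))
# x, lam = x + alpha * p_x, lam + alpha * p_lam
</syntaxhighlight>
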
==See also==
*[[Augmented Lagrangian method]]
*[[Penalty method]]
*[[Karush–Kuhn–Tucker conditions]]

==References==
{{Reflist}}

== Bibliography ==
* {{cite book|last1=Bonnans|first1=J. Frédéric|last2=Gilbert|first2=J. Charles|last3=Lemaréchal|first3=Claude|authorlink3=Claude Lemaréchal|last4=Sagastizábal|first4=Claudia A.|title=Numerical optimization: Theoretical and practical aspects|url=http://www.springer.com/mathematics/applications/book/978-3-540-35445-1|edition=Second revised ed. of translation of 1997 French|series=Universitext|publisher=Springer-Verlag|location=Berlin|year=2006|pages=xiv+490|isbn=3-540-35445-X|doi=10.1007/978-3-540-35447-5|mr=2265882}}
* {{cite conference|last1=Karmarkar|first1=N.|year=1984|title=A new polynomial-time algorithm for linear programming|book-title=Proceedings of the sixteenth annual ACM symposium on Theory of computing – STOC '84|pages=302|doi=10.1145/800057.808695|url=http://retis.sssup.it/~bini/teaching/optim2010/karmarkar.pdf|isbn=0-89791-133-4}}
* {{Cite journal|last1=Mehrotra|first1=Sanjay|year=1992|title=On the Implementation of a Primal-Dual Interior Point Method|journal=SIAM Journal on Optimization|volume=2|issue=4|pages=575–601|doi=10.1137/0802028}}
* {{cite book|last1=Nocedal|first1=Jorge|last2=Wright|first2=Stephen|year=1999|title=Numerical Optimization|publisher=Springer|location=New York, NY|isbn=0-387-98793-2}}
* {{Cite book|last1=Press|first1=WH|last2=Teukolsky|first2=SA|last3=Vetterling|first3=WT|last4=Flannery|first4=BP|year=2007|title=Numerical Recipes: The Art of Scientific Computing|edition=3rd|publisher=Cambridge University Press|publication-place=New York|isbn=978-0-521-88068-8|chapter=Section 10.11. Linear Programming: Interior-Point Methods|chapter-url=http://apps.nrbook.com/empanel/index.html#pg=537}}
* {{cite book|last=Wright|first=Stephen|year=1997|title=Primal-Dual Interior-Point Methods|publisher=SIAM|location=Philadelphia, PA|isbn=0-89871-382-X}}
* {{cite book|last1=Boyd|first1=Stephen|last2=Vandenberghe|first2=Lieven|year=2004|title=Convex Optimization|publisher=Cambridge University Press|url=http://www.stanford.edu/~boyd/cvxbook/}}

{{Use dmy dates|date=February 2011}}

{{Optimization algorithms|convex}}

[[Category:Optimization algorithms and methods]]