'''Penalty methods''' are a class of [[algorithm]]s for solving [[Constraint (mathematics)|constrained]] [[optimization (mathematics)|optimization]] problems.
A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions ideally converge to the solution of the original constrained problem. The unconstrained problems are formed by adding a term, called a '''penalty function''', to the [[objective function]]. The penalty function consists of a ''penalty parameter'' multiplied by a measure of violation of the constraints; this measure is nonzero where the constraints are violated and zero where they are satisfied.

== Example ==
Let us say we are solving the following constrained problem:

:<math> \min f(\bold x) </math>

subject to

:<math> c_i(\bold x) \ge 0 ~\forall i \in I. </math>

This problem can be solved as a series of unconstrained minimization problems

:<math> \min \Phi_k (\bold x) = f (\bold x) + \sigma_k ~ \sum_{i\in I} ~ g(c_i(\bold x)) </math>

where

:<math> g(c_i(\bold x)) = \min(0,~c_i(\bold x))^2. </math>

In the above equations, <math>g(c_i(\bold x))</math> is the ''penalty function'' while <math>\sigma_k</math> are the ''penalty coefficients''. In each iteration ''k'' of the method, we increase the penalty coefficient <math>\sigma_k</math> (e.g. by a factor of 10), solve the unconstrained problem and use the solution as the initial guess for the next iteration. Solutions of the successive unconstrained problems will eventually converge to the solution of the original constrained problem.
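For illustration, the loop described above can be sketched in a few lines of Python; the concrete objective, constraint, update factor and inner solver below are illustrative choices only and are not part of the method itself.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize x0^2 + x1^2 subject to c(x) = x0 + x1 - 1 >= 0.
# The exact constrained minimizer is (0.5, 0.5).
def f(x):
    return x[0]**2 + x[1]**2

def c(x):
    return x[0] + x[1] - 1.0

def g(x):
    # g(c(x)) = min(0, c(x))^2 penalises only violated constraints.
    return min(0.0, c(x))**2

x = np.array([0.0, 0.0])   # initial guess (infeasible)
sigma = 1.0                # penalty coefficient
for k in range(10):
    # Solve the unconstrained subproblem min f(x) + sigma_k * g(c(x)),
    # warm-started from the previous solution.
    result = minimize(lambda z: f(z) + sigma * g(z), x)
    x = result.x
    sigma *= 10.0          # increase the penalty coefficient

print(x)  # tends towards [0.5, 0.5] as sigma grows
</syntaxhighlight>

In practice the penalty coefficient is increased gradually rather than set to a very large value at once, because very large coefficients make the unconstrained subproblems ill-conditioned for the inner solver.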
== Practical application ==

[[Image compression]] optimization algorithms can make use of penalty functions for selecting how best to compress zones of colour to single representative values.<ref>{{cite journal|last=Galar|first=M.|last2=Jurio|first2=A.|last3=Lopez-Molina|first3=C.|last4=Paternain|first4=D.|last5=Sanz|first5=J.|last6=Bustince|first6=H.|title=Aggregation functions to combine RGB color channels in stereo matching|journal=Optics Express|year=2013|volume=21|pages=1247–1257}}</ref><ref>{{cite web|title=Researchers restore image using version containing between 1 and 10 percent of information|url=http://phys.org/news/2013-10-image-version-percent.html|publisher=Phys.org (Omicron Technology Limited)|accessdate=26 October 2013}}</ref>
== Barrier methods ==

[[Barrier method (mathematics)|Barrier method]]s constitute an alternative class of algorithms for constrained optimization. These methods also add a penalty-like term to the objective function, but in this case the iterates are forced to remain in the interior of the feasible region: the barrier term is there to bias the iterates away from the boundary of the feasible region.
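A typical example is the logarithmic barrier: for the constraints <math>c_i(\bold x) \ge 0</math> above, the constrained problem is replaced by subproblems of the form

:<math> \min_{\bold x} f(\bold x) - \mu_k \sum_{i\in I} \log c_i(\bold x), </math>

where the barrier parameter <math>\mu_k</math> is decreased towards zero between iterations. Since <math>-\log c_i(\bold x)</math> grows without bound as <math>c_i(\bold x)</math> approaches zero, the iterates are kept strictly inside the feasible region.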
== See also ==

* [[Barrier function]]
* [[Interior point method]]
* [[Augmented Lagrangian method]]
== References ==

{{reflist}}

* Smith, Alice E.; Coit, David W. [http://140.138.143.31/teachers/ycliang/heuristic%20optimization%20912/penalty%20function.pdf Penalty functions], ''Handbook of Evolutionary Computation'', Section C 5.2. Oxford University Press and Institute of Physics Publishing, 1996.
* Courant, R. [http://www.ams.org/bull/1943-49-01/S0002-9904-1943-07818-4/S0002-9904-1943-07818-4.pdf Variational methods for the solution of problems of equilibrium and vibrations]. ''Bull. Amer. Math. Soc.'', 49, 1–23, 1943.

{{optimization algorithms}}

[[Category:Optimization algorithms and methods]]