In [[mathematical optimization]], '''constrained optimization''' (in some contexts called '''constraint optimization''') is the process of optimizing an objective function with respect to some [[variable (mathematics)|variables]] in the presence of [[Constraint (mathematics)|constraints]] on those variables. The objective function is either a [[Loss function|cost function]] or [[energy function]], which is to be [[minimize]]d, or a [[reward function]] or [[utility function]], which is to be [[maximize]]d. Constraints can be either ''hard constraints'', which set conditions on the variables that are required to be satisfied, or ''soft constraints'', which penalize certain variable values in the objective function to the extent that the conditions on the variables are not satisfied.
 
==General form==
 
A general constrained minimization problem may be written as follows:
 
: <math>
\begin{array}{rcll}
\min &~& f(\mathbf{x}) & \\
\mathrm{subject~to} &~& g_i(\mathbf{x}) = c_i &\text{for } i=1,\ldots,n \quad \text{Equality constraints} \\
&~& h_j(\mathbf{x}) \geq d_j &\text{for } j=1,\ldots,m \quad \text{Inequality constraints}
\end{array}
</math>
 
where <math> g_i(\mathbf{x}) = c_i ~\mathrm{for~} i=1,\ldots,n </math> and <math> h_j(\mathbf{x}) \ge d_j ~\mathrm{for~} j=1,\ldots,m  </math> are constraints that are required to be satisfied; these are called [[Constraint (mathematics)#Hard and soft constraints|hard constraints]].
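
As an illustration (with an objective and constraints chosen purely for the example, not taken from any source), a small instance of this general form can be solved numerically; the sketch below assumes SciPy's general-purpose <code>minimize</code> routine.

<syntaxhighlight lang="python">
# Minimal numerical sketch of the general form above; the objective and
# constraint data are illustrative assumptions, not canonical.
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # objective f(x) = (x0-1)^2 + (x1-2)^2
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

constraints = [
    # Equality constraint g(x) = c, written as g(x) - c = 0:  x0 + x1 = 2
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 2.0},
    # Inequality constraint h(x) >= d, written as h(x) - d >= 0:  x0 >= 0.5
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},
]

result = minimize(f, x0=np.zeros(2), constraints=constraints, method="SLSQP")
print(result.x, result.fun)                 # constrained minimizer and its value
</syntaxhighlight>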
 
In some problems, often called ''constraint optimization problems'', the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a [[Constraint (mathematics)#Hard and soft constraints|soft constraint]] (a constraint which is preferred but not required to be satisfied) is violated.
 
==Solution methods==
 
Many unconstrained optimization algorithms can be adapted to the constrained case, often via the use of a [[penalty method]]. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect.<ref>Wenyu Sun; Ya-Xiang Yuan. ''Optimization Theory and Methods: Nonlinear Programming''. Springer, p. 541.</ref>
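
The penalty-method idea can be sketched as follows (a rough illustration with made-up data, not a treatment of the Maratos effect itself): the constraint is folded into the objective as a penalty term whose weight is gradually increased.

<syntaxhighlight lang="python">
# Rough quadratic-penalty sketch: the constrained problem min f(x) s.t. g(x) = 0
# is replaced by min f(x) + mu * g(x)^2 for increasing mu, each solved with an
# unconstrained method. The data here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):                                    # equality constraint g(x) = 0
    return x[0] + x[1] - 2.0

x = np.zeros(2)
for mu in (1.0, 10.0, 100.0, 1000.0):
    penalized = lambda z, mu=mu: f(z) + mu * g(z) ** 2
    x = minimize(penalized, x, method="Nelder-Mead").x    # unconstrained step
print(x, g(x))                               # g(x) approaches 0 as mu grows
</syntaxhighlight>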
 
===Equality constraints===
 
If the constrained problem has only equality constraints, the method of [[Lagrange multipliers]] can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
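
For instance (a standard textbook-style example, included only as an illustration), minimizing <math>x^2+y^2</math> subject to the single equality constraint <math>x+y=1</math> gives the Lagrangian

:<math>\mathcal{L}(x,y,\lambda) = x^2 + y^2 - \lambda\,(x + y - 1),</math>

and setting its gradient to zero yields <math>2x=\lambda</math>, <math>2y=\lambda</math> and <math>x+y=1</math>, hence <math>x=y=\tfrac{1}{2}</math>: an unconstrained problem in three variables whose stationary point solves the original constrained problem.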
 
===Inequality constraints===
 
With inequality constraints, the problem can be characterized in terms of the [[Karush–Kuhn–Tucker conditions]], under which simple problems may be solvable.
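
Written against the general form above, the Karush–Kuhn–Tucker conditions for a local minimum <math>\mathbf{x}^*</math> take (under suitable regularity assumptions) the form

:<math>
\begin{align}
\nabla f(\mathbf{x}^*) &= \sum_{i=1}^n \lambda_i \nabla g_i(\mathbf{x}^*) + \sum_{j=1}^m \mu_j \nabla h_j(\mathbf{x}^*), \\
g_i(\mathbf{x}^*) &= c_i, \qquad h_j(\mathbf{x}^*) \ge d_j, \\
\mu_j &\ge 0, \qquad \mu_j\,\bigl(h_j(\mathbf{x}^*) - d_j\bigr) = 0,
\end{align}
</math>

i.e. stationarity, primal feasibility, dual feasibility and complementary slackness.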
 
===Linear programming===
 
If the objective function and all of the hard constraints are linear, then the problem is a [[linear programming]] problem. It can be solved by the [[simplex method]], which usually works in [[polynomial time]] in the problem size but is not guaranteed to, or by [[interior point method]]s, which are guaranteed to work in polynomial time.
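
As a small illustration (with made-up data), such a problem can be passed directly to an LP solver; the sketch below assumes SciPy's <code>linprog</code>.

<syntaxhighlight lang="python">
# Minimal linear-programming sketch with illustrative data.
# linprog minimizes c @ x subject to A_ub @ x <= b_ub and bounds on x.
from scipy.optimize import linprog

c = [-1.0, -2.0]                   # maximize x0 + 2*x1 by minimizing its negation
A_ub = [[1.0, 1.0], [1.0, -1.0]]   # x0 + x1 <= 4,  x0 - x1 <= 2
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)             # optimal point and the maximized objective
</syntaxhighlight>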
 
===Quadratic programming===
 
If all the hard constraints are linear but the objective function is quadratic, the problem is a [[quadratic programming]] problem. It can still be solved in polynomial time by the [[ellipsoid method]] if the objective function is [[Convex function|convex]]; otherwise the problem is [[NP-hard]].
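
A small convex instance looks as follows (illustrative data; solved here with a general-purpose nonlinear solver rather than the ellipsoid method).

<syntaxhighlight lang="python">
# Convex quadratic-programming sketch: minimize 0.5*x'Qx + c'x subject to a
# linear inequality constraint. Data are illustrative assumptions; SLSQP is a
# general nonlinear solver, used here only for demonstration.
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.0], [0.0, 2.0]])      # positive definite => convex objective
c = np.array([-2.0, -6.0])

def objective(x):
    return 0.5 * x @ Q @ x + c @ x

cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]   # x0 + x1 >= 1
res = minimize(objective, np.zeros(2), constraints=cons, method="SLSQP")
print(res.x, res.fun)
</syntaxhighlight>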
 
===Constraint optimization problems===
 
====Branch and bound====
 
Constraint optimization can be solved by [[branch and bound]] algorithms. These are backtracking algorithms that store the cost of the best solution found during execution and use it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, it backtracks instead of trying to extend this solution.
 
Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated. Indeed, whenever the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimated cost is more likely to be lower than the best solution cost found so far.
 
On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible.
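
A schematic sketch of this scheme is given below; the problem encoding, variable names and constraint tables are all illustrative assumptions, and the optimistic bound used for pruning is the "first-choice" bound described further below.

<syntaxhighlight lang="python">
# Schematic branch and bound maximizing a sum of soft constraints.
# The encoding, variable names and constraint tables are illustrative.
from itertools import product

domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
# Each soft constraint: (scope, function of the scope values in scope order).
soft = [
    (("x", "y"), lambda a, b: 3 if a != b else 0),
    (("y", "z"), lambda a, b: 2 if a == b else 1),
    (("z",),     lambda a: a),
]

def upper_bound(assignment):
    """Optimistic completion cost: maximize each soft constraint independently
    over its unassigned variables (the first-choice bound described below)."""
    total = 0
    for scope, f in soft:
        free = [v for v in scope if v not in assignment]
        total += max(f(*({**assignment, **dict(zip(free, vals))}[v] for v in scope))
                     for vals in product(*(domains[v] for v in free)))
    return total

best_cost, best_solution = float("-inf"), None

def search(assignment):
    global best_cost, best_solution
    if len(assignment) == len(domains):                  # complete assignment
        cost = sum(f(*(assignment[v] for v in scope)) for scope, f in soft)
        if cost > best_cost:
            best_cost, best_solution = cost, dict(assignment)
        return
    if upper_bound(assignment) <= best_cost:             # cannot improve: backtrack
        return
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        search({**assignment, var: value})

search({})
print(best_solution, best_cost)
</syntaxhighlight>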
 
A variation of this approach called Hansen's method uses [[Interval arithmetic#History|interval methods]].<ref>{{cite book |last=Leader|first=Jeffery J. | title=Numerical Analysis and Scientific Computation |year=2004|publisher=Addison Wesley |location= |isbn= 0-201-73499-0 }}</ref> It inherently implements rectangular constraints.
 
 
=====First-choice bounding functions=====
 
One way of evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value over any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. However, this bound may not be tight, because the maximal values of different soft constraints may derive from different evaluations: one soft constraint may be maximal for <math>x=a</math> while another constraint is maximal for <math>x=b</math>.
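
A tiny illustration of why this bound need not be tight (with made-up tables): two soft constraints over the same unassigned variable <math>x</math> achieve their maxima at different values.

<syntaxhighlight lang="python">
# Illustrative first-choice bound for one unassigned variable x in {a, b}.
# C1 is maximal at x = a, C2 at x = b, so the per-constraint maxima cannot
# be achieved simultaneously and the bound overestimates the true optimum.
C1 = {"a": 5, "b": 2}
C2 = {"a": 1, "b": 4}

first_choice_bound = max(C1.values()) + max(C2.values())        # 5 + 4 = 9
best_joint_value = max(C1[v] + C2[v] for v in ("a", "b"))       # max(6, 6) = 6
print(first_choice_bound, best_joint_value)                     # 9 6
</syntaxhighlight>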
 
=====Russian doll search=====
 
This method runs a branch-and-bound algorithm on <math>n</math> problems, where <math>n</math> is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables <math>x_1,\ldots,x_i</math> from the original problem, along with the constraints containing them. After the problem on variables <math>x_{i+1},\ldots,x_n</math> is solved, its optimal cost can be used as an upper bound while solving the other problems.
 
In particular, the cost estimate of a solution having <math>x_{i+1},\ldots,x_n</math> as unassigned variables is added to the cost that derives from the evaluated variables. In effect, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using some other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point.
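
The nesting can be sketched as follows (the representation, names and data are illustrative assumptions, not a canonical implementation); the bound combines the exact cost of fully assigned constraints, a first-choice estimate for constraints spanning assigned and unassigned variables, and the stored optimum of the suffix subproblem.

<syntaxhighlight lang="python">
# Schematic Russian doll search over the fixed variable order x0, x1, x2, x3.
# Encoding and data are illustrative assumptions.
from itertools import product

order = ["x0", "x1", "x2", "x3"]
domain = [0, 1]
# Soft constraints: (scope, function of the scope values in scope order).
soft = [
    (("x0", "x1"), lambda a, b: 3 if a != b else 0),
    (("x1", "x2"), lambda a, b: 2 if a == b else 1),
    (("x2", "x3"), lambda a, b: a + b),
]

def exact(assignment):
    """Cost of constraints whose scope is fully assigned."""
    return sum(f(*(assignment[v] for v in scope)) for scope, f in soft
               if all(v in assignment for v in scope))

def spanning_estimate(assignment, free):
    """First-choice estimate for constraints touching both assigned and free variables."""
    total = 0
    for scope, f in soft:
        if any(v in assignment for v in scope) and any(v in free for v in scope):
            missing = [v for v in scope if v not in assignment]
            total += max(f(*({**assignment, **dict(zip(missing, vals))}[v] for v in scope))
                         for vals in product(domain, repeat=len(missing)))
    return total

suffix_opt = {len(order): 0}                 # optimum of the empty subproblem

for start in range(len(order) - 1, -1, -1):  # solve nested subproblems, smallest first
    best = [float("-inf")]

    def search(i, assignment):
        if i == len(order):
            best[0] = max(best[0], exact(assignment))
            return
        bound = (exact(assignment)
                 + spanning_estimate(assignment, order[i:])
                 + suffix_opt.get(i, float("inf")))      # stored suffix optimum
        if bound <= best[0]:
            return                                        # prune
        for value in domain:
            search(i + 1, {**assignment, order[i]: value})

    search(start, {})
    suffix_opt[start] = best[0]              # reused as a bound by later, larger problems

print(suffix_opt[0])                         # optimum of the full problem
</syntaxhighlight>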
 
====Bucket elimination====
 
The [[bucket elimination]] algorithm can be adapted for constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming, for each combination of values of the other variables, the maximal value over the values of the removed variable. Formally, if <math>x</math> is the variable to be removed, <math>C_1,\ldots,C_n</math> are the soft constraints containing it, and <math>y_1,\ldots,y_m</math> are their other variables (excluding <math>x</math>), the new soft constraint is defined by:
 
:<math>C(y_1=a_1,\ldots,y_m=a_m) = \max_a \sum_i C_i(x=a,\,y_1=a_1,\ldots,y_m=a_m)</math>
 
Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints in which the variable is the highest in the ordering. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of its bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.
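
The elimination step defined by the formula above can be sketched as follows (variable names, domains and constraint tables are illustrative assumptions).

<syntaxhighlight lang="python">
# One bucket-elimination step: the variable x is maximized out of the soft
# constraints that mention it, leaving a new constraint over y1 and y2.
from itertools import product

x_domain = [0, 1]
y1_domain, y2_domain = [0, 1], [0, 1]

# Soft constraints containing x, each a function of (x, y1, y2).
C = [
    lambda x, y1, y2: 2 if x == y1 else 0,
    lambda x, y1, y2: 1 if x != y2 else 3,
]

# New soft constraint C(y1=a1, y2=a2) = max over x of the summed costs.
new_constraint = {
    (y1, y2): max(sum(c(x, y1, y2) for c in C) for x in x_domain)
    for y1, y2 in product(y1_domain, y2_domain)
}
print(new_constraint)
</syntaxhighlight>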
 
==See also==
* [[Integer programming]]
* [[Distributed constraint optimization]]
* [[Penalty method]]
 
==References==
{{reflist}}
 
*{{cite book
| first=Rina
| last=Dechter
| title=Constraint Processing
| publisher=Morgan Kaufmann
| url=http://www.ics.uci.edu/~dechter/books/index.html
| year=2003
| isbn=1-55860-890-7
}}
*HillStormer, a practical tool for constrained, nonlinear and multivariate optimization by Nelder–Mead: http://www.berkutec.com
 
[[Category:Mathematical optimization]]
[[Category:Constraint programming]]
