Nowadays, strong and efficient mathematical programming techniques, based on solid theoretical results and extensive numerical studies, are available for solving a great variety of nonlinear problems. Approximate functions, derivatives and optimal solutions can be employed together with optimization algorithms to reduce the computational time. The aim of this section is not to describe state-of-the-art algorithms in nonlinear programming, but to explain, in a simple way, a number of modern algorithms for solving nonlinear problems. These techniques are typically iterative in the sense that, given an initial point $x_0$, a sequence of points $\{x_k\}$ is obtained by repeated application of an algorithmic rule. The objective is to make this sequence converge to a point in a pre-specified solution set. Typically, the solution set is specified in terms of optimality conditions, such as those presented in 2.5 through 2.8.
We start by recalling a number of concepts in 2.9.1. Then, we discuss the principles of Newton-like algorithms for nonlinear systems in 2.9.2, and use these concepts for the solution of unconstrained optimization problems in 2.9.3. Finally, algorithms for solving general nonlinear problems are presented in 2.9.7, with emphasis on sequential unconstrained minimization (SUM) and sequential quadratic programming (SQP) techniques.
Two essential questions must be addressed concerning iterative algorithms. The first question, which is qualitative in nature, is whether a given algorithm in some sense yields, at least in the limit, a solution to the original problem; the second question, the more quantitative one, is concerned with how fast the algorithm converges to a solution. We elaborate on these concepts in this subsection.
The convergence of an algorithm is said to be asymptotic when the solution is not achieved after a finite number of iterations; except for particular problems such as linear and quadratic programming, this is generally the case in nonlinear programming. Hence, a very desirable property of an optimization algorithm is global convergence:
Definition 2.23: Global Convergence, Local Convergence
An algorithm is said to be globally convergent if, for any initial point $x_0$, it generates a sequence of points that converges to a point $\bar{x}$ in the solution set. It is said to be locally convergent if there exists a $\gamma > 0$ such that, for any initial point $x_0$ with $\|x_0 - \bar{x}\| < \gamma$, it generates a sequence of points converging to $\bar{x}$ in the solution set.
Most modern mathematical programming algorithms are globally convergent. Locally convergent algorithms are not useful in practice because the neighborhood of convergence is not known in advance and can be arbitrarily small.
Next, what distinguishes optimization algorithms with the global convergence property is the order of convergence:
Definition 2.24: Order of Convergence
The order of convergence of a sequence $\{x_k\}$ converging to $\bar{x}$ is the largest nonnegative integer $p$ such that
$$\lim_{k \to \infty} \frac{\|x_{k+1} - \bar{x}\|}{\|x_k - \bar{x}\|^p} = \beta < \infty.$$
When $p = 1$ and the convergence ratio satisfies $0 < \beta < 1$, the convergence is said to be linear; if $p = 1$ and $\beta = 0$, the convergence is said to be superlinear. When $p = 2$, the convergence is said to be quadratic.
Since they involve the limit as $k \to \infty$, $p$ and $\beta$ are a measure of the asymptotic rate of convergence, i.e., as the iterates get close to the solution; yet, a sequence with a good order of convergence may be very "slow" far from the solution. Clearly, the convergence is faster when $p$ is larger and $\beta$ is smaller. Near the solution, if the convergence rate is linear, then the error is multiplied by $\beta$ at each iteration. The error reduction is squared for quadratic convergence, i.e., each iteration roughly doubles the number of significant digits. The methods that will be studied hereafter have convergence rates varying between linear and quadratic.
Example 2.19. Consider a one-dimensional minimization problem having the unique optimal solution $\bar{x}$.
Let the (point-to-point) algorithmic map $M_1$ be such that $|M_1(x) - \bar{x}| = \beta\,|x - \bar{x}|$ for some constant $0 < \beta < 1$. It is easily verified that the sequence obtained by applying the map $M_1$, with any starting point, converges to the optimal solution $\bar{x}$, i.e., $M_1$ is globally convergent. We have $\lim_{k \to \infty} |x_{k+1} - \bar{x}| / |x_k - \bar{x}| = \beta$, so that the limit in Definition 2.24 is finite with $p = 1$; moreover, for $p = 2$, this limit is infinity. Consequently, $x_k \to \bar{x}$ linearly with convergence ratio $\beta$.
On the other hand, consider a (point-to-point) algorithmic map $M_2$ whose iterates again converge to $\bar{x}$ from any starting point, but for which the ratio $|x_{k+1} - \bar{x}| / |x_k - \bar{x}|$ approaches $0$ as $k \to \infty$. Hence, $x_k \to \bar{x}$ superlinearly in this case. The algorithmic maps $M_1$ and $M_2$ are illustrated on the left and right plots in Fig. 1.14., respectively.
It should also be noted that for most algorithms, the user must set initial values for certain parameters, such as the starting point and the initial step size, as well as parameters for terminating the algorithm. Optimization procedures are often quite sensitive to these parameters, and may produce different results, or even stop prematurely, depending on their values. Therefore, it is crucial for the user to understand the principles of the algorithms used, so that he or she can select adequate values for the parameters and diagnose the reasons for a premature termination (failure).
The fundamental approach to most iterative schemes was suggested over 300 years ago by Newton. In fact, Newton’s method is the basis for nearly all the algorithms that are described herein.
Suppose one wants to find the value of the variable $x \in \mathbb{R}^n$ such that
$$f(x) = 0,$$
where $f : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable. Let us assume that one such solution exists, and denote it by $\bar{x}$. Let also $x_k$ be a guess for the solution. The basic idea of Newton's method is to approximate the nonlinear function $f$ by the first two terms in its Taylor series expansion about the current point $x_k$. This yields a linear approximation for the vector function $f$ at the new point $x$,
$$f(x) \approx f(x_k) + J(x_k)\,(x - x_k),$$
where $J(x_k)$ denotes the Jacobian matrix of $f$ at $x_k$.
Using this linear approximation, and provided that the Jacobian matrix $J(x_k)$ is nonsingular, a new estimate for the solution can be computed by solving the previous equation for $x$,
$$x_{k+1} = x_k - J(x_k)^{-1} f(x_k).$$
Letting $d_k := -J(x_k)^{-1} f(x_k)$, we get the update $x_{k+1} = x_k + d_k$.
Of course, $f$ being a nonlinear function, one cannot expect that $f(x_{k+1}) = 0$, but there is much hope that $x_{k+1}$ be a better estimate for the root $\bar{x}$ than the original guess $x_k$. In other words, we might expect that
$$\|x_{k+1} - \bar{x}\| \le \|x_k - \bar{x}\| \quad \text{and} \quad \|f(x_{k+1})\| \le \|f(x_k)\|.$$
If the new point is an improvement, then it makes sense to repeat the process, thereby defining a sequence of points $\{x_k\}$. An algorithm implementing Newton's method is as follows:
Algorithm 2.1
Initialization Step:
Let $\varepsilon > 0$ be a termination scalar, and choose an initial point $x_0$.
Let $k = 0$ and go to the main step.
Main Step:
1. If $\|f(x_k)\| < \varepsilon$, stop; $x_k$ is an (approximate) solution.
2. Compute the Newton step $d_k = -J(x_k)^{-1} f(x_k)$, set $x_{k+1} = x_k + d_k$, replace $k \leftarrow k + 1$, and go to step 1.
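For concreteness, the following Python sketch implements Algorithm 2.1 (a minimal illustration; the function names, the test problem and the tolerances are our own assumptions, not part of the algorithm statement):

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by Newton's method (Algorithm 2.1).

    f   : callable returning the residual vector f(x)
    jac : callable returning the Jacobian matrix J(x)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:        # termination test
            return x, k
        d = np.linalg.solve(jac(x), -fx)    # solve J(x_k) d = -f(x_k)
        x = x + d                           # full Newton step
    raise RuntimeError("Newton's method did not converge")

# Illustrative use: x1^2 + x2^2 = 2 and x1 = x2, with solution (1, 1)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
x_sol, n_iter = newton_system(f, jac, [2.0, 0.5])
```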
It can be shown that the rate of convergence for Newton's method is quadratic (see Definition 2.24). Loosely speaking, this implies that each successive estimate of the solution doubles the number of significant digits, which is a very desirable property for an algorithm to possess.
Theorem 2.21: Quadratic convergence for Newton’s algorithm
Let $f : \mathbb{R}^n \to \mathbb{R}^n$ be continuously differentiable, and consider Newton's algorithm defined by the map $M(x) := x - J(x)^{-1} f(x)$. Let $\bar{x}$ be such that $f(\bar{x}) = 0$, and suppose that $J(\bar{x})$ is nonsingular. Let the starting point $x_0$ be sufficiently close to $\bar{x}$, so that there exist $\alpha_1, \alpha_2 > 0$ with $\alpha_1 \alpha_2 \|x_0 - \bar{x}\| < 1$, and
$$\|J(x)^{-1}\| \le \alpha_1 \quad \text{and} \quad \|f(\bar{x}) - f(x) - J(x)\,(\bar{x} - x)\| \le \alpha_2\, \|\bar{x} - x\|^2,$$
for each $x$ satisfying $\|\bar{x} - x\| \le \|\bar{x} - x_0\|$. Then, Newton's algorithm converges with a quadratic rate of convergence.
Proof. See [6, Theorem 8.6.5] for a proof. □
But can anything go wrong with Newton’s method?
Lack of Global Convergence First and foremost, if the initial guess is not sufficiently close to the solution, i.e., within the region of convergence, Newton's method may diverge. Said differently, Newton's method as presented above does not have the global convergence property (see Definition 2.23 and Example 2.20 hereafter). This is because the Newton step $d_k$ may not be a valid descent direction far from the solution and, even if it is, a unit step size might not give a decrease in $\|f\|$. Globalization strategies, which aim at correcting this latter deficiency, will be presented in 2.9.4 in the context of unconstrained optimization.
Singular Jacobian Matrix A second difficulty occurs when the Jacobian matrix $J(x_k)$ becomes singular during the iteration process, since the correction defined by Newton's algorithm is not defined in this case. Note that the assumption $\|J(x)^{-1}\| \le \alpha_1$ in Theorem 2.21 guarantees that the Jacobian matrix cannot be singular near the solution. But when the Jacobian matrix is singular at the solution point $\bar{x}$ itself, Newton's method loses its quadratic convergence property.
Computational Efficiency Finally, at each iteration, Newton's method requires (i) that the Jacobian matrix $J(x_k)$ be computed, which may be difficult and/or costly especially when the expression of $f$ is complicated, and (ii) that a linear system be solved. The analytic Jacobian can be replaced by a finite-difference approximation, yet this is costly as $n$ additional evaluations of $f$ are required at each iteration. With the objective of reducing the computational effort, quasi-Newton methods generate an approximation of the Jacobian matrix, based on the information gathered from the iteration progress. To avoid solving a linear system for the search direction, variants of quasi-Newton methods also exist that generate an approximation of the inverse of the Jacobian matrix. Such methods will be described in 2.9.5 in the context of unconstrained optimization.
Example 2.20. Consider the problem to find a solution to the nonlinear equation
$$f(x) = \arctan(x) = 0,$$
whose unique solution is $\bar{x} = 0$; the Newton iteration reads $x_{k+1} = x_k - (1 + x_k^2) \arctan(x_k)$.
Starting from an initial guess close to the origin, the iterates converge very rapidly to the solution $\bar{x} = 0$, as could be expected from Theorem 2.21. The first three iterations are represented in Fig. 1.15., on the left plot.
However, convergence is not guaranteed for any initial guess. More precisely, it can be shown that Newton's method actually diverges when the initial guess is chosen such that $|x_0| > \bar{x}_c$, with $\bar{x}_c \approx 1.3917$ being the positive solution of $2x = (1 + x^2) \arctan(x)$; further, the method cycles indefinitely between $x_0$ and $-x_0$ for $|x_0| = \bar{x}_c$. Both these situations are illustrated in the right plot and the bottom plot of Fig. 1.15., respectively.
We now turn to a description of basic techniques used for iteratively solving unconstrained problems of the form:
$$\min_{x \in \mathbb{R}^n}\ f(x).$$
Many unconstrained optimization algorithms work along the same lines. Starting from an initial point, a direction of movement is determined according to a fixed rule, and then a move is made in that direction so that the objective function value is reduced; at the new point, a new direction is determined and the process is repeated. The main difference between these algorithms rests with the rule by which successive directions of movement are selected. A distinction is usually made between those algorithms which determine the search direction without using gradient information (gradient-free methods), and those using gradient (and higher-order derivative) information (gradient-based methods). Here, we shall focus our attention on the latter class of methods, and more specifically on Newton-like algorithms.
Throughout this subsection, we shall assume that the objective function $f$ is twice continuously differentiable. By Theorem 2.3, a necessary condition for $\bar{x}$ to be a local minimum of $f$ is $\nabla f(\bar{x}) = 0$. Hence, the idea is to devise an iterative scheme that finds a point satisfying the foregoing condition. Following the techniques discussed earlier in 2.9.2, this can be done by using a Newton-like algorithm, with $f$ there corresponding to the gradient $\nabla f$ here, and the Jacobian $J$ to the Hessian matrix $\nabla^2 f$.
At each iteration, a new iterate $x_{k+1}$ is obtained such that the linear approximation to the gradient at that point is zero,
$$\nabla f(x_{k+1}) \approx \nabla f(x_k) + \nabla^2 f(x_k)\,(x_{k+1} - x_k) = 0.$$
The linear approximation yields the Newton search direction
$$d_k := x_{k+1} - x_k = -\left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k).$$
As discussed in 2.9.2, if it converges, Newton's method exhibits a quadratic rate of convergence when the Hessian matrix is nonsingular at the solution point. However, since the Newton iteration is based on finding a zero of the gradient vector, there is no guarantee that the step will move towards a local minimum, rather than a saddle point or even a maximum. To preclude this, the Newton steps should be taken downhill, i.e., the following descent condition should be satisfied at each iteration,
$$\nabla f(x_k)^\top d_k < 0.$$
Interestingly enough, with the Newton direction, the descent condition becomes
$$-\nabla f(x_k)^\top \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k) < 0.$$
That is, a sufficient condition to obtain a descent direction at $x_k$ is that the Hessian matrix $\nabla^2 f(x_k)$ be positive definite. Moreover, if $\nabla^2 f(\bar{x})$ is positive definite at a local minimizer $\bar{x}$ of $f$, then the Newton iteration converges to $\bar{x}$ when started sufficiently close to $\bar{x}$. (Recall that, by Theorem 2.8, positive definiteness of $\nabla^2 f(\bar{x})$ at a stationary point $\bar{x}$ is a sufficient condition for $\bar{x}$ to be a strict local minimum of $f$.)
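In practice, positive definiteness of the Hessian can be checked cheaply via a Cholesky factorization while computing the Newton direction. The sketch below illustrates one common safeguard, reverting to the steepest-descent direction when the factorization fails (this fallback is our own illustrative choice; modified-Hessian strategies are another standard option):

```python
import numpy as np

def newton_direction(grad, hess):
    """Return a descent direction: the Newton direction -H^{-1} g when H is
    positive definite, and the steepest-descent direction -g otherwise."""
    try:
        L = np.linalg.cholesky(hess)       # succeeds iff hess is positive definite
        y = np.linalg.solve(L, -grad)      # forward substitution
        d = np.linalg.solve(L.T, y)        # back substitution: H d = -g
    except np.linalg.LinAlgError:
        d = -grad                          # H not positive definite: fall back
    return d
```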
We now discuss two important improvements to Newton’s method, which are directly related to the issues discussed in Subsection 2.9.2, namely (i) the lack of global convergence, and (ii) computational efficiency.
Up to this point, the development has focused on the application of Newton's method. However, even in the simplest one-dimensional applications, Newton's method has deficiencies (see, e.g., Example 2.20). Methods for correcting global convergence deficiencies are referred to as globalization strategies. It should be stressed that an efficient globalization strategy should only alter the iterates when a problem is encountered, but it should not impede the ultimate behavior of the method, i.e., the quadratic convergence of Newton's method should be retained.
In unconstrained optimization, one can detect problems in a very simple fashion, by monitoring whether the next iterate $x_{k+1}$ satisfies a descent condition with respect to the current iterate $x_k$, e.g., $f(x_{k+1}) < f(x_k)$. Then, either one of two globalization strategies can be used to correct the Newton step. The first strategy, known as the line search method, is to alter the magnitude of the step; the second one, known as the trust region method, is to modify both the step magnitude and direction. We shall only concentrate on the former class of globalization strategies subsequently.
A line search method proceeds by replacing the full Newton step $x_{k+1} = x_k + d_k$ with
$$x_{k+1} = x_k + \alpha_k\, d_k,$$
where the step-length $\alpha_k > 0$ is chosen such that the objective function is reduced,
$$f(x_k + \alpha_k\, d_k) < f(x_k).$$
Clearly, the optimal choice for $\alpha_k$ would be
$$\alpha_k \in \arg\min_{\alpha > 0}\ f(x_k + \alpha\, d_k).$$
The resulting minimization problem can be solved by any one-dimensional exact minimization technique (e.g., Newton's method itself). However, such techniques are costly in the sense that they often require many iterations to converge and, therefore, many function (or even gradient) evaluations.
In response to this, most modern algorithms implement so-called inexact line search criteria, which aim to find a step-length giving an "acceptable" decrease in the objective function. Note that, by sacrificing accuracy, we might impair the convergence of the overall algorithm that iteratively employs such a line search. However, by adopting a line search that guarantees a sufficient degree of descent in the objective function, the convergence of the overall algorithm can still be established.
We now describe one popular definition of an acceptable step-length known as Armijo's rule; other popular approaches are the quadratic and cubic fit techniques, as well as Wolfe's and Goldstein's tests. Armijo's rule is driven by two parameters $0 < \kappa < 1$ and $\eta > 1$, which respectively manage the acceptable step-length from being too large or too small (typical values are $\kappa = 0.2$ and $\eta = 2$). Define the line search function $\varphi(\alpha) := f(x_k + \alpha\, d_k)$, for $\alpha \ge 0$, and consider the modified first-order approximation $\hat{\varphi}(\alpha) := \varphi(0) + \kappa\, \alpha\, \varphi'(0)$. A step-length $\bar{\alpha} > 0$ is deemed acceptable if the following conditions hold:
$$\varphi(\bar{\alpha}) \le \hat{\varphi}(\bar{\alpha}) \quad \text{and} \quad \varphi(\eta\, \bar{\alpha}) > \hat{\varphi}(\eta\, \bar{\alpha}).$$
The first condition prevents the step-length $\bar{\alpha}$ from being too large, whereas the second condition prevents $\bar{\alpha}$ from being too small. The acceptable region defined by Armijo's rule is shown in Fig. 1.16. below.
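A common practical variant enforces the first (sufficient-decrease) condition by backtracking: start from the full step $\alpha = 1$ and shrink by the factor $1/\eta$ until the condition holds, which automatically keeps the accepted step from being unnecessarily small. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def armijo_backtracking(f, grad_fk, xk, dk, kappa=0.2, eta=2.0):
    """Find a step-length satisfying Armijo's sufficient-decrease condition
    f(xk + a*dk) <= f(xk) + kappa * a * grad_fk' dk, by backtracking from a = 1."""
    phi0 = f(xk)
    slope = float(grad_fk @ dk)        # must be < 0 for a descent direction
    alpha = 1.0
    while f(xk + alpha * dk) > phi0 + kappa * alpha * slope:
        alpha /= eta                   # shrink the step
        if alpha < 1e-14:
            raise RuntimeError("line search failed: dk may not be a descent direction")
    return alpha
```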
Another limitation of Newton's method when applied to unconstrained optimization problems is that the Hessian matrix of the objective function is needed at each iteration, and a linear system must then be solved for obtaining the search direction. For many applications, this can be a costly computational burden. In response to this, quasi-Newton methods attempt to construct this information recursively. However, by so doing, the quadratic rate of convergence is lost.
The basic idea for many quasi-Newton methods is that two successive iterates $x_k$, $x_{k+1}$, together with the corresponding gradients $\nabla f(x_k)$, $\nabla f(x_{k+1})$, yield curvature information by means of the first-order approximation relation
$$\gamma_k \approx \nabla^2 f(x_k)\, \delta_k,$$
with $\delta_k := x_{k+1} - x_k$ and $\gamma_k := \nabla f(x_{k+1}) - \nabla f(x_k)$. In particular, given $n$ linearly independent iteration increments $\delta_0, \ldots, \delta_{n-1}$, an approximation of the Hessian matrix can be obtained as
$$\nabla^2 f(x_n) \approx \left[\gamma_0 \cdots \gamma_{n-1}\right] \left[\delta_0 \cdots \delta_{n-1}\right]^{-1},$$
or for the inverse Hessian matrix as
$$\left[\nabla^2 f(x_n)\right]^{-1} \approx \left[\delta_0 \cdots \delta_{n-1}\right] \left[\gamma_0 \cdots \gamma_{n-1}\right]^{-1}.$$
Note that when the objective function is quadratic, the previous relations are exact. Many interesting quasi-Newton methods use similar, although more sophisticated, schemes to construct an approximate matrix $B_k$ that progressively approaches the inverse Hessian. One of the most popular classes of quasi-Newton methods (known as the Broyden family) proceeds as follows:
$$B_{k+1} = B_k + \frac{\delta_k \delta_k^\top}{\delta_k^\top \gamma_k} - \frac{B_k \gamma_k \gamma_k^\top B_k}{\gamma_k^\top B_k \gamma_k} + \xi\, \left(\gamma_k^\top B_k \gamma_k\right) w_k w_k^\top,$$
where $w_k := \frac{\delta_k}{\delta_k^\top \gamma_k} - \frac{B_k \gamma_k}{\gamma_k^\top B_k \gamma_k}$ and $0 \le \xi \le 1$. It is easily seen that, when supplemented with a suitable line search strategy, $\delta_k^\top \gamma_k > 0$ at each $k$, and hence the Hessian matrix approximations are guaranteed to exist. Moreover, it can be shown that the successive approximates $B_k$ remain positive definite provided that $B_0$ is itself positive definite.
By setting $\xi = 0$, the previous equation yields the Davidon-Fletcher-Powell (DFP) method, which is historically the first quasi-Newton method, while setting $\xi = 1$ gives the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, for which there is substantial evidence that it is the best general-purpose quasi-Newton method currently known.
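For illustration, the BFGS member ($\xi = 1$) of the family admits the following compact update of the inverse-Hessian approximation (a standard algebraic rearrangement; the skip rule for non-positive curvature is our own safeguard):

```python
import numpy as np

def bfgs_inverse_update(B, delta, gamma):
    """BFGS update of an inverse-Hessian approximation B (xi = 1 in the
    Broyden family), with delta = x_{k+1} - x_k and gamma = gradient difference."""
    dg = float(delta @ gamma)
    if dg <= 0.0:
        return B                  # skip update to preserve positive definiteness
    I = np.eye(delta.size)
    V = I - np.outer(delta, gamma) / dg
    return V @ B @ V.T + np.outer(delta, delta) / dg
```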
A Newton-like algorithm including both a line search method (Armijo’s rule) and Hessian recursive update (DFP update) is as follows:
Algorithm 2.2
Initialization Step:
Let $\varepsilon > 0$ be a termination scalar, and choose an initial point $x_0$ and a symmetric, positive definite matrix $B_0$. Let $k = 0$, and go to the main step.
Main Step:
1. If $\|\nabla f(x_k)\| < \varepsilon$, stop.
2. Compute the search direction $d_k = -B_k \nabla f(x_k)$.
3. Determine a step-length $\alpha_k$ satisfying Armijo's rule, and set $x_{k+1} = x_k + \alpha_k\, d_k$.
4. Update the inverse-Hessian approximation according to the DFP formula
$$B_{k+1} = B_k + \frac{\delta_k \delta_k^\top}{\delta_k^\top \gamma_k} - \frac{B_k \gamma_k \gamma_k^\top B_k}{\gamma_k^\top B_k \gamma_k},$$
with $\delta_k = x_{k+1} - x_k$ and $\gamma_k = \nabla f(x_{k+1}) - \nabla f(x_k)$. Replace $k \leftarrow k + 1$, and go to step 1.
The standard unconstrained optimization algorithm in MATLAB (i.e., the function fminunc) is an implementation of the quasi-Newton method, with a DFP or BFGS update and a line search strategy.
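Putting the previous sketches together gives a compact quasi-Newton driver in the spirit of Algorithm 2.2 (here with the BFGS rather than the DFP update; all names and tolerances are illustrative):

```python
import numpy as np

def quasi_newton(f, grad, x0, tol=1e-8, max_iter=500):
    """Quasi-Newton method: BFGS inverse update plus Armijo backtracking,
    reusing armijo_backtracking and bfgs_inverse_update defined above."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                    # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -B @ g                        # quasi-Newton search direction
        alpha = armijo_backtracking(f, g, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        B = bfgs_inverse_update(B, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x
```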
Example 2.21. Consider the problem to find a minimum of Rosenbrock's function
$$f(x) = (1 - x_1)^2 + c\,(x_2 - x_1^2)^2,$$
for $x \in \mathbb{R}^2$, with $c = 100$; the unique minimizer is $\bar{x} = (1, 1)$. We solved this problem using the function fminunc.
The results are shown in Fig. 1.17. Observe the slow convergence of the iterates far from the optimal solution, but the very fast convergence in the vicinity of $\bar{x}$.
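For readers without MATLAB, the quasi_newton sketch above reproduces the experiment (the starting point $(-1.2, 1)$ is the classical choice for this test function, though the original example does not state one):

```python
import numpy as np

rosenbrock = lambda x: (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2.0 * (1 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
    200.0 * (x[1] - x[0]**2),
])

x_opt = quasi_newton(rosenbrock, rosen_grad, x0=[-1.2, 1.0])  # close to (1, 1)
```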
In this subsection, we turn our attention to algorithms for iteratively solving constrained problems of the form:
$$\min_x\ f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ h(x) = 0,\ \ x \in X \subseteq \mathbb{R}^n.$$
Many modern deterministic algorithms for constrained NLP problems are based on the (rather natural) principle that, instead of solving a difficult problem directly, one had better solve a sequence of simpler, but related, subproblems, which converges to a solution of the original problem either in a finite number of steps or in the limit. Working along these lines, two classes of algorithms can be distinguished for solution of NLP problems with equality and/or inequality constraints. On the one hand, penalty function and interior-point methods consist of solving the problem as a sequence of unconstrained problems (or problems with simple constraints), so that algorithms for unconstrained optimization can be used. These methods, which do not rely on the KKT theory described earlier in 2.6 through 2.8, shall be briefly presented in 2.9.8 and 2.9.9. On the other hand, Newton-like methods solve NLP problems by attempting to find a point satisfying the necessary conditions of optimality (KKT conditions in general). Sequential quadratic programming (SQP), which shall be presented in 2.9.10, represents one such class of methods.
Methods using penalty functions transform a constrained problem into a single unconstrained problem or a sequence of unconstrained problems. This is done by placing the constraints into the objective function via a penalty parameter in a way that penalizes any violation of the constraints. To illustrate it, consider the NLP problem:
$$\min_x\ f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ h(x) = 0,$$
where $f$, $g = (g_1, \ldots, g_{n_g})$, and $h = (h_1, \ldots, h_{n_h})$ are defined on $\mathbb{R}^n$.
In general, a suitable penalty function for the minimization problem is defined by
$$p(x) := \sum_{i=1}^{n_g} \phi\big(g_i(x)\big) + \sum_{j=1}^{n_h} \psi\big(h_j(x)\big),$$
where $\phi$ and $\psi$ are continuous functions satisfying the conditions:
$$\phi(z) = 0 \ \text{if } z \le 0, \quad \phi(z) > 0 \ \text{otherwise}; \qquad \psi(z) = 0 \ \text{if } z = 0, \quad \psi(z) > 0 \ \text{otherwise}.$$
Typically, $\phi$ and $\psi$ are of the forms
$$\phi(z) = \left(\max\{0, z\}\right)^q, \qquad \psi(z) = |z|^q,$$
with $q$ a positive integer (taking $q \ge 2$ provides continuously differentiable penalty functions). The function $f(x) + \mu\, p(x)$, with $\mu > 0$ the penalty parameter, is referred to as the auxiliary function.
Example 2.22. Consider the problem to minimize a convex objective function $f$ subject to a single linear inequality constraint $g(x) \le 0$, and denote the optimal solution by $\bar{x}$, with objective value $f(\bar{x})$.
Now, consider the penalty problem to minimize the auxiliary function $f(x) + \mu \max\{0, g(x)\}^2$, where $\mu$ is a large number. Note first that, for any $\mu > 0$, the auxiliary function is convex since it is a sum of convex functions. Thus, a necessary and sufficient condition for optimality is that the gradient of the auxiliary function be equal to zero; solving this equation yields the minimizer $x_\mu$, and one finds that $x_\mu \to \bar{x}$ as $\mu \to \infty$. Thus, the solution of the penalty problem can be made arbitrarily close to the solution of the original problem by choosing $\mu$ sufficiently large. Moreover, the optimal auxiliary function value $f(x_\mu) + \mu \max\{0, g(x_\mu)\}^2$ can also be made arbitrarily close to $f(\bar{x})$ by taking $\mu$ sufficiently large. These considerations are illustrated in Fig. 1.18. below.
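As a concrete illustration of these calculations (the data below are our own choice, not the example's original data), take $f(x) = x$ and the linear constraint $2 - x \le 0$, so that $\bar{x} = 2$ and $f(\bar{x}) = 2$. The auxiliary function is $x + \mu \max\{0, 2 - x\}^2$, and setting its derivative to zero for $x < 2$ gives
$$1 - 2\mu\,(2 - x) = 0 \quad \Longrightarrow \quad x_\mu = 2 - \frac{1}{2\mu} \ \xrightarrow[\mu \to \infty]{}\ 2,$$
with optimal auxiliary value $2 - \frac{1}{4\mu} \to 2 = f(\bar{x})$, in agreement with the discussion above.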
The conclusions of Example 2.22, namely that the solution of the penalty problem can be made arbitrarily close to the solution of the original problem, and the optimal auxiliary function value arbitrarily close to the optimal objective value, by choosing $\mu$ sufficiently large, are formalized in the following:
Theorem 2.22
Consider a general NLP problem
$$\min_x\ f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ h(x) = 0,\ \ x \in X,$$
where $f$, $g$ and $h$ are continuous functions on $\mathbb{R}^n$ and $X$ is a nonempty set in $\mathbb{R}^n$. Suppose that the NLP problem has a feasible solution, and let $p$ be a continuous penalty function. Suppose further that for each $\mu > 0$ there exists a solution $x_\mu$ to the problem $\min\{f(x) + \mu\, p(x) : x \in X\}$, and that $\{x_\mu\}$ is contained in a compact subset of $X$. Then,
$$\min\{f(x) : g(x) \le 0,\ h(x) = 0,\ x \in X\} = \sup_{\mu \ge 0} \theta(\mu) = \lim_{\mu \to \infty} \theta(\mu),$$
with $\theta(\mu) := f(x_\mu) + \mu\, p(x_\mu)$. Furthermore, the limit $\bar{x}$ of any convergent subsequence of $\{x_\mu\}$ is an optimal solution to the original problem, and $\mu\, p(x_\mu) \to 0$ as $\mu \to \infty$.
Proof. See [6, Theorem 9.2.2] for a proof. □
Note that the assumption that $\{x_\mu\}$ is contained in a compact set is necessary, for it is possible that the optimal objective values of the original and penalty problems are not equal otherwise. Yet, this assumption is not very restrictive in most practical cases, as the variables usually lie between finite upper and lower bounds. Note also that no restriction is imposed on $f$, $g$ and $h$ other than continuity. However, the application of an efficient solution procedure for the (unconstrained) auxiliary problems may impose additional restrictions on these functions (see 2.9.3).
Under the conditions that (i) $f$, $g$ and $h$ are continuously differentiable, and (ii) $\bar{x}$ is a regular point, the solution $x_\mu$ to the penalty problem can be used to recover the Lagrange multipliers associated with the constraints at optimality. In the particular case where $q = 2$, we get
$$\nu_i^\mu = 2\mu \max\{0,\, g_i(x_\mu)\}, \qquad \lambda_j^\mu = 2\mu\, h_j(x_\mu).$$
The larger $\mu$, the better the approximation of the Lagrange multipliers: $\nu^\mu \to \bar{\nu}$ and $\lambda^\mu \to \bar{\lambda}$ as $\mu \to \infty$.
Example 2.23. Consider the same problem as in Example 2.22. The auxiliary function being continuously differentiable, the Lagrange multiplier associated with the inequality constraint can be recovered as $\nu^\mu = 2\mu \max\{0, g(x_\mu)\}$. Note that the exact value of the Lagrange multiplier is obtained for each $\mu$ here, because $g$ is a linear constraint. (On the illustrative instance given after Example 2.22, $\nu^\mu = 2\mu \cdot \frac{1}{2\mu} = 1$ for every $\mu$.)
From a computational viewpoint, superlinear convergence rates might be achievable, in principle, by applying Newton's method (or its variants such as quasi-Newton methods). Yet, one can expect ill-conditioning problems when $\mu$ is taken very large in the penalty problem. With a large $\mu$, more emphasis is placed on feasibility, and most procedures for unconstrained optimization will move quickly towards a feasible point. Even though this point may be far from the optimum, both slow convergence and premature termination can occur due to very small step sizes and finite-precision computations (round-off errors).
As a result of the above-mentioned difficulties associated with large penalty parameters, most algorithms using penalty functions employ a sequence of increasing penalty parameters. With each new value of the penalty parameter, an optimization technique is employed, starting with the optimal solution corresponding to the previously chosen parameter value. Such an approach is often referred to as the sequential unconstrained minimization (SUM) technique. This way, a sequence of infeasible points is typically generated, whose limit is an optimal solution to the original problem (hence the term exterior penalty function approach).
To conclude our discussion of the penalty function approach, we give an algorithm to solve a general NLP problem, where the penalty function used is of the form given above, with $\phi(z) = \max\{0, z\}^2$ and $\psi(z) = z^2$.
Algorithm 2.3
Initialization Step:
Let $\varepsilon > 0$ be a termination scalar, and choose an initial point $x_0$, a penalty parameter $\mu_0 > 0$, and a scalar $\beta > 1$. Let $k = 0$ and go to the main step.
Main Step:
1. Starting from $x_k$, find a solution $x_{k+1} \in \arg\min\{f(x) + \mu_k\, p(x) : x \in X\}$.
2. If $\mu_k\, p(x_{k+1}) < \varepsilon$, stop. Otherwise, let $\mu_{k+1} = \beta\, \mu_k$, replace $k \leftarrow k + 1$, and go to step 1.
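A compact realization of this SUM scheme might look as follows (a sketch only: the inner solver, here scipy's BFGS implementation, the update factor and the tolerances are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize   # inner unconstrained solver (assumed available)

def sum_penalty(f, g_list, h_list, x0, mu=1.0, beta=10.0, eps=1e-8):
    """Sequential unconstrained minimization with a quadratic exterior penalty."""
    def p(x):
        return (sum(max(0.0, g(x))**2 for g in g_list)
                + sum(h(x)**2 for h in h_list))
    x = np.asarray(x0, dtype=float)
    while True:
        aux = lambda z: f(z) + mu * p(z)          # auxiliary function
        x = minimize(aux, x, method="BFGS").x     # warm start at previous solution
        if mu * p(x) < eps:
            return x
        mu *= beta                                # increase the penalty parameter

# Illustrative use: minimize x1 + x2  s.t.  x1^2 + x2^2 = 2 (solution (-1, -1))
x = sum_penalty(lambda z: z[0] + z[1], [],
                [lambda z: z[0]**2 + z[1]**2 - 2.0], x0=[0.0, 0.0])
```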
Similar to penalty functions, barrier functions can also be used to transform a constrained problem into an unconstrained problem (or into a sequence of unconstrained problems). These functions act as a barrier and prevent the iterates from leaving the feasible region. If the optimal solution occurs at the boundary of the feasible domain, the procedure moves from the interior to the boundary of the domain, hence the name interior-point methods. To illustrate these methods, consider the NLP problem:
$$\min_x\ f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ x \in X,$$
where $X$ is a subset of $\mathbb{R}^n$, and $f$, $g = (g_1, \ldots, g_{n_g})$ are continuous on $\mathbb{R}^n$. Note that equality constraints, if any, should be accommodated within the set $X$. (In the case of affine equality constraints, one can possibly eliminate them after solving for some variables in terms of the others, thereby reducing the dimension of the problem.) The reason why this treatment is necessary is that barrier function methods require the interior of the inequality-constrained region, $\{x : g(x) < 0\}$, to be nonempty; this would obviously not be possible if the equality constraints $h(x) = 0$ were accommodated within the set of inequalities as $h(x) \le 0$ and $-h(x) \le 0$.
A barrier problem formulates as
$$\min_x\ f(x) + \mu\, b(x) \quad \text{s.t.} \quad g(x) < 0,\ \ x \in X,$$
where $\mu > 0$ is the barrier parameter. Ideally, the barrier function $b$ should take value zero on the region $\{x : g(x) < 0\}$, and value $\infty$ on its boundary. This would guarantee that the iterates do not leave the domain $\{x : g(x) < 0\}$ provided the minimization problem is started at an interior point. However, this discontinuity poses serious difficulties for any computational procedure. Therefore, this ideal construction of $b$ is replaced by the more realistic requirement that $b$ be nonnegative and continuous over the region $\{x : g(x) < 0\}$ and approach infinity as the boundary is approached from the interior:
$$b(x) := \sum_{i=1}^{n_g} \phi\big(g_i(x)\big),$$
where $\phi$ is a continuous function over $\{z : z < 0\}$ that satisfies the conditions
$$\phi(z) \ge 0 \ \text{if } z < 0, \qquad \lim_{z \to 0^-} \phi(z) = +\infty.$$
In particular, $\mu\, b$ approaches the ideal barrier function described above as $\mu$ approaches zero.
A typical barrier function is the inverse barrier,
$$\phi(z) = -\frac{1}{z}.$$
The following barrier function, known as Frisch's logarithmic barrier function, is also widely used:
$$\phi(z) = -\ln(-z).$$
Actually, Frisch's logarithmic barrier does not satisfy the nonnegativity requirement for $z < -1$. However, the requirement on $\phi$ can be relaxed: it is sufficient that $\phi$ be positive close to the boundary $z = 0$.
The function $f(x) + \mu\, b(x)$ is referred to as the auxiliary function.
Given $\mu > 0$, evaluating $\theta(\mu) := \min\{f(x) + \mu\, b(x) : g(x) < 0,\ x \in X\}$ seems no simpler than solving the original problem because of the constraint $g(x) < 0$. However, starting the optimization from a point in the region $\{x : g(x) < 0\}$ yields an optimal point in that region, even when the constraint is ignored. This is because $\mu\, b$ approaches infinity as the iterates approach the boundary of $\{x : g(x) < 0\}$ from within, hence preventing them from leaving the set. This is formalized in the following:
Theorem 2.23
Consider an NLP problem with inequality constraints, $\min\{f(x) : g(x) \le 0,\ x \in X\}$, where $f$ and $g$ are continuous functions on $\mathbb{R}^n$ and $X$ is a nonempty closed set in $\mathbb{R}^n$. Suppose that the minimization problem has an optimal solution $\bar{x}$ with the property that, given any neighborhood $N$ of $\bar{x}$, there exists an $x \in X \cap N$ such that $g(x) < 0$. Suppose further that for each $\mu > 0$, there exists a solution $x_\mu$ to the problem $\min\{f(x) + \mu\, b(x) : g(x) < 0,\ x \in X\}$. Then,
$$\min\{f(x) : g(x) \le 0,\ x \in X\} = \lim_{\mu \to 0^+} \theta(\mu) = \inf_{\mu > 0} \theta(\mu),$$
with $\theta(\mu) := f(x_\mu) + \mu\, b(x_\mu)$. Furthermore, the limit of any convergent subsequence of $\{x_\mu\}$ is an optimal solution to the original problem, and $\mu\, b(x_\mu) \to 0$ as $\mu \to 0^+$.
Proof. See [6, Theorem 9.4.3] for a proof. □
Under the conditions that (i) $f$ and $g$ are continuously differentiable, and (ii) $\bar{x}$ is a regular point, the solution $x_\mu$ to the barrier problem can be used to recover the Lagrange multipliers associated with the constraints at optimality. In the particular case of the inverse barrier $\phi(z) = -1/z$, we get:
$$\nu_i^\mu = \frac{\mu}{g_i(x_\mu)^2}.$$
The approximation of the Lagrange multipliers gets better as $\mu$ gets closer to $0$: $\nu^\mu \to \bar{\nu}$ as $\mu \to 0^+$.
Example 2.24. Consider the problem to minimize a convex objective function $f$ subject to a single linear inequality constraint $g(x) \le 0$, whose solution lies at a boundary point $\bar{x}$, i.e., $g(\bar{x}) = 0$. Now, consider the barrier problem to minimize the auxiliary function $f(x) - \mu / g(x)$, where $\mu$ is a small positive number. Note first that, for any $\mu > 0$, the auxiliary function is convex on $\{x : g(x) < 0\}$ (indeed, $1/g$ is concave on this convex set, therefore $-1/g$ is convex there, and the auxiliary function is a sum of convex functions). Thus, a necessary and sufficient condition for optimality is that the gradient of the auxiliary function be equal to zero; solving this equation yields the minimizer $x_\mu$, with $g(x_\mu) < 0$, and one finds that $x_\mu \to \bar{x}$ as $\mu \to 0^+$. Thus, the solution of the barrier problem can be made arbitrarily close to the solution of the original problem by choosing $\mu$ sufficiently close to zero. Moreover, the optimal auxiliary function value $f(x_\mu) - \mu / g(x_\mu)$ can also be made arbitrarily close to $f(\bar{x})$ by taking $\mu$ sufficiently close to zero. These considerations are illustrated in Fig. 1.19. below. Regarding the Lagrange multiplier associated with the inequality constraint, the objective and constraint functions being continuously differentiable, an approximate value can be obtained as $\nu^\mu = \mu / g(x_\mu)^2$. Here again, the exact value of the Lagrange multiplier is obtained for each $\mu$, because $g$ is a linear constraint.
The use of barrier functions for solving constrained NLP problems also faces several computational difficulties. First, the search must start with a point $x_0$ such that $g(x_0) < 0$, and finding such a point may not be an easy task for some problems. Also, because of the structure of the barrier function $b$, and for small values of the parameter $\mu$, most search techniques may face serious ill-conditioning and difficulties with round-off errors while minimizing $f(x) + \mu\, b(x)$ over $\{x : g(x) < 0\}$, especially as the boundary of the region is approached. Accordingly, interior-point algorithms employ a sequence of decreasing barrier parameters $\mu_k \to 0$ as $k \to \infty$; with each new value $\mu_k$, an optimal solution to the barrier problem is sought by starting from the previous optimal solution. As in the exterior penalty function approach, it is highly recommended to use suitable second-order Newton or quasi-Newton methods for solving the successive barrier problems.
We describe below a scheme using barrier functions for optimizing a NLP problem.
Algorithm 2.4
Initialization Step:
Let $\varepsilon > 0$ be a termination scalar, and choose an initial point $x_0$ with $g(x_0) < 0$. Let $\mu_0 > 0$, $\beta \in (0, 1)$, $k = 0$, and go to the main step.
Main Step:
1. Starting from $x_k$, find a solution $x_{k+1} \in \arg\min\{f(x) + \mu_k\, b(x) : g(x) < 0,\ x \in X\}$.
2. If $\mu_k\, b(x_{k+1}) < \varepsilon$, stop. Otherwise, let $\mu_{k+1} = \beta\, \mu_k$, replace $k \leftarrow k + 1$, and go to step 1.
Note that, although the constraint $g(x) < 0$ may be ignored in principle, it is considered in the problem formulation because most line search methods use discrete steps, and a step could lead to a point outside the feasible region (where the value of the barrier function is a large negative number) when close to the boundary. Therefore, the barrier problem can effectively be treated as an unconstrained optimization problem only if an explicit check for feasibility is made.
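The following sketch illustrates such a scheme with Frisch's logarithmic barrier, including the explicit feasibility check mentioned above (the derivative-free inner solver and all tolerances are our own assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def barrier_method(f, g_list, x0, mu=1.0, beta=0.1, eps=1e-8):
    """Interior-point scheme in the spirit of Algorithm 2.4.
    x0 must be strictly feasible: g(x0) < 0 for every constraint."""
    def b(x):
        gx = np.array([g(x) for g in g_list])
        if np.any(gx >= 0.0):
            return np.inf              # explicit feasibility check (see text)
        return -np.sum(np.log(-gx))    # Frisch's logarithmic barrier
    x = np.asarray(x0, dtype=float)
    while mu > eps:
        aux = lambda z: f(z) + mu * b(z)
        x = minimize(aux, x, method="Nelder-Mead").x   # warm start, derivative-free
        mu *= beta                     # decrease the barrier parameter
    return x
```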
In recent years, there has been much excitement because some variants of interior-point algorithms can be shown to run in polynomial time for many classes of convex programs. Moreover, interior-point codes are now proving to be highly competitive with codes based on other algorithms, such as the SQP algorithms presented subsequently. A number of free and commercial interior-point solvers are given in Tab. 1.1. below.
Sequential quadratic programming (SQP) methods, also known as successive, or recursive, quadratic programming, employ Newton’s method (or quasi-Newton methods) to directly solve the KKT conditions for the original problem. As a result, the accompanying subproblem turns out to be the minimization of a quadratic approximation to the Lagrangian function subject to a linear approximation to the constraints. Hence, this type of process is also known as a projected Lagrangian, or the Newton-Lagrange, approach. By its nature, this method produces both primal and dual (Lagrange multiplier) solutions.
To present the concept of SQP, consider first the equality constrained nonlinear program:
$$\min_x\ f(x) \quad \text{s.t.} \quad h(x) = 0,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^n \to \mathbb{R}^{n_h}$ are twice continuously differentiable. We shall also assume throughout that the constraint gradients $\nabla h_1(\bar{x}), \ldots, \nabla h_{n_h}(\bar{x})$ are linearly independent at a solution $\bar{x}$. (The extension for including inequality constraints is considered subsequently.)
By Theorem 2.13, the first-order necessary conditions of optimality for an equality constrained NLP require a primal solution $\bar{x}$ and a Lagrange multiplier vector $\bar{\lambda}$ such that:
$$0 = \nabla_x L(\bar{x}, \bar{\lambda}) = \nabla f(\bar{x}) + \nabla h(\bar{x})\, \bar{\lambda}, \qquad 0 = \nabla_\lambda L(\bar{x}, \bar{\lambda}) = h(\bar{x}),$$
where $L(x, \lambda) := f(x) + \lambda^\top h(x)$ and $\nabla h := [\nabla h_1 \cdots \nabla h_{n_h}]$. Now, consider a Newton-like method to solve this system of nonlinear equations. Given an iterate $(x_k, \lambda_k)$, a new iterate $(x_{k+1}, \lambda_{k+1})$ is obtained by solving the first-order approximation:
$$\begin{pmatrix} \nabla_{xx}^2 L(x_k, \lambda_k) & \nabla h(x_k) \\ \nabla h(x_k)^\top & 0 \end{pmatrix} \begin{pmatrix} x_{k+1} - x_k \\ \lambda_{k+1} - \lambda_k \end{pmatrix} = -\begin{pmatrix} \nabla_x L(x_k, \lambda_k) \\ h(x_k) \end{pmatrix}.$$
Substituting $\nabla_x L(x_k, \lambda_k) = \nabla f(x_k) + \nabla h(x_k)\, \lambda_k$, the first row of the system of equations is:
$$\nabla_{xx}^2 L(x_k, \lambda_k)\,(x_{k+1} - x_k) + \nabla h(x_k)\,(\lambda_{k+1} - \lambda_k) = -\nabla f(x_k) - \nabla h(x_k)\, \lambda_k.$$
The terms involving the current multiplier $\lambda_k$ cancel, so that only $\lambda_{k+1}$ remains:
$$\nabla_{xx}^2 L(x_k, \lambda_k)\,(x_{k+1} - x_k) + \nabla h(x_k)\, \lambda_{k+1} = -\nabla f(x_k).$$
The second row of the system of equations is independent of the Lagrange multipliers; it is:
$$\nabla h(x_k)^\top (x_{k+1} - x_k) = -h(x_k).$$
Therefore, the system can be simplified as:
$$\begin{pmatrix} \nabla_{xx}^2 L(x_k, \lambda_k) & \nabla h(x_k) \\ \nabla h(x_k)^\top & 0 \end{pmatrix} \begin{pmatrix} d_k \\ \lambda_{k+1} \end{pmatrix} = -\begin{pmatrix} \nabla f(x_k) \\ h(x_k) \end{pmatrix},$$
where we have defined the Newton step $d_k := x_{k+1} - x_k$. The simplified system can be solved for $(d_k, \lambda_{k+1})$ if a solution exists. Setting $x_{k+1} = x_k + d_k$, and incrementing $k$ by $1$, we can then repeat the process until $d_k = 0$ happens to solve the linear system. When this occurs, if at all, we shall have found a stationary point to the equality constrained NLP.
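One Newton-Lagrange step thus amounts to assembling and solving the above KKT system, for instance as follows (a bare sketch; the argument names are our own, and no safeguards are included):

```python
import numpy as np

def newton_lagrange_step(grad_f, jac_h, hess_L, h_val):
    """Solve the KKT system  [W A; A' 0] [d; lam] = -[grad_f; h]  for the
    Newton step d and the new multipliers lam, where W is the Hessian of the
    Lagrangian and A = [grad h_1 ... grad h_m] (an n-by-m matrix)."""
    n, m = grad_f.size, h_val.size
    K = np.block([[hess_L, jac_h],
                  [jac_h.T, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f, h_val])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]            # d_k, lambda_{k+1}
```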
Interestingly enough, a quadratic programming (QP) minimization subproblem can be employed in lieu of the foregoing linear system to find any optimal solution for the linearized NLP problem. Given $(x_k, \lambda_k)$, we define the subproblem $\mathrm{QP}(x_k, \lambda_k)$ as:
$$\min_d\ \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top \nabla_{xx}^2 L(x_k, \lambda_k)\, d \quad \text{s.t.} \quad h(x_k) + \nabla h(x_k)^\top d = 0.$$
First, note that an optimum $d_k$ to $\mathrm{QP}(x_k, \lambda_k)$, if it exists, together with the set of Lagrange multipliers $\lambda_{k+1}$ associated with the linearized constraints, satisfies the linearized KKT conditions above. Second, the objective function of $\mathrm{QP}(x_k, \lambda_k)$ represents not just a quadratic approximation of $f$; through the Hessian of the Lagrangian, it also incorporates an additional term to represent the curvature of the constraints.
Algorithm 2.5: SQP
Initialization Step:
Choose an initial primal/dual point $(x_0, \lambda_0)$, let $k = 0$, and go to the main step.
Main Step:
1. Solve the subproblem $\mathrm{QP}(x_k, \lambda_k)$ for a direction $d_k$ and associated multipliers $\lambda_{k+1}$.
2. If $d_k = 0$, stop: $(x_k, \lambda_{k+1})$ is a stationary point. Otherwise, set $x_{k+1} = x_k + d_k$, replace $k \leftarrow k + 1$, and go to step 1.
In the case where $\bar{x}$ is a regular stationary solution for the NLP problem which, together with a set of Lagrange multipliers $\bar{\lambda}$, satisfies the second-order sufficiency conditions of Theorem 2.16, the matrix
$$\begin{pmatrix} \nabla_{xx}^2 L(\bar{x}, \bar{\lambda}) & \nabla h(\bar{x}) \\ \nabla h(\bar{x})^\top & 0 \end{pmatrix}$$
can be shown to be nonsingular. Hence, the above rudimentary SQP algorithm exhibits a quadratic rate of convergence by Theorem 2.21.
We now consider the inclusion of inequality constraints in the NLP problem, that is:
$$\min_x\ f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ h(x) = 0,$$
where the functions $g_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, n_g$, are twice continuously differentiable.
Given an iterate $(x_k, \lambda_k, \nu_k)$, where $\lambda_k$ and $\nu_k \ge 0$ are the Lagrange multiplier estimates for the equality and inequality constraints, respectively, consider the following QP subproblem as a direct extension of $\mathrm{QP}(x_k, \lambda_k)$:
$$\min_d\ \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top \nabla_{xx}^2 L(x_k, \lambda_k, \nu_k)\, d \quad \text{s.t.} \quad h(x_k) + \nabla h(x_k)^\top d = 0, \quad g(x_k) + \nabla g(x_k)^\top d \le 0,$$
where $L(x, \lambda, \nu) := f(x) + \lambda^\top h(x) + \nu^\top g(x)$. Note that the KKT conditions for this subproblem require that, in addition to primal feasibility, Lagrange multipliers $\lambda_{k+1}$, $\nu_{k+1}$ be found such that:
$$\nabla f(x_k) + \nabla_{xx}^2 L(x_k, \lambda_k, \nu_k)\, d_k + \nabla h(x_k)\, \lambda_{k+1} + \nabla g(x_k)\, \nu_{k+1} = 0, \qquad \left[g(x_k) + \nabla g(x_k)^\top d_k\right]^\top \nu_{k+1} = 0,$$
with $\nu_{k+1} \ge 0$ and $\lambda_{k+1}$ unrestricted in sign. Clearly, if $d_k = 0$, then $x_k$, together with $(\lambda_{k+1}, \nu_{k+1})$, yields a KKT solution to the original NLP problem. Otherwise, we set $x_{k+1} = x_k + d_k$ as before, increment $k$ by $1$, and repeat the process. Regarding the convergence rate, it can be shown that if $\bar{x}$ is a regular KKT solution which, together with $(\bar{\lambda}, \bar{\nu})$, satisfies the second-order sufficient conditions of Theorem 2.18, and if $(x_0, \lambda_0, \nu_0)$ is initialized sufficiently close to $(\bar{x}, \bar{\lambda}, \bar{\nu})$, then the foregoing iterative procedure shall exhibit a quadratic convergence rate.
The SQP method, as presented thus far, obviously shares the disadvantages of Newton’s method: (i) it requires second-order derivatives to be calculated, which in addition might not be positive definite, and (ii) it lacks the global convergence property.
(i) A remedy to the first deficiency is to substitute a quasi-Newton approximation $B_k$, updated e.g. by the BFGS formula, for the Hessian matrix $\nabla_{xx}^2 L(x_k, \lambda_k, \nu_k)$ in the QP subproblem. It can be shown that this modification to the rudimentary SQP algorithm, similar to the quasi-Newton modification of Newton's algorithm, loses the quadratic convergence rate property. Instead, it can be shown that the convergence is superlinear when initialized sufficiently close to a solution that satisfies both regularity and second-order sufficiency conditions. However, this superlinear convergence rate strongly relies on the use of unit step sizes (see point (ii) below).
(ii) A remedy to the second deficiency is to perform a line search on a merit function, which measures progress towards the constrained solution. A popular choice is the $\ell_1$ merit function
$$\ell_1(x; \mu) := f(x) + \mu \left[ \sum_{j=1}^{n_h} |h_j(x)| + \sum_{i=1}^{n_g} \max\{0,\, g_i(x)\} \right],$$
which satisfies the important property that $\bar{x}$ is a local minimizer of $\ell_1(\cdot; \mu)$, provided $(\bar{x}, \bar{\lambda}, \bar{\nu})$ satisfies the second-order sufficient conditions (see Theorem 2.18) and the penalty parameter $\mu$ is so chosen that $\mu > |\bar{\lambda}_j|$, $j = 1, \ldots, n_h$, and $\mu > \bar{\nu}_i$, $i = 1, \ldots, n_g$. Yet, the merit function $\ell_1$ is not differentiable at those $x$ with either $g_i(x) = 0$ or $h_j(x) = 0$, and it can be unbounded below even though $\bar{x}$ is a local minimizer.
Another popular choice is the augmented Lagrangian merit function
$$\ell_A(x; \lambda, \nu; \mu) := f(x) + \lambda^\top h(x) + \frac{\mu}{2}\, \|h(x)\|_2^2 + \frac{1}{2\mu} \sum_{i=1}^{n_g} \left( \max\{0,\, \nu_i + \mu\, g_i(x)\}^2 - \nu_i^2 \right),$$
with $\mu > 0$, which has similar properties to the $\ell_1$ merit function, provided $\mu$ is chosen large enough, and is continuously differentiable (although its Hessian matrix is discontinuous). Yet, for $\bar{x}$ to be a (local) minimizer of $\ell_A(\cdot; \lambda, \nu; \mu)$, it is necessary that $\lambda = \bar{\lambda}$ and $\nu = \bar{\nu}$.
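Evaluating the $\ell_1$ merit function inside the line search is straightforward; a minimal sketch (names illustrative):

```python
import numpy as np

def l1_merit(f, g_list, h_list, x, mu):
    """l1 merit function: f(x) + mu * (sum_j |h_j(x)| + sum_i max(0, g_i(x)))."""
    return (f(x)
            + mu * (sum(abs(h(x)) for h in h_list)
                    + sum(max(0.0, g(x)) for g in g_list)))
```

Within an SQP iteration, the step-length $\alpha_k$ is then accepted on the basis of a sufficient decrease of this merit function along $d_k$, in the same spirit as Armijo's rule for unconstrained problems.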
An SQP algorithm including the modifications discussed in (i) and (ii) is as follows:
Algorithm 2.6: Improved SQP
Initialization Step:
Choose an initial primal/dual point $(x_0, \lambda_0, \nu_0)$, with $\nu_0 \ge 0$, and a symmetric, positive definite matrix $B_0$. Let $k = 0$, and go to the main step.
Main Step:
1. Solve the QP subproblem, with $B_k$ substituted for $\nabla_{xx}^2 L(x_k, \lambda_k, \nu_k)$, for a direction $d_k$ and multipliers $(\lambda_{k+1}, \nu_{k+1})$. If $d_k = 0$, stop.
2. Determine a step-length $\alpha_k$ by a line search on a merit function, and set $x_{k+1} = x_k + \alpha_k\, d_k$.
3. Update the matrix $B_{k+1}$ by a quasi-Newton formula (e.g., BFGS), replace $k \leftarrow k + 1$, and go to step 1.
What Can Go Wrong?
The material presented up to this point was intended to give the reader an understanding of how SQP methods should work. Things do not go so smoothly in practice though. We now discuss a number of common difficulties that can be encountered, and suggest remedial actions to correct the deficiencies. Because real applications may involve more than a single difficulty, the user must be prepared to correct all problems before obtaining satisfactory performance from an optimization software.
One of the most common difficulties occurs when the NLP problem has infeasible constraints, i.e., the constraints taken all together have no solution. Applying general-purpose SQP software to such problems typically produces one or more characteristic failure symptoms.
Although robust SQP software attempts to diagnose this situation, ultimately the only remedy is to reformulate the NLP problem.
In contrast to the previous situation, it is possible for the constraints to be consistent, but for the Jacobian matrix of the active constraints, at the solution point, to be either ill-conditioned or rank-deficient. This situation was illustrated in Examples 1.43 and 1.55. The application of general-purpose SQP software is then likely to produce symptoms resembling those of inconsistent constraints.
Note that many symptoms of rank-deficient constraints are the same as those of inconsistent constraints. It is therefore quite common to confuse this deficiency with inconsistent constraints. Again, the remedy is to reformulate the problem.
A third type of difficulty occurs when the NLP problem contains redundant constraints. Two types of redundancy may be distinguished. In the first type, some of the constraints are unnecessary to the problem formulation, which typically degrades the efficiency of the algorithm without causing outright failure.
In the second type, the redundant constraints give rise to rank deficiency, and the problem then exhibits symptoms similar to the rank-deficient case discussed previously. Obviously, the remedy is to reformulate the problem by eliminating the redundant constraints.
Perhaps the biggest obstacle encountered in the practical application of SQP methods (as well as many other NLP methods, including SUM techniques) is the presence of discontinuous behavior. All of the numerical methods described herein assume continuous and differentiable objective function and constraints. Yet, there are many common examples of discontinuous functions in practice, including: IF tests in codes; absolute value, $\min$, and $\max$ functions; linear interpolation of data; internal iterations such as root finding; etc.
Regarding SQP methods, the standard QP subproblems are no longer appropriate when discontinuities are present. In fact, the KKT necessary conditions simply do not apply! The most common symptom of discontinuous functions is an erratic iteration sequence, with the solver stalling or terminating prematurely near a point of discontinuity.
The remedy consists in reformulating discontinuous problems into smooth problems: for absolute value, min, and max functions, tricks can be used that introduce slack variables and additional constraints; linear data interpolation can be replaced by higher order interpolation schemes that are continuous through second derivatives; internal iterations can also be handled via additional NLP constraints; etc.
Any SQP code requires that the user supply the objective function and constraint values, as well as their gradients (and possibly their Hessians too). In general, the user is offered the option to calculate the gradients via finite differences, e.g.,
$$\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x + \delta_i e_i) - f(x)}{\delta_i},$$
where $e_i$ denotes the $i$-th unit vector.
However, this may cause the algorithm to stop prematurely. First of all, the choice of the perturbations $\delta_i$ is highly nontrivial: while too large a value clearly provides inaccurate estimates, too small a value may also result in very bad estimates due to finite-precision arithmetic. Therefore, one must try to find a trade-off between these two extreme situations. The difficulty stems from the fact that a trade-off may not necessarily exist if the requested accuracy for the gradient is too high. In other words, the error made in the finite-difference approximation of a gradient cannot be made as small as desired. Further, the maximum accuracy that can be achieved with finite differences is both problem dependent (e.g., badly-scaled functions are more problematic than well-scaled functions) and machine dependent (e.g., double-precision computations provide more accurate estimates than single-precision computations). Typical symptoms of inaccurate gradient estimates in an SQP code are a degraded rate of convergence and, worse, failure to converge at all.
The situation can be understood as follows. Assume that the gradient estimate is contaminated with noise, so that, instead of computing the true value $\nabla f(x)$, we get $\nabla f(x) + \epsilon$. But since the iteration seeks a point such that $\nabla f(x) = 0$, we can expect either a degraded rate of convergence or, worse, no convergence at all, because ultimately the gradient will be dominated by the noise.
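The trade-off between truncation and round-off error can be made explicit in a small sketch (the $\sqrt{\varepsilon_{\mathrm{mach}}}$ heuristic below is the standard rule of thumb for forward differences on well-scaled functions):

```python
import numpy as np

def fd_gradient(f, x, delta=None):
    """Forward-difference gradient estimate. The truncation error grows like
    O(delta) while the round-off error grows like O(eps_mach / delta), so the
    total error is roughly minimized for delta ~ sqrt(eps_mach)."""
    x = np.asarray(x, dtype=float)
    if delta is None:
        delta = np.sqrt(np.finfo(float).eps)   # about 1.5e-8 in double precision
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        g[i] = (f(x + e) - fx) / delta
    return g
```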
To avoid these problems, the user should always consider providing the gradients explicitly to the SQP solver, instead of relying on finite-difference estimates. For large-scale problems, this is obviously a time-consuming and error-prone task. In response to this, efficient algorithmic differentiation tools (also called automatic differentiation) have been developed within the last fifteen years. The idea behind them is that, given a piece of program calculating a number of function values (e.g., in fortran77 or C language), an auxiliary program is generated that calculates the derivatives of these functions.
Scaling affects everything! Poor scaling can make a good algorithm behave badly. Scaling changes the convergence rate, termination tests, and numerical conditioning.
The most common way of scaling a problem is by introducing scaled variables of the form
$$\tilde{x}_i := \frac{x_i - s_i}{w_i},$$
for $i = 1, \ldots, n$, with $w_i$ and $s_i$ being scale weights and shifts, respectively. Likewise, the objective function and constraints are commonly scaled using
$$\tilde{f} := \omega_0\, f, \qquad \tilde{g}_i := \omega_i\, g_i,$$
for $i = 1, \ldots, n_g$. The idea is to let the optimization algorithm work with the well-scaled quantities in order to improve performance. However, what well-scaled quantities mean is hard to define, although conventional wisdom suggests keeping the scaled variables, the scaled function values, and the resulting gradients all of comparable magnitude, say of order one, in the region of interest.
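A minimal sketch of such an affine change of variables, wrapping an objective so that a solver only ever sees the scaled quantities (the weights and shifts would in practice be chosen from typical variable ranges; all names here are illustrative):

```python
import numpy as np

def make_scaled_objective(f, w, s, omega=1.0):
    """Return f_tilde(x_tilde) = omega * f(w * x_tilde + s), together with
    maps between original and scaled variables (x_tilde = (x - s) / w)."""
    w = np.asarray(w, dtype=float)
    s = np.asarray(s, dtype=float)
    f_scaled = lambda xt: omega * f(w * xt + s)            # unscale, then evaluate
    to_scaled = lambda x: (np.asarray(x, dtype=float) - s) / w
    from_scaled = lambda xt: w * np.asarray(xt, dtype=float) + s
    return f_scaled, to_scaled, from_scaled
```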