Over the past sixty years, a great body of knowledge has been built up on the subject of feedback control systems for linear, time-invariant, single-input, single-output dynamic systems. This knowledge plays an important role in our technology. The primary aim of a designer using classical control design methods is to stabilize the system or plant, while secondary aims may include achieving a certain transient response, bandwidth, disturbance rejection, steady-state error, and robustness to plant variations or uncertainties. The designer’s methods are a combination of analytical techniques (e.g., the Laplace transform, the Routh test), graphical techniques (e.g., Nyquist plots, Nichols charts), and a good deal of empirically based knowledge (e.g., a certain class of compensator works satisfactorily for a certain class of plants).
These techniques for the analysis and design of linear, time-invariant systems are, in general, not applicable to more complicated systems: higher-order systems, multiple-input systems, or systems that do not possess the properties usually assumed in the classical control approach (linearity and time invariance).
One of the main aims of modern control is to address a much wider class of control problems than classical control can tackle, by providing an array of analytical design procedures that facilitate the design task.
Optimal control is one particular branch of modern control that sets out to design systems that are not merely stable, but that also satisfy, in the best possible way, the kinds of performance requirements associated with classical control. The optimal control problem can then be stated as: “Determine the control signals that will cause a system to satisfy the physical constraints and, at the same time, minimize (or maximize) some performance criterion.”
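In mathematical terms, this statement is commonly formalized as follows (a generic Bolza-form sketch; the symbols and the particular choice of constraint sets are illustrative, not specific to any application):

\begin{align*}
\min_{u(\cdot)} \quad & J(u) = \phi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} \ell\bigl(x(t),u(t),t\bigr)\,\mathrm{d}t \\
\text{subject to} \quad & \dot{x}(t) = f\bigl(x(t),u(t),t\bigr), \qquad x(t_0) = x_0, \\
& u(t) \in U, \quad x(t) \in X, \qquad t \in [t_0,t_f],
\end{align*}

where $x$ denotes the state, $u$ the control, $f$ the system dynamics, $\phi$ a terminal cost, $\ell$ a running cost, and $U$, $X$ the control and state constraint sets, respectively.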
After more than three hundred years of evolution of the calculus of variations, optimal control has developed into a well-established research area and finds application in many scientific fields, ranging from mathematics and engineering to the biomedical and management sciences. As with modern control theory in general, despite the development of a vast body of knowledge and despite some spectacular applications to practical situations, some of the theory has yet to find application, and many practical control problems have yet to find a theory that will successfully deal with them.
In particular, from the theoretical point of view, the problem of the existence of an optimal control remains unsolved for many classes of problems. Note that settling this question is of crucial importance, since it does not make much sense to seek a solution if none exists.
Another important topic is to actually find an optimal control, i.e., to give a ‘recipe’ for operating the system in such a way that it satisfies the constraints in an optimal manner. Such a recipe should start from necessary and/or sufficient conditions for optimality, which are instrumental in identifying a small class of candidates for an optimal control. Unfortunately, these conditions are not available for some classes of problems.
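For illustration, the best-known necessary conditions are those of Pontryagin’s minimum principle. For the problem sketched above, in the absence of state constraints (an assumption made here for simplicity), one introduces the Hamiltonian

\[ H(x,u,\lambda,t) = \ell(x,u,t) + \lambda^{\mathsf{T}} f(x,u,t), \]

and an optimal pair $(x^\ast, u^\ast)$ must admit a costate trajectory $\lambda$ such that

\begin{align*}
\dot{\lambda}(t) &= -\frac{\partial H}{\partial x}\bigl(x^\ast(t),u^\ast(t),\lambda(t),t\bigr), \qquad \lambda(t_f) = \frac{\partial \phi}{\partial x}\bigl(x^\ast(t_f)\bigr), \\
u^\ast(t) &\in \arg\min_{u \in U} H\bigl(x^\ast(t),u,\lambda(t),t\bigr), \qquad t \in [t_0,t_f].
\end{align*}

Conditions of this type reduce the search for an optimal control to the candidates satisfying a two-point boundary value problem.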
Finally, once a good candidate for an optimal control has been singled out, it has to be computed. As most real-world problems are too complex to allow for an analytical solution, computational algorithms are indispensable in solving optimal control problems. As a result, several successful families of algorithms have been developed over the years. Research is still ongoing, both on speeding up the codes and on handling special constraints (e.g., integer constraints).
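As a concrete illustration of the direct (‘discretize-then-optimize’) family of methods, the sketch below solves a toy problem by single shooting: the control is parameterized as piecewise constant, the dynamics are integrated by explicit Euler, and the resulting finite-dimensional problem is passed to a generic nonlinear programming solver. The toy problem itself (a scalar integrator with a quadratic cost) and all names in the code are illustrative assumptions, not taken from the text.

import numpy as np
from scipy.optimize import minimize

# Illustrative toy problem (an assumption for this sketch, not from the text):
#   minimize  J(u) = integral_0^T ( x(t)^2 + u(t)^2 ) dt
#   subject to x'(t) = u(t),  x(0) = 1.
# Direct single shooting: parameterize u as piecewise constant on N intervals,
# integrate the dynamics with explicit Euler, and solve the resulting
# finite-dimensional nonlinear program with a generic solver.

T, N = 5.0, 50          # time horizon and number of control intervals
dt = T / N
x0 = 1.0                # initial state

def cost(u):
    """Simulate the dynamics under control u and accumulate the cost."""
    x, J = x0, 0.0
    for k in range(N):
        J += (x**2 + u[k]**2) * dt   # rectangle rule for the integral cost
        x += u[k] * dt               # explicit Euler step for x' = u
    return J

res = minimize(cost, np.zeros(N), method="SLSQP")
print("optimal cost (approx.):", res.fun)
print("first few controls:", res.x[:5])

Single shooting is only the simplest member of this family; multiple shooting and collocation methods trade a larger problem size for improved robustness on longer horizons and unstable dynamics.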