Control (optimal control theory)
{{Short description|Variable in optimal control theory}}
In optimal control theory, a control is a variable chosen by the controller or agent to manipulate state variables, similar to an actual control valve. Unlike the state variable, it does not have a predetermined equation of motion.<ref>{{cite book |first1=Brian S. |last1=Ferguson |first2=G. C. |last2=Lim |title=Introduction to Dynamic Economic Problems |location=Manchester |publisher=Manchester University Press |year=1998 |isbn=0-7190-4996-2 |page=162 }}</ref> The goal of optimal control theory is to find a sequence of controls (within an admissible set) that yields an optimal path for the state variables (with respect to a given loss function).
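A standard continuous-time statement of the problem (the generic textbook form; notation varies across sources) is to choose the control path <math>u(t)</math> so as to maximize an objective

<math display="block">\int_0^T f\bigl(t, x(t), u(t)\bigr) \, dt</math>

subject to the equation of motion for the state variable

<math display="block">\dot{x}(t) = g\bigl(t, x(t), u(t)\bigr), \qquad x(0) = x_0,</math>

where the path of the state <math>x(t)</math> is pinned down by the chosen controls through the constraint, while <math>u(t)</math> itself may be chosen freely within the admissible set at each instant.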
A control given as a function of time only is referred to as an open-loop control. In contrast, a control that gives the optimal solution over the remaining period as a function of the state variable at the beginning of that period is called a closed-loop control.<ref>{{cite book |first1=Daniel |last1=Léonard |first2=Ngo Van |last2=Long |title=Optimal Control Theory and Static Optimization in Economics |location=New York |publisher=Cambridge University Press |year=1992 |isbn=0-521-33158-7 |page=181 |url=https://books.google.com/books?id=gSHxK5Cq4BgC&pg=PA181 }}</ref>
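The distinction can be illustrated with a minimal sketch (not drawn from the cited texts; the scalar linear-quadratic system, function names, and parameter values are all assumptions chosen for illustration). On the deterministic path the two policies coincide, but after an unmodeled shock the closed-loop rule, which conditions on the realized state, corrects course while the open-loop schedule keeps applying its stale plan.

```python
# Illustrative sketch only (not from the cited texts): open-loop versus
# closed-loop control of the scalar linear-quadratic system
#   x_{t+1} = x_t + u_t,   loss = sum over t of (x_t^2 + u_t^2).
# All function names and parameter choices here are assumptions.

def riccati_gains(horizon):
    """Backward recursion of the discrete Riccati equation with
    A = B = Q = R = 1, returning the feedback gain K_t for each period."""
    P = 1.0  # terminal value-function weight (assumed equal to Q)
    gains = []
    for _ in range(horizon):
        K = P / (1.0 + P)        # K_t = (R + B'PB)^{-1} B'PA
        P = 1.0 + P * (1.0 - K)  # P_{t-1} = Q + A'P(A - BK)
        gains.append(K)
    gains.reverse()
    return gains

def simulate(x0, horizon, disturbance):
    """Run both policies on the same (possibly disturbed) system.
    Closed-loop: u_t = -K_t x_t, a function of the realized state.
    Open-loop: the same control sequence, but computed in advance from
    x0 alone and applied regardless of what actually happens."""
    gains = riccati_gains(horizon)
    # Open-loop plan: roll the undisturbed model forward from x0.
    plan, xp = [], x0
    for K in gains:
        u = -K * xp
        plan.append(u)
        xp += u
    # Apply both policies to the system with a one-time shock.
    x_cl, x_ol = x0, x0
    loss_cl = loss_ol = 0.0
    for t, K in enumerate(gains):
        u_cl = -K * x_cl   # closed-loop: reacts to the current state
        u_ol = plan[t]     # open-loop: depends only on x0 and time t
        loss_cl += x_cl**2 + u_cl**2
        loss_ol += x_ol**2 + u_ol**2
        shock = disturbance if t == 0 else 0.0
        x_cl += u_cl + shock
        x_ol += u_ol + shock
    return loss_cl, loss_ol

# Without a shock the two policies produce identical paths and losses;
# with a shock, the closed-loop rule corrects and incurs a lower loss.
same = simulate(1.0, 10, 0.0)
hit = simulate(1.0, 10, 0.5)
print(same[0] == same[1], hit[0] < hit[1])  # → True True
```

The sketch also shows why closed-loop controls are often written as a policy (a rule mapping states to controls) rather than as a fixed time path: the gain sequence <code>K_t</code> encodes the optimal response from any state the system may reach.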