Differential dynamic programming

{{Short description|Algorithm for trajectory optimization}}

'''Differential dynamic programming''' ('''DDP''') is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne{{Cite journal

| volume = 3

| pages = 85–95

| last = Mayne

| first = D. Q.

| author-link=David Mayne

| title = A second-order gradient method of optimizing non-linear discrete time systems

| journal = Int J Control

| year = 1966

| doi = 10.1080/00207176608921369

}} and subsequently analysed in Jacobson and Mayne's eponymous book.{{cite book|last1=Jacobson|first1=David H.|last2=Mayne|first2=David Q.|title=Differential dynamic programming|year=1970|publisher=American Elsevier Pub. Co.|location=New York|isbn=978-0-444-00070-5|url=https://books.google.com/books?id=tA-oAAAAIAAJ}} The algorithm uses locally quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.{{Cite journal

| doi = 10.1080/00207178808906114

| issn = 0020-7179

| volume = 47

| issue = 5

| pages = 1539–1553

| last = de O. Pantoja

| first = J. F. A.

| title = Differential dynamic programming and Newton's method

| journal = International Journal of Control

| year = 1988

}}{{Cite journal

| last = Liao

| first = L. Z.

|author2=C. A. Shoemaker | author2-link = Christine Shoemaker

| title = Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems

|website=Cornell University

| year = 1992

|url=https://hdl.handle.net/1813/5474

| hdl = 1813/5474 |hdl-access=free

}}

==Finite-horizon discrete-time problems==

The dynamics

{{NumBlk|:|\mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i,\mathbf{u}_i)|{{EquationRef|1}}}}

describe the evolution of the state \mathbf{x} given the control \mathbf{u} from time i to time i+1. The total cost J_0 is the sum of running costs \ell and final cost \ell_f, incurred when starting from state \mathbf{x} and applying the control sequence \mathbf{U} \equiv \{\mathbf{u}_0,\mathbf{u}_1,\dots,\mathbf{u}_{N-1}\} until the horizon is reached:

:J_0(\mathbf{x},\mathbf{U})=\sum_{i=0}^{N-1}\ell(\mathbf{x}_i,\mathbf{u}_i) + \ell_f(\mathbf{x}_N),

where \mathbf{x}_0\equiv\mathbf{x}, and the \mathbf{x}_i for i>0 are given by {{EquationNote|1|Eq. 1}}. The solution of the optimal control problem is the minimizing control sequence

:\mathbf{U}^*(\mathbf{x})\equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x},\mathbf{U}).

Trajectory optimization means finding \mathbf{U}^*(\mathbf{x}) for a particular \mathbf{x}_0, rather than for all possible initial states.
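As a minimal sketch (not taken from the cited sources), J_0 can be evaluated by rolling out the dynamics of {{EquationNote|1|Eq. 1}}; the callables <code>f</code>, <code>running_cost</code> and <code>final_cost</code> stand in for the problem-specific \mathbf{f}, \ell and \ell_f:

<syntaxhighlight lang="python">
def total_cost(x0, U, f, running_cost, final_cost):
    """Evaluate J_0(x0, U) by rolling out the dynamics x_{i+1} = f(x_i, u_i)."""
    x, J = x0, 0.0
    for u in U:                      # u_0, ..., u_{N-1}
        J += running_cost(x, u)      # accumulate ell(x_i, u_i)
        x = f(x, u)                  # advance the state (Eq. 1)
    return J + final_cost(x)         # add the final cost ell_f(x_N)
</syntaxhighlight>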

==Dynamic programming==

Let \mathbf{U}_i be the partial control sequence \mathbf{U}_i \equiv \{\mathbf{u}_i,\mathbf{u}_{i+1},\dots,\mathbf{u}_{N-1}\} and define the cost-to-go J_i as the partial sum of costs from i to N:

:J_i(\mathbf{x},\mathbf{U}_i)=\sum_{j=i}^{N-1}\ell(\mathbf{x}_j,\mathbf{u}_j) + \ell_f(\mathbf{x}_N).

The optimal cost-to-go or value function at time i is the cost-to-go given the minimizing control sequence:

:V(\mathbf{x},i)\equiv \min_{\mathbf{U}_i}J_i(\mathbf{x},\mathbf{U}_i).

Setting V(\mathbf{x},N)\equiv \ell_f(\mathbf{x}_N), the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

{{NumBlk|:|V(\mathbf{x},i)= \min_{\mathbf{u}}[\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)].|{{EquationRef|2}}}}

This is the Bellman equation.
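Purely as an illustration of {{EquationNote|2|Eq. 2}} (not of how DDP itself operates), the recursion can be evaluated exactly when states and controls are discretized onto a finite grid; the sketch below assumes a helper <code>f(x, u)</code> that returns the index of the successor state:

<syntaxhighlight lang="python">
import numpy as np

def tabular_value_function(states, controls, f, ell, ell_f, N):
    """Evaluate the Bellman recursion (Eq. 2) exactly on a discretized grid.

    `states` and `controls` are sequences of grid values; `f(x, u)` is assumed
    to return the index of the successor state within `states` (an assumption
    made purely for this tabular illustration).
    """
    V = np.array([ell_f(x) for x in states], dtype=float)  # V(x, N) = ell_f(x)
    for _ in range(N):                                      # i = N-1, ..., 0
        V = np.array([min(ell(x, u) + V[f(x, u)] for u in controls)
                      for x in states])
    return V                                                # value function at time 0
</syntaxhighlight>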

==Differential dynamic programming==

DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

:\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)

is the argument of the \min[\cdot] operator in {{EquationNote|2|Eq. 2}}, let Q be the variation of this quantity around the i-th (\mathbf{x},\mathbf{u}) pair:

:\begin{align}Q(\delta\mathbf{x},\delta\mathbf{u})\equiv &\ell(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u})&&{}+V(\mathbf{f}(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u}),i+1)

\\

-&\ell(\mathbf{x},\mathbf{u})&&{}-V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)

\end{align}

and expand it to second order:

{{NumBlk|:|

\approx\frac{1}{2}

\begin{bmatrix}

1\\

\delta\mathbf{x}\\

\delta\mathbf{u}

\end{bmatrix}^\mathsf{T}

\begin{bmatrix}

0 & Q_\mathbf{x}^\mathsf{T} & Q_\mathbf{u}^\mathsf{T}\\

Q_\mathbf{x} & Q_{\mathbf{x}\mathbf{x}} & Q_{\mathbf{x}\mathbf{u}}\\

Q_\mathbf{u} & Q_{\mathbf{u}\mathbf{x}} & Q_{\mathbf{u}\mathbf{u}}

\end{bmatrix}

\begin{bmatrix}

1\\

\delta\mathbf{x}\\

\delta\mathbf{u}

\end{bmatrix}

|{{EquationRef|3}}}}

The Q notation used here is a variant of the notation of Morimoto where subscripts denote differentiation in denominator layout.{{Cite conference

| volume = 2

| pages = 1927–1932

| last = Morimoto

| first = J. |author2=G. Zeglin |author3=C.G. Atkeson

| title = Minimax differential dynamic programming: Application to a biped walking robot

| book-title = Intelligent Robots and Systems, 2003.(IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on

| date = 2003

}}

Dropping the index i for readability, and with primes denoting the next time step, V'\equiv V(i+1), the expansion coefficients are

:

\begin{alignat}{2}

Q_\mathbf{x} &= \ell_\mathbf{x}+ \mathbf{f}_\mathbf{x}^\mathsf{T} V'_\mathbf{x} \\

Q_\mathbf{u} &= \ell_\mathbf{u}+ \mathbf{f}_\mathbf{u}^\mathsf{T} V'_\mathbf{x} \\

Q_{\mathbf{x}\mathbf{x}} &= \ell_{\mathbf{x}\mathbf{x}} + \mathbf{f}_\mathbf{x}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{x}+{V'_\mathbf{x}}\cdot\mathbf{f}_{\mathbf{x}\mathbf{x}}\\

Q_{\mathbf{u}\mathbf{u}} &= \ell_{\mathbf{u}\mathbf{u}} + \mathbf{f}_\mathbf{u}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{u}+{V'_\mathbf{x}} \cdot\mathbf{f}_{\mathbf{u} \mathbf{u}}\\

Q_{\mathbf{u}\mathbf{x}} &= \ell_{\mathbf{u}\mathbf{x}} + \mathbf{f}_\mathbf{u}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{x} + {V'_\mathbf{x}} \cdot \mathbf{f}_{\mathbf{u} \mathbf{x}}.

\end{alignat}

The last terms in the last three equations denote the contraction of a vector with a tensor. Minimizing the quadratic approximation {{EquationNote|3|(3)}} with respect to \delta\mathbf{u}, we have

{{NumBlk|:|

{\delta \mathbf{u}}^* = \operatorname{argmin}\limits_{\delta \mathbf{u}}Q(\delta \mathbf{x},\delta

\mathbf{u})=-Q_{\mathbf{u}\mathbf{u}}^{-1}(Q_\mathbf{u}+Q_{\mathbf{u}\mathbf{x}}\delta \mathbf{x}),

|{{EquationRef|4}}}}

giving an open-loop term \mathbf{k}=-Q_{\mathbf{u}\mathbf{u}}^{-1}Q_\mathbf{u} and a feedback gain term \mathbf{K}=-Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}}. Plugging the result back into {{EquationNote|3|(3)}}, we now have a quadratic model of the value at time i:

:

\begin{alignat}{2}

\Delta V(i) &= &{} -\tfrac{1}{2}Q_\mathbf{u}^\mathsf{T} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_\mathbf{u}\\

V_\mathbf{x}(i) &= Q_\mathbf{x} & {}- Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}}\\

V_{\mathbf{x}\mathbf{x}}(i) &= Q_{\mathbf{x}\mathbf{x}} &{} - Q_{\mathbf{x}\mathbf{u}}Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}}.

\end{alignat}
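The backward-pass computations above can be collected into a short sketch; the derivative arrays are assumed to have been evaluated at the nominal (\mathbf{x},\mathbf{u}) pair, and the names and shapes are illustrative rather than taken from the cited sources:

<syntaxhighlight lang="python">
import numpy as np

def backward_step(lx, lu, lxx, luu, lux, fx, fu, fxx, fuu, fux, Vx, Vxx):
    """One DDP backward-pass step at time i (illustrative sketch).

    With n = dim(x) and m = dim(u): lx is (n,), lu is (m,), lxx is (n, n),
    luu is (m, m), lux is (m, n), fx is (n, n), fu is (n, m), and the
    second-derivative tensors fxx, fuu, fux are (n, n, n), (n, m, m),
    (n, m, n). Vx, Vxx are the gradient and Hessian of V at time i+1.
    """
    # Expansion coefficients of Q; tensordot performs the vector-tensor contraction.
    Qx  = lx + fx.T @ Vx
    Qu  = lu + fu.T @ Vx
    Qxx = lxx + fx.T @ Vxx @ fx + np.tensordot(Vx, fxx, axes=1)
    Quu = luu + fu.T @ Vxx @ fu + np.tensordot(Vx, fuu, axes=1)
    Qux = lux + fu.T @ Vxx @ fx + np.tensordot(Vx, fux, axes=1)

    # Open-loop term and feedback gain from Eq. 4 (solve instead of inverting Quu).
    k = -np.linalg.solve(Quu, Qu)
    K = -np.linalg.solve(Quu, Qux)

    # Quadratic model of the value function at time i.
    dV    = 0.5 * Qu @ k        # -1/2 Qu' Quu^-1 Qu
    Vx_i  = Qx + K.T @ Qu       # Qx - Qxu Quu^-1 Qu
    Vxx_i = Qxx + Qux.T @ K     # Qxx - Qxu Quu^-1 Qux
    return k, K, dV, Vx_i, Vxx_i
</syntaxhighlight>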

Recursively computing the local quadratic models of V(i) and the control modifications \{\mathbf{k}(i),\mathbf{K}(i)\}, from i=N-1 down to i=0, constitutes the backward pass. As above, the value function is initialized with V(\mathbf{x},N)\equiv \ell_f(\mathbf{x}_N). Once the backward pass is completed, a forward pass computes a new trajectory:

:

\begin{align}

\hat{\mathbf{x}}(0)&=\mathbf{x}(0)\\

\hat{\mathbf{u}}(i)&=\mathbf{u}(i) + \mathbf{k}(i) +\mathbf{K}(i)(\hat{\mathbf{x}}(i) - \mathbf{x}(i))\\

\hat{\mathbf{x}}(i+1)&=\mathbf{f}(\hat{\mathbf{x}}(i),\hat{\mathbf{u}}(i))

\end{align}

The backward passes and forward passes are iterated until convergence.
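A matching sketch of the forward pass, with <code>xs</code> and <code>us</code> the nominal trajectory and <code>ks</code>, <code>Ks</code> the control modifications from the backward pass (again illustrative, assuming NumPy arrays):

<syntaxhighlight lang="python">
def forward_pass(xs, us, ks, Ks, f):
    """Roll out the modified controls to obtain a new nominal trajectory.

    xs, us are the nominal states and controls, ks, Ks the open-loop terms
    and feedback gains from the backward pass, and f the dynamics of Eq. 1.
    """
    xs_new, us_new = [xs[0]], []                         # x_hat(0) = x(0)
    for i in range(len(us)):
        u = us[i] + ks[i] + Ks[i] @ (xs_new[i] - xs[i])  # feedback control law
        us_new.append(u)
        xs_new.append(f(xs_new[i], u))                   # Eq. 1
    return xs_new, us_new
</syntaxhighlight>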

If the Hessians Q_{\mathbf{x}\mathbf{x}}, Q_{\mathbf{u}\mathbf{u}}, Q_{\mathbf{u}\mathbf{x}}, Q_{\mathbf{x}\mathbf{u}} are replaced by their Gauss–Newton approximation, i.e. the terms involving the second derivatives of the dynamics \mathbf{f}_{\mathbf{x}\mathbf{x}}, \mathbf{f}_{\mathbf{u}\mathbf{u}}, \mathbf{f}_{\mathbf{u}\mathbf{x}} are dropped, the method reduces to the iterative Linear Quadratic Regulator (iLQR).{{Cite conference

| pages = 1–7

| last = Baumgärtner

| first = K.

| title = A Unified Local Convergence Analysis of Differential Dynamic Programming, Direct Single Shooting, and Direct Multiple Shooting

| conference = 2023 European Control Conference (ECC)

| year = 2023

| doi = 10.23919/ECC57647.2023.10178367

| url = https://ieeexplore.ieee.org/abstract/document/10178367

| url-access = subscription

}}
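In terms of the backward-pass sketch above, this approximation amounts to dropping the terms that contract V'_\mathbf{x} with the second derivatives of the dynamics:

<syntaxhighlight lang="python">
def gauss_newton_q_terms(lxx, luu, lux, fx, fu, Vxx):
    """Gauss–Newton (iLQR) approximation of the Q Hessian blocks: the
    second derivatives of the dynamics are dropped entirely."""
    Qxx = lxx + fx.T @ Vxx @ fx
    Quu = luu + fu.T @ Vxx @ fu
    Qux = lux + fu.T @ Vxx @ fx
    return Qxx, Quu, Qux
</syntaxhighlight>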

==Monte Carlo version==

Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.{{Cite conference |title=Sampled differential dynamic programming |book-title=2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |language=en-US|doi=10.1109/IROS.2016.7759229|s2cid=1338737}}{{Cite conference|last1=Rajamäki|first1=Joose|first2=Perttu|last2=Hämäläinen|url=https://ieeexplore.ieee.org/document/8430799|title=Regularizing Sampled Differential Dynamic Programming - IEEE Conference Publication|conference=2018 Annual American Control Conference (ACC)|date=June 2018 |pages=2182–2189 |doi=10.23919/ACC.2018.8430799 |s2cid=243932441 |language=en-US|access-date=2018-10-19|url-access=subscription}}{{Cite book|first=Joose|last=Rajamäki|date=2018|title=Random Search Algorithms for Optimal Control|url=http://urn.fi/URN:ISBN:978-952-60-8156-4|language=en|issn=1799-4942|isbn=978-952-60-8156-4|publisher=Aalto University}} It treats the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. In this way the quantities of DDP can be matched to the statistics of a multivariate normal distribution, and these statistics can be recomputed from sampled trajectories without differentiation.
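A heavily simplified sketch of the sampling idea (not the SaDDP algorithm of the cited papers): controls sampled around a nominal value are weighted by a Boltzmann factor of their cost, and weighted sample statistics take the place of derivative-based quantities:

<syntaxhighlight lang="python">
import numpy as np

def sampled_control_update(cost, u0, sigma=0.1, n_samples=200, temperature=1.0, seed=0):
    """Weight controls sampled around u0 by a Boltzmann factor of their cost
    and return the weighted mean (a rough illustration only, not the SaDDP
    procedure of the cited papers)."""
    rng = np.random.default_rng(seed)
    us = u0 + sigma * rng.standard_normal((n_samples, u0.size))  # sampled controls
    costs = np.array([cost(u) for u in us])
    weights = np.exp(-(costs - costs.min()) / temperature)       # Boltzmann weights
    weights /= weights.sum()
    return weights @ us                                          # weighted mean control
</syntaxhighlight>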

Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.{{Cite book|last1=Lefebvre|first1=Tom|last2=Crevecoeur|first2=Guillaume|title=2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) |chapter=Path Integral Policy Improvement with Differential Dynamic Programming |date=July 2019|chapter-url=https://ieeexplore.ieee.org/document/8868359|pages=739–745|doi=10.1109/AIM.2019.8868359|hdl=1854/LU-8623968|isbn=978-1-7281-2493-3|s2cid=204816072|url=https://biblio.ugent.be/publication/8623968 |hdl-access=free}} This creates a link between differential dynamic programming and path integral control,{{Cite book|last1=Theodorou|first1=Evangelos|last2=Buchli|first2=Jonas|last3=Schaal|first3=Stefan|title=2010 IEEE International Conference on Robotics and Automation |chapter=Reinforcement learning of motor skills in high dimensions: A path integral approach |date=May 2010|chapter-url=https://ieeexplore.ieee.org/document/5509336|pages=2397–2403|doi=10.1109/ROBOT.2010.5509336|isbn=978-1-4244-5038-1|s2cid=15116370}} which is a framework for stochastic optimal control.

==Constrained problems==

Interior point differential dynamic programming (IPDDP) is an interior-point method generalization of DDP that can address optimal control problems with nonlinear state and input constraints.{{cite arXiv |last1=Pavlov |first1=Andrei|last2=Shames|first2=Iman|last3=Manzie|first3=Chris|date=2020 |title=Interior Point Differential Dynamic Programming |eprint=2004.12710 |class=math.OC}}

==References==

{{Reflist}}