The analysis and performance of numerical computations for optimal control problems is complicated by the fact that they are ill-posed. It is, for example, often the case that optimal solutions depend discontinuously on data. Moreover, the optimal control, if it exists, may be a highly non-regular function, with many points of discontinuity. On the other hand, optimal control problems are well-posed in the sense that the associated value function is well-behaved, with such properties as continuous dependence on data.
I will present an error representation for the approximation of the value function when the Symplectic Euler scheme is used to discretize the Hamiltonian system associated with the optimal control problem. The representation contains a computable error density term and a higher-order remainder term. In order to prove this, the following two facts are used:
1) The value function solves a non-linear PDE, the Hamilton-Jacobi-Bellman equation. Using this property, we take advantage of the well-posedness of the optimal control problem.
2) The Symplectic Euler scheme corresponds to the minimization of a discrete optimal control problem.
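As a minimal illustration of the scheme mentioned above, the following sketch applies Symplectic Euler to a separable Hamiltonian. The harmonic oscillator H(q, p) = p²/2 + q²/2 is only a stand-in chosen for simplicity; in the talk the Hamiltonian arises from the optimal control problem.

```python
def symplectic_euler(q0, p0, h, n_steps):
    """Advance (q, p) with the Symplectic Euler scheme for a
    separable Hamiltonian H(q, p) = p**2/2 + q**2/2:
        p_{n+1} = p_n - h * dH/dq(q_n)
        q_{n+1} = q_n + h * dH/dp(p_{n+1})
    """
    q, p = q0, p0
    for _ in range(n_steps):
        p = p - h * q   # dH/dq = q, evaluated at the old q
        q = q + h * p   # dH/dp = p, evaluated at the new p
    return q, p

# Roughly one period (2*pi) of the oscillator with a small step.
q, p = symplectic_euler(1.0, 0.0, 0.001, 6283)
```

Because the scheme is symplectic, the energy does not drift systematically over long times, which is one reason such discretizations behave well for Hamiltonian systems.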
Using the error representation, I will present an example of an adaptive algorithm and illustrate its performance with numerical tests. I will also discuss the applicability of the adaptive algorithm in cases where the Hamiltonian is non-smooth.
Mattias Sandberg received his M.S. at KTH in 2000 with a thesis on Gowdy space-time models in general relativity. In his first year of PhD studies he continued his research in this area, but soon switched to applied mathematics, and received his PhD in 2006 with a thesis on approximation of optimal control. After two years as a postdoc at Oslo University, he returned to KTH for a position as Associate Professor in 2009. He has continued his research on optimal control theory, and has also worked on numerical methods for differential inclusions.