We consider the infinite-horizon, discrete-time, full-information control problem. Motivated by learning theory, we focus on regret as the criterion for controller design. In the full-information setting, there is a unique optimal non-causal controller that dominates all other controllers. The regret-optimal controller is the sum of the classical $H_2$ state-feedback law and a finite-dimensional controller obtained from the Nehari problem. The controller construction requires only the solution of the standard LQR Riccati equation, in addition to two Lyapunov equations. Simulations over a range of plants demonstrate that the regret-optimal controller interpolates nicely between the $H_2$ and the $H_\infty$ optimal controllers, and generally has $H_2$ and $H_\infty$ costs that are simultaneously close to their optimal values. The regret-optimal controller thus presents itself as a viable option for control system design.
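
To make the construction above concrete, the following Python sketch computes the classical $H_2$ (LQR) state-feedback law that forms one part of the regret-optimal controller, and illustrates the kind of Riccati and Lyapunov solves the construction relies on. The plant matrices (A, B, Q, R) and the Lyapunov right-hand sides are illustrative assumptions, not the paper's actual equations; the Nehari-based component itself is not reproduced here.

import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Assumed example plant x_{t+1} = A x_t + B u_t + w_t with quadratic cost.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)      # state cost weight
R = np.eye(1)      # input cost weight

# Standard LQR Riccati equation -> H_2 state-feedback gain K (u_t = -K x_t).
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)
A_cl = A - B @ K   # closed-loop dynamics under the H_2 law

# The regret-optimal controller adds a finite-dimensional term obtained from a
# Nehari problem; its construction involves two Lyapunov equations. The
# right-hand sides below are placeholders showing the solver call, not the
# paper's formulas.
X = solve_discrete_lyapunov(A_cl, np.eye(2))
Y = solve_discrete_lyapunov(A_cl.T, np.eye(2))

print("LQR gain K:\n", K)
print("Lyapunov solutions computed with shapes", X.shape, Y.shape)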

Author(s) : Oron Sabag, Gautam Goel, Sahin Lale, Babak Hassibi

Links : PDF - Abstract

Code :

https://github.com/nhynes/abc


Keywords : controller - optimal - regret - h - control
