In the last post, we showed that continuous-time LQR has “natural robustness” insofar as the optimal solution is robust to a variety of model-mismatch conditions. LQR makes the assumption that the state of the system...

Margin Walker
I want to dive into some classic results in robust control and try to relate them to our current data-driven mindset. I’m going to try to do this in a modern way, avoiding any frequency...

What We've Learned to Control
I’m giving a keynote address at the virtual IFAC congress this July, and I submitted an abstract that forces me to reflect on the current state of research at the intersection of machine learning and...

The Uncanny Valley of Virtual Conferences
We wrapped up two amazing days of L4DC 2020 last Friday. It’s pretty wild to watch this community grow so quickly: starting as a workshop at CDC 2018, the conference organizers put together an inaugural...

You Cannot Serve Two Masters: The Harms of Dual Affiliation
Facebook would like to have computer science faculty in AI committed to work 80% of their time in industrial jobs and 20% of their time at their university. They call this scheme “co-employment” or “dual...

Towards Actionable Intelligence
I’m going to close my outsider’s tour of Reinforcement Learning by announcing the release of a short survey of RL that coalesces my views from the perspectives of continuous control. Though the RL and controls...

Coarse-ID Control
This is the thirteenth part of “An Outsider’s Tour of Reinforcement Learning.” Part 14 is here. Part 12 is here. Part 1 is here. Can poor models be used in control loops and still achieve...

Lost Horizons
This is the twelfth part of “An Outsider’s Tour of Reinforcement Learning.” Part 13 is here. Part 11 is here. Part 1 is here. This series began by describing a view of reinforcement learning as...