This post reflects on the live blog of my graduate class “Convex Optimization.” A Table of Contents is here.
It’s class presentation week! No matter which graduate class I teach, my assignment is the same: I ask students to choose a project related to their research but connected to the course content. This is purposefully vague and open-ended, just as all academic research questions begin. Not only do I like seeing how students grapple with making (sometimes tenuous) connections between the class and their interests, but I also learn a lot from their interpretations of the assignment. I get insights into what students took away from the class and into the current hot research topics on campus.
The projects also gauge how well the course content connects to these current hot research directions. This was one of the main questions I set out to answer this semester, and I’m getting the opportunity to see over thirty different ways in which convex optimization can be applied in our age of LLMs.
Right now, all I have are the slides for the presentations, but I can give a superficial view of some of these connections. Unsurprisingly, the vast majority of the projects have something to do with neural networks and machine learning. A few projects use convex methods to analyze neural net systems, aiming to better understand what the neural nets are doing and how they might best be optimized. Some projects ask whether convex techniques could replace the neural methods. Others ask whether neural techniques could improve on convex methods. I like this tension! There’s no clean answer for when one will be better than the other. We too often run with what is trendy, not what is best.
Of the projects not aimed at understanding neural nets, the two dominant application areas were energy and robotics. The energy projects included studying building energy management, managing the energy use of public transportation, planning new additions to the electricity grid, and solving massive inverse problems in energy resources. Since energy management is tightly coupled to policy concerns about conservation and economics, it’s not surprising that optimization techniques play a central role.
The other dominant application area was robotics. High-level robotic planning is a persistently challenging optimization problem, and I suppose it’s unsurprising that we’re still trying to devise clever ways to solve these problems. The last week of class may have made it seem like all optimal control could be solved by simply running backpropagation. Unfortunately, in practice, the physical world gets in the way of such clean plans. The course projects propose new methods for avoiding collisions and obstructions, dealing with sensor noise, and guaranteeing safe execution. There were a lot of proposed neural methods here as well, such as training robots to mimic people and training robots to mimic convex optimization solutions.
Though energy and robotics were the dominant applications, they were by no means the only ones. I received project submissions on accelerating linear algebra, scheduling hardware, understanding neural coding, and improving variational inference. It’s good to see how convex optimization can still inspire new techniques and methods in a broad set of fields.
That’s my superficial overview of the projects, and I’m excited to get more details from the students. Later this week, I’ll share follow-up thoughts about what comes out of the lightning presentations and associated discussions.