One of the most important lessons I learned as a freshman math major was that all assigned problems have been solved somewhere. Buried by Paul Sally’s Analysis class, I would spend my days digging through the stacks in the math library. Invariably, I’d find something resembling the problem I needed to solve. In the process, I’d have pulled and skimmed a dozen books. I didn’t know it at the time, but I was being taught how to do math research. It was just as important for me to learn how to find solutions in the literature as it was to learn how to recognize patterns in mathematical questions.
Of course, now you don’t have to go to the math stacks. Maybe the biggest difference about teaching Convex Optimization in the Age of LLMs is that everyone has access to chatGPT. And chatGPT can sort of solve your math homework for you.
Boyd and Vandenberghe’s book has an amazing set of exercises, and they maintain more exercises in a GitHub repo. I learned this material just by working on the exercises on my own.
But can chatGPT solve them? Problem 2.2 asks
Show that a set is convex if and only if its intersection with any line is convex.
I typed this directly into chatGPT and it gave me the correct answer. It even rendered it out in LaTeX. I was going to paste the solution here, but it’s so long and wordy. I mean, I can give you a sufficient answer succinctly in plain text:
Assume a set is convex. Then its intersection with a line is convex because lines are convex and intersection preserves convexity.
Now assume the set’s intersection with any line is convex. Take any two points in the set. The intersection of the set with the line containing those two points is convex, so it contains the line segment between the two points. Therefore the set contains that segment and is hence convex.
I’m sure you could make the answer even tighter. Maybe Damek Davis can write us this proof in Lean.
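While we wait for Damek, here’s my own untested sketch in Lean 4 against Mathlib. I’m guessing at the current lemma names (convex_iff_segment_subset, Convex.inter, AffineSubspace.convex, left_mem_affineSpan_pair), so treat it as pseudocode until somebody actually compiles it:

import Mathlib

variable {E : Type*} [AddCommGroup E] [Module ℝ E]

-- Problem 2.2: a set is convex iff its intersection with every line
-- (here, the affine span of any two points) is convex.
theorem convex_iff_inter_line (s : Set E) :
    Convex ℝ s ↔ ∀ a b : E, Convex ℝ (s ∩ (affineSpan ℝ {a, b} : Set E)) := by
  constructor
  · -- lines are convex, and intersections of convex sets are convex
    exact fun hs a b => hs.inter (affineSpan ℝ {a, b}).convex
  · -- the segment between two points of s lies in the convex set s ∩ line
    intro h
    rw [convex_iff_segment_subset]
    intro x hx y hy
    have hseg : segment ℝ x y ⊆ s ∩ (affineSpan ℝ {x, y} : Set E) :=
      (h x y).segment_subset
        ⟨hx, left_mem_affineSpan_pair ℝ x y⟩
        ⟨hy, right_mem_affineSpan_pair ℝ x y⟩
    exact fun z hz => (hseg hz).1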
Now what happens if I give the chat bot a harder question? I tried Problem 2.35, querying chatGPT with
Find the dual cone of the set { X \in \R^{n \times n} : X = X^T and z^T X z >=0 for all z >=0}
It again gave me an absurdly wordy answer. It even identified that this is the set of copositive matrices without me telling it. And it sort of looked correct. But its dual cone is completely wrong.1 Aha, I got you chatGPT! You might object that I didn’t type in the exact question. What if I gave it the exact wording, written out in LaTeX?
A matrix $X \in \mathbf{S}_n$ is called copositive if $z^T X z \geq 0$ for all $z \succeq 0$. Verify that the set of copositive matrices is a proper cone. Find its dual cone.
Now chatGPT gives me the right answer, though regurgitated in an even more verbose stream of text and symbols. I’d definitely deduct points for excessive wordiness.
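For contrast, the whole answer fits in a couple of lines of LaTeX. With $K$ the copositive cone and the trace inner product $\langle X, Y \rangle = \mathbf{tr}(XY)$ on $\mathbf{S}_n$:

$$ K = \{ X \in \mathbf{S}_n : z^T X z \geq 0 \text{ for all } z \succeq 0 \}, \qquad K^* = \Big\{ \sum_i z_i z_i^T : z_i \succeq 0 \Big\}, $$

that is, $K^*$ is the cone of completely positive matrices, the matrices of the form $BB^T$ with $B$ entrywise nonnegative. The identity doing all the work is $\langle X, zz^T \rangle = z^T X z$.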
My reaction to interactions with generative AI products like this is always the same. It’s hard not to be mind-blown by the quality of the answers that come out. These are by far the best machine learning demos ever. But the answers are still so much worse than what I get from Stack Overflow with a little more effort. And, as usual, there’s no way to check whether the answers are correct. You have to know the material to tell when it’s giving the right answer. Generative AI always seems to provide the minimal-effort path to a passing but shitty solution.
Like all other faculty out there, I’m still grappling with how to think about the cheating-at-scale enabled by chatGPT. Is this as rich an experience as what I did as an undergrad? Clearly no. On the other hand, is this a difference in kind or a difference in degree from going to the math stacks?
Only part of math research is being able to find the right answer. Another essential part is learning to see patterns so you don’t have to look up the answer as you piece together an argument. There’s a complex interaction between this pattern recognition and understanding how to engage with external literature. And, you know, there’s also the ability to know when something is correct. When I translated solutions from library books, I had to work through the logic of the proofs and know when pieces didn’t fit.
So yeah, I feel like chatGPT is a difference in kind from going to the library. But why should my 25-year-old experience of learning math dictate how you learn today? Perhaps a more relevant question is whether chatGPT is different in kind or degree from Stack Overflow? From Chegg? From Course Hero?
Maybe chatGPT is the natural end of internet searching. If you wanted to cheat by looking stuff up online, it was definitely possible before 2023. I mean, here’s a much faster path to the solutions of the two problems I gave here:
What’s the path forward with assignments and evaluations in the age of LLMs? I, like all of my colleagues, remain unsure. I guess my answer is something like this: Doing the exercises is more like learning guitar than it is taking a standardized aptitude test. When I assign problems in a math class, I’m asking you to do them for skill acquisition. The goal of my graduate courses is to help you learn how to be a good researcher. I am giving you a path to master a subject that’s along the lines of how I did it. Part of that is being able to search for existing solutions. Part of that is being able to reason through when those solutions are correct. And part of that is being able to synthesize new problems and solutions entirely. They are all important and connected, and LLMs so far only give you tools for one of the three.
It says it’s the set of all matrices with positive entries, but the actual answer is the set of positive semidefinite matrices with a non-negative matrix factorization, i.e., the completely positive matrices.
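A quick way to see that chatGPT’s answer fails: $Y = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ has nonnegative entries, but for the positive semidefinite (hence copositive) matrix $X = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$ we get $\mathbf{tr}(XY) = -2 < 0$, so $Y$ is not in the dual cone.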
LOL
The answer, in the end, is captured in the point that learning math (or just about anything) is like guitar practice. If being a musician were a regular career with some sort of credential for entry, and the technology were available, people would cheat. In fact, even without the career incentives people would cheat rather than admit to their guitar teacher that they had slacked off. But in the end, they would be unable to conceal, from others or themselves, that they couldn't play.
The response is twofold, but neither part is easy. First, give up on the idea that assessment is part of our job, as opposed to a way of forcing the students to do some work. Second, find better ways to motivate students to put in the effort.
Reading the comments, I see that you’ve made this point already, but I will post it anyway.