20 Comments
Damek Davis:

LOL

John Quiggin:

The answer, in the end, is captured in the point that learning math (or just about anything) is like guitar practice. If being a musician were a regular career with some sort of credential for entry, and the technology were available, people would cheat. In fact, even without the career incentives people would cheat rather than admit to their guitar teacher that they had slacked off. But in the end, they would be unable to conceal, from others or themselves, that they couldn't play.

The response is twofold, but neither part is easy. First, give up on the idea that assessment is part of our job, as opposed to a way of forcing the students to do some work. Second, find better ways to motivate students to put in the effort.

Reading the comments, I see that you've made this point already, but I will post it anyway.

Ben Recht:

Yes, we all have to say it multiple times until we change our course. Academic inertia can be overwhelming.

Terry underwood:

Say what exactly? Stop assessing learning?

Anna Gilbert:

Shout out for Math 207-8-9 and Paul Sally :)

Ben Recht:

IYKYK! In terms of effect on my life trajectory, it was the most impactful class I ever took.

galen:

The library comparison reminds me of Alison Gopnik's idea that LLMs can be framed as a cultural technology, like libraries or the Internet, that aids in knowledge transmission. Unfortunately the friction is reduced (by design) to the point where LLMs are not equivalently helpful for learning.

Ben Recht:

My quibble is that saying they aren't helpful is a step too far. I can see how they could be helpful for math, insofar as you have to be able to tell when the answer is wrong. But how helpful they are remains to be seen.

Josh Brake:

Great post, thanks Ben! Really like this way of thinking and the clear distinction of the types of thinking you’re trying to get your students to do.

Zeyu Yun:

Nice post. But cheating at scale is not the worst of it. What would be crazy is if the TAs also got lazy and graded the homework with ChatGPT. Imagine that lol

Bill Young:

"I" appears seven times in the first paragraph. It's a communication problem that Ben Recht shares with many STEM academics.

Terry underwood:

Assessment IS a critical part of a teacher’s job. But I don’t think you mean to say assessment? You mean evaluation? Your guitar analogy is apt for the situation, but I think you are seeing this analogy from the perspective of a teacher, not a learner. That’s my assessment. I learned guitar without career aspirations just as I learned math, particularly the statistics courses I took in my doc program. Why would I feel it to be an “admission” to tell the teacher I “slacked off”? A confession? Am I a laborer under supervision? Are guitar teachers high priests? Learners of the guitar practice for themselves, not their teacher, unless they are being forced or are deluded. Teachers aren’t the beneficiaries of practice, yet they often act as if they are. Can you put on your learner glasses and look again at learning the guitar? As you say, seeing patterns for oneself locks them inside our expertise. There is such a gap between deluded cheating and aware cheating. Cheaters don’t really want to play badly enough. There’s something they are missing. That’s the teacher’s job. Assess. Find it and help dissolve the delusion.

Adam Ginensky:

First, respect regarding Paul Sally. A remarkable human being as well as a remarkable teacher. With regard to LLMs and doing mathematics, there is also a nice article by Norman Matloff on AI and statistics (https://magazine.amstat.org/blog/2023/04/01/chatgpt-finds-statistics-difficult/). His main point, which I think is your point too, is that ChatGPT has issues with complicated questions. To me this is part of the 'garbage in, garbage out' principle. To wit, take a topic that is subtle or hard, and there is very little good written about it. That will clearly make the LLM's job harder. I leave to the more philosophically minded whether this means that LLMs are limited to being copycats or whether they can eventually 'learn'.

FourierBot:

optimization is fun :)

FourierBot:

solvable ones😶

brutalist:

while I could submit my solutions to the problem sets in my Intro to Logic class electronically, my inductive proofs in Intermediate Logic were written in a paper notebook inside a supervised classroom and graded by hand. over a decade later, it’s still the only workaround for ChatGPT I can think of.

Ben Recht:

Education needs to figure out how to separate evaluation and learning. It's been a problem for a century, and hopefully LLMs will force us to reevaluate our conception of what school is for.

brutalist:

writing those proofs out by hand served both purposes! (for me, anyway)

Lalitha Sankar:

I am afraid we won't really have any meaningful answers. How is it possible, if we haven't for a century? I am unsure if we can, especially when identifying whether something is artificially generated is seemingly impossible at present. I think the only tactic that may work is letting our students know that they may earn a certificate in writing the best prompts to an LLM but may be in real trouble if asked to do something by themselves. I've gone to the point of sharing correct and incorrect code and proof solutions from LLMs, asking students to check whether they work. Here's to another semester of teaching Stat ML in the age of LLM prompts.
