This new paper is also relevant: https://arxiv.org/abs/2510.19804. I imagine the authors are having a little fun with the word "resolve" in the abstract, meaning either resolve or re-solve.
Incredible.
I do want to point out that Mark Sellke did not start the tweeting chain. He quoted the initiating tweet, which was not misleading, and in that context his tweet wasn't misleading either. However, Twitter truncates quotes to one level, so when his tweet was quoted by follow-up tweets, the original context was lost.
I agree. I don't think Sellke or Bloom were bad actors, and they did their best to clarify what was happening. But in this boom where there's so much money at stake, you can't even work on cute math problems in peace anymore.
It's the singularity. Of hype and hysteria, amplified through social network algorithms.
I dread a transition from artificial superhuman intelligence to artificial hyper-human intelligence, then possibly artificial ultra-human intelligence.
We must draw the line somewhere! I'll sign any petition calling for a ban on artificial supercalifragilisticexpialidocious intelligence. I hope the pope, alt-right leadership, progressive bureaucrats, four-star generals, British royalty, social media influencers, Nobel laureates, and the Dalai Lama will join in. Amazingly, this is not the start of a joke involving going to a bar, but the reality of our times.
It's a bit brutal on my friends and coauthors, but I see the point. Lost lore and willful forgetting sound so medieval. That, too, is in sync with our times.
You ended this blog post so well!
> "If you want to believe that AGI is here, it helps to become willfully forgetful of what you already know."
This isn't the message I expected to take away from the subtitle, "When insight comes from willful forgetting," which is the ironic part.
Probably it's all Dr. Soong's fault.
While I was studying martingales from Ross's Stochastic Processes textbook about two months ago, I could not see where conditional independence (or something like it) was being used in the proof of a lemma. In such dire situations, I tend to ask a knowledgeable friend and, if all goes right, get a thoughtful response. Unfortunately, I could not find a knowledgeable friend on this topic, so I took a screenshot and posted it to ChatGPT (the free version). It quickly and kindly explained what I was missing, and the explanation was entirely correct. So I cannot say it is pushing the frontiers of human knowledge, but that conversational agent helped me push my personal barriers and learn something new. This is what research advisors and people in similar roles do to help grad students.

I believe education will be among the first fronts to experience this positive wave. Personalized training and access to a chatty subject expert have already changed chess a lot: today we have many grandmasters around the world aged 10 to 15. Even Turkey (my country) has two grandmasters, one in secondary school and one in high school, who compete at Kasparov's level and are the top two players in the country. Hopefully math education will be similarly transformed by personalized AI training with a subject expert, and we can detect and educate the new Ramanujans wherever they are. ChatGPT will Make Mathematicians Great Again, as if it were the 18th century!
Thanks for writing this; it clarifies a lot. Is this kind of 'lore laundering' a bigger risk to AI credibility than we think?
There has been quite a lot of hype that is later corrected by academia, sometimes with experiments showing why the hype is not valid.
If ChatGPT can basically operate as a search engine, then it should be marketed as such. LLMs can be useful for finding source material in specific domains. What is needed are non-hallucinated references accompanying the response, so that one can check it is correct. But since Google already does search, it seems to me that Google should be the player to deliver quality search with citations on a problem. One shouldn't even need to add the requirement in a prompt.
The LLM interface could be set to a search mode that does this properly, with an internal check that eliminates any response fragments attached to a bogus citation.
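Concretely, a minimal sketch of such a check might look like the following Python. The names here (the Fragment structure, the crude word-overlap test) are purely hypothetical illustrations, not any real provider's API; a production system would use a proper entailment or retrieval-scoring model instead of word overlap.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    text: str       # a claim produced by the model
    citation: str   # the source URL the model attached to it

def verify_fragments(fragments: list[Fragment],
                     retrieved: dict[str, str]) -> list[Fragment]:
    """Keep only fragments whose citation points to a document the system
    actually retrieved, and whose claim overlaps with that document."""
    verified = []
    for frag in fragments:
        source_text = retrieved.get(frag.citation)
        if source_text is None:
            continue  # bogus citation: the model cited a page never retrieved
        # Crude grounding check: require some word overlap between the claim
        # and the cited document (stand-in for a real entailment check).
        claim_words = set(frag.text.lower().split())
        if claim_words & set(source_text.lower().split()):
            verified.append(frag)
    return verified

if __name__ == "__main__":
    retrieved = {"https://example.org/paper": "the lemma uses conditional independence"}
    answer = [
        Fragment("The lemma relies on conditional independence.", "https://example.org/paper"),
        Fragment("This result won a Fields Medal.", "https://made-up.example/cite"),  # dropped
    ]
    for frag in verify_fragments(answer, retrieved):
        print(frag.text, "->", frag.citation)
```

The point of the sketch is just that the filter runs between the model's draft answer and the user: anything the system cannot tie back to a retrieved source never gets shown.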
Now I am not saying I would pay money for that, and certainly not at $x/mth. [I recall with horror the cost of Dialog library searches in the mid-1980s.] If a solid search with citations was provided, then I would pay the sort of $25-50/yr I donate to Wikipedia, as I use it so much.
While the posted example is funny (and a number of them have emerged over the last few years), it has reached the point where I want to see proof of the claim, not an excited anecdote. Would it hurt these highly capitalized tech companies to slow down and pay a respected academic to test the claim?
The Bubeck bit seems bad faith. One could know about self-contracted curves, know about their connection to a space, and still not make the connection to a new problem in that same space. One could also introduce them to a more general audience as "there is something called self-contracted curves." One can contest hype, make fun of something like Sparks (as one should), and call out the various actors involved, while still making sure one is acting in good faith. Unfortunately, you almost never manage to do that. You are much more similar to those guys than you realize.
Did you sign up for a Substack account just to tell me this? Fascinating.
Looks like it was created automatically when I clicked a button while posting the comment.