Comment by zozbot234
Sure, but unless all solvable problems can be said to "appear as low-hanging fruit in hindsight", this doesn't detract from Tao's assessment in any way. Solving a genuinely complex problem is a different matter from spotting a simpler solution that others had missed.
In this case, the solution might have been missed before simply because the difference between the "easy" and "hard" formulations of the problem wasn't quite clear, perhaps even to Erdős, until the problem was manually restated as a Lean goal to be solved. So this is a success story for formalization as much as for AI.
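To make "restated as a Lean goal" concrete, here is a minimal, purely illustrative sketch (a toy theorem, not the actual Erdős problem, whose formal statement isn't quoted in this thread): an informal claim gets pinned down as a precise goal that any prover, human or machine, must close.

```lean
-- Toy example only (hypothetical; unrelated to the Erdős problem discussed
-- above). The informal claim "a natural number never exceeds its successor"
-- becomes a machine-checkable goal with exactly one unambiguous reading.
theorem toy_goal (n : Nat) : n ≤ n + 1 :=
  Nat.le_succ n
```

The point of the exercise is that formalization forces every quantifier and definition to be made explicit, which is precisely where an "easy" and a "hard" reading of the same informal statement can come apart.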
One of the math academics on that thread says the following:
> My point is that basic ideas reappear at many places; humans often fail to realize that they apply in a different setting, while a machine doesn't have this problem! I remember seeing this problem before and thinking about it briefly. I admit that I haven't noticed this connection, which is only now quite obvious to me!
Doesn't this sound extremely familiar to all of us who were taught difficult or abstract topics? Looking at the problem, you don't have the slightest idea what it is about, but then someone comes along, explains the inner workings, and it suddenly "clicks" for you.
So, yeah, this is exactly what I think is happening here. The solution was there, and it was simple, but nobody had discovered it until now. And now that we have an explanation for it, we say "oh, it was really simple".
The bit that makes this very interesting is that it hadn't been discovered before, and now it has been discovered by an AI model.
Tao challenges this by hypothesizing that the problem actually was solved before but the solution was never "released" officially, which would explain why the model was able to solve it. However, there's no evidence (yet) for his hypothesis.