Comment by magicalist 2 days ago

The overhyped tweet from the Robinhood guy raising money for his AI startup is nicely put into better perspective by Thomas Bloom (including that #124 is not from the cited paper, "Complete sequences of sets of integer powers" [BEGL96]):

> This is a nice solution, and impressive to be found by AI, although the proof is (in hindsight) very simple, and the surprising thing is that Erdos missed it. But there is definitely precedent for Erdos missing easy solutions!

> Also this is not the problem as posed in that paper

> That paper asks a harder version of this problem. The problem which has been solved was asked by Erdos in a couple of later papers.

> One also needs to be careful about saying things like 'open for 30 years'. This does not mean it has resisted 30 years of efforts to solve it! Many Erdos problems (including this one) have just been forgotten about, and nobody has seriously tried to solve them.[1]

And, indeed, Boris Alexeev (who ran the problem) agrees:

> My summary is that Aristotle solved "a" version of this problem (indeed, with an olympiad-style proof), but not "the" version.

> I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference.[2]

Not to rain on the parade out of spite; it's just that this is neat, but not, like, unusually neat compared to the last few months.

[1] https://twitter.com/thomasfbloom/status/1995083348201586965

[2] https://www.erdosproblems.com/forum/thread/124#post-1899

NooneAtAll3 2 days ago

Reading the original paper and the Lean statement that got proven, it's kinda fascinating what exactly is considered interesting and hard in this problem.

Roughly, what the Lean theorem (and the statement on the website) asks is this: take some numbers t_i, for each of them form all the powers t_i^j, then combine them all into a multiset T. Barring some necessary conditions, prove that you can pick a subset of T summing to any number you want.
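
In symbols, roughly (my own paraphrase; the exact side conditions live in the Lean statement and I won't try to reproduce them here):

    % multiset of all powers of the t_i
    T = \{\, t_i^j : 1 \le i \le k,\ j \ge 1 \,\}
    % proved ("easy") statement: T is complete, i.e. every n is a subset sum
    \forall n \in \mathbb{N}\ \ \exists \text{ finite } S \subseteq T : n = \sum_{s \in S} s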

What the Erdos problem in the paper asks is to add one more step: arbitrarily cut off the beginnings of the t_i^j power sequences before merging. Erdos and co. conjectured that only finitely many subset sums stop being possible.
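
Again a rough paraphrase, with the cutoffs written as c_i (my notation, not the paper's):

    % BEGL96 version: first cut off arbitrary initial segments
    T' = \{\, t_i^j : 1 \le i \le k,\ j \ge c_i \,\} \quad \text{for arbitrary } c_1, \dots, c_k
    % conjecture: only finitely many n fail to be a finite subset sum of T'
    \#\{\, n \in \mathbb{N} : n \text{ is not a finite subset sum of } T' \,\} < \infty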

"subsets sum to any number" is an easy condition to check (that's why "olympiad level" gets mentioned in the discussion) - and it's the "arbitrarily cut off" that's the part that og question is all about, while "only finite amount disappear" is hard to grasp formulaically

So... overhyped: yes. Not actually the Erdos problem that got proven: yes. Usual math-olympiad-level problems being solvable by the current level of AI, as this year's IMO showed: also yes (just don't get caught by https://en.wikipedia.org/wiki/AI_effect on the backlash, since olympiads are haaard! really!)

  • emp17344 2 days ago

    I’d be less skeptical about this year’s IMO claims if we had any information at all on how it was done.

bgwalter 2 days ago

Also in the thread comments:

"I also wonder whether this 'easy' version of the problem has actually appeared in some mathematical competition before now, which would of course pollute the training data if Aristotle [Ed.: the clanker's name] had seen this solution already written up somewhere."

NooneAtAll3 2 days ago

I was so happy for this result until I saw your mention of the Robinhood hype :/

smcl 2 days ago

See, this is one of the reasons I struggle to get on board the AI hype train. Any time I've seen some breathless claim about its capabilities that feels a bit too good to be true, someone with knowledge in the domain takes a closer look and it turns out to have been exaggerated, meant to draw eyeballs and investors to some fledgling AI company.

I just feel like if we were genuinely on the cusp of an AI revolution, as is claimed, we wouldn't need to keep seeing this sort of thing. A lot of the industry feels full of flim-flam men trying to scam people, and if the tech were as capable as we keep getting told it is, there'd be no need for dishonesty or sleight of hand.

  • encyclopedism 2 days ago

    I have commented elsewhere, but this bears repeating.

    If you had enough paper and ink, and the patience to go through it, you could take all the training data and manually step through training the same model. Then, once you had trained the model, you could use even more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve. But let's not kid ourselves and start throwing around terms like AGI or emergence just yet. It makes a mechanical process seem magical (as do computers in general).

    I should add, it also makes sense why it would: just look at the volume of human knowledge in the training data. It's the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language, and intellect, that does the heavy lifting.

    • jeeeb 7 hours ago

      Couldn’t you say the exact same about the human mind though?

      • encyclopedism 3 hours ago

        No, you couldn't, because the human mind definitely does NOT work like an LLM, though how it does work is an open academic problem. As an example, see the Hard problem of consciousness. There are things about the brain/mind that we have a difficult time even defining, let alone understanding.

        To give a quick example vis-à-vis LLMs: I can reason and understand well enough without having to be 'trained' on nearly the entire corpus of human literature. LLMs, of course, do not reason or understand, and their output is determined by human input. That alone indicates our minds work differently from LLMs.

        I wonder how ChatGPT would fare if it were trained on birdsong and then asked for a rhyming couplet?

  • MangoToupe 2 days ago

    The thing is, we genuinely are going through an AI revolution. I don't even think that's that breathless of a claim. The contention is over whether it's about to revolutionize our economy, which is a far harder claim to substantiate and should be largely self-substantiating if it is going to happen.

  • andrepd 2 days ago

    If OpenAI saw an imminent path to AGI in 6 months (or in 5 years, for that matter), they would not be pivoting to become a banal big-tech ad company.

    Short AI and tech, and just hope you get the timing right.

  • jimkleiber 2 days ago

    The crypto train kinda ran out of steam, so all aboard the AI train.

    That being said, I think AI has a lot more immediately useful cases than cryptocurrency. But it does feel a bit overhyped by people who stand to gain a tremendous amount of money.

    I might get slammed/downvoted on HN for this, but I really wonder how much of VC is filled with get-rich-quick cheerleading vs. supporting products that will create strong and lasting growth.

    • _heimdall 2 days ago

      I don't think you really need to wonder about how much is cheerleading. Effectively all VC public statements will be cheerleading for companies they have already invested in.

      The more interesting one is the closed door conversations. Earlier this year, for example, it seemed there was a pattern of VCs heavily invested in AI asking the other software companies they invested in to figure out how to make AI useful for them and report back. I.e. "we invested heavily in hype, tell us how to make it real."

    • soulofmischief 2 days ago

      From my perspective, having worked in both industries and simply followed my passions and opportunities, all I see is that the same two bandwagons that latched onto crypto, either to grift or just to egotistically talk shit, have moved over to the latest technological breakthrough. Meanwhile, those of us silently working on interesting things are constantly rolling our eyes at comments from both sides of the peanut gallery.

chihuahua 2 days ago

So in short, it was an easy problem that had already been solved thousands of years ago and the proof was so simple that it doesn't really count, and the AI used too many em-dashes in its response and it totally sucks.

  • sebastianz 2 days ago

    > problem that had already been solved thousands of years ago

    If by this you refer to "Aristotle" in the parent post - it's not that Aristotle. This is "Aristotle AI" - the name of their product.