Comment by edanm
That's important context.
But in the article, Gary Marcus does what he normally does: he makes far broader statements than the narrow claim that "LLM architecture by itself won't scale to AGI", or even that "we will reach, or are already reaching, diminishing returns with LLMs". I don't think that's as controversial a take as he might imagine.
However, he starts from a purely technical guess, which may or may not be true, and then makes fairly sweeping claims about business and economics, which might not hold even if he's 100% right about the scaling of LLMs.
He's also seemingly extremely dismissive of the current value of LLMs. E.g. this comment, which he made previously and says he still stands by:
> If enthusiasm for GenAI dwindles and market valuations plummet, AI won’t disappear, and LLMs won’t disappear; they will still have their place as tools for statistical approximation.
Is there anyone who thinks "oh gee, LLMs have a place for statistical approximation"? That's a wildly irrelevant way to describe LLMs, and given the enormous value that existing LLM systems have already created, talking about "LLMs won't disappear, they'll still have a place" just sounds insane.
It shouldn't be hard to keep two separate thoughts in mind:
1. LLMs as they currently exist, without additional architectural changes/breakthroughs, will not, on their own, scale to AGI.
2. LLMs are already a massively useful technology that we are just starting to learn how to use and to derive business value from, and even without scaling to AGI, will become more and more prevalent.
I think those are two statements that most people should be able to agree with, probably even including most of the people Marcus is supposedly "arguing against", and yet from reading his posts it sounds like he completely dismisses point 2.
> 2. LLMs are already a massively useful technology that we are just starting to learn how to use and to derive business value from, and even without scaling to AGI, will become more and more prevalent.
No offence, but while every use of AI I have tried has been amazing, I haven't been comfortable deploying it for business use. In the one or two places where it is "good enough", it is effectively just reducing the workforce, and that reduction isn't translating into lower costs or general uplift; it is currently translating into job losses and increased profit margins.
I'm AI-sceptical: I feel it is a tradeoff where output quality is reduced, but the cost is (currently) lower too, so businesses are willing to jump in.
At what point do OpenAI/Claude/Gemini etc. stop hyperscaling and start running a profit, which will translate into higher prices? At that point the current cost reduction is gone, and we will be left holding the bag: higher unemployment and an inferior product that costs the same amount of money.
There are large unanswered questions about AI, and they make me entirely anti-AI. Sure, the technology is amazing as it stands, but it is fundamentally a lossy abstraction over reality. Many people will happily accept the lossy abstraction without looking ahead to what happens when it is the only option available and no cheaper than the less lossy option (humans).