Comment by aurareturn 2 days ago
It seems like progress is accelerating, not slowing down.
ARC AGI 2: https://x.com/poetiq_ai/status/2003546910427361402
METR: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
The systems around the LLM will get built out. But do you think it will take 50 years to build out like you said before?
I’m thinking 5 years at most.
The key is that the LLMs get smart enough.
The more I think about it, the less likely I think it is that "all code written via LLM" will happen at all.
I use LLMs to generate systems that interpret code I use to express my wishes, but I don't think it would be desirable to express those wishes in natural language all of the time.
That's why people don't think software engineering as a profession will disappear. It'll just change.
Better benchmarks are undeniably progress, but the bottleneck isn't the models anymore; it's the context engineering necessary to harness them. The more time and effort we put into our benchmarking systems, the better we're able to differentiate between models. But then, when you take an allegedly smart one and try to do something real with it, it behaves like a dumb one again, because you haven't put as much work into the harness for the actual task as you did into the benchmark suite.
The knowledge necessary to do real work with these things is still mostly locked up in the humans that have traditionally done that work.