Comment by vonneumannstan 11 hours ago
>All of these "projections" are generalizing from fictional evidence - to borrow a term that's popular in communities that push these ideas.
This just isn't correct. Daniel and others on the team are experienced, world-class forecasters. Daniel wrote an earlier version of this in 2021 predicting the AI world of 2026, and it was astonishingly accurate. That track record deserves credence.
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
>The arguments back then went something like this: "Machines will be able to simulate brains at higher and higher fidelity.
That's a complete misunderstanding of the underlying ideas. It's in "not even wrong" territory.
>We got some new, genuinely useful tools over the last few years, but this narrative that AGI is just around the corner needs to die. It is science fiction and leads people to make bad decisions based on fictional evidence.
You are likely dangerously wrong. The AI field is near-universal in predicting AGI timelines under 50 years, with many under 10. This is an extremely difficult problem to deal with, and ignoring it because you think it's equivalent to worrying about overpopulation on Mars is incredibly foolish.
https://www.metaculus.com/questions/5121/date-of-artificial-...
https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predicti...
I respect the forecasting abilities of the people involved, but I have seen that report described as "astonishingly accurate" a few times and I'm not sure that's true. The narrative format lends itself to generous interpretation. It is directionally correct in a way that is reasonably impressive from 2021 (e.g. the Diplomacy prediction, the prediction that compute costs could be dramatically reduced, some things gesturing towards reasoning/chain of thought), but many of the concrete predictions don't seem correct to me at all, and in general I'm not sure it captured the spiky nature of LLM competence.
I'm also struck by the extent to which the first series, covering 2021-2026, reads like a linear extrapolation while the second reads like an exponential one, and I don't see an obvious justification for the switch.