Comment by reducesuffering a day ago
Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?
This forum has been so behind for too long.
Sama has been saying this for a decade now: “Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” (2015) https://blog.samaltman.com/machine-intelligence-part-1
Hinton, Ilya, Dario Amodei, the inventor of RLHF, the DeepMind founders. They all get it, which is why they're the smart cookies in those positions.
First stage is denial, I get it, not easy to swallow the gravity of what’s coming.
People have been predicting the singularity to occur somewhere around 2030 to 2045 since waaaay before 2015. And not just enthusiasts; I dimly remember an interview with Richard Dawkins from back in the day...
Though that doesn't mean the current generation of language models will ever achieve AGI, and I sincerely doubt they will. They'll likely be a component of an AGI, but probably not the thing that "drives" it.