Comment by AlphaAndOmega0 a day ago

Daniel Kokotajlo released the (excellent) 2021 forecast. He was then hired by OpenAI and was not at liberty to speak freely until he quit in 2024. He's part of the team making this forecast.

The others include:

Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.

Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion.

Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.

Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.

And finally, Scott Alexander himself.

kridsdale3 a day ago

TBH, this kind of reads like the pedigrees of the former members of the OpenAI board. When the thing blew up, and people started to apply real scrutiny, it turned out that about half of them had no real experience in pretty much anything at all, except founding Foundations and instituting Institutes.

A lot of people (like the Effective Altruism cult) seem to have made a career out of selling their Sci-Fi content as policy advice.

  • MrScruff a day ago

    I kind of agree - since the Bostrom book there is a cottage industry of people with non-technical backgrounds writing papers about singularity thought experiments, and it does seem to be on a spectrum with hard sci-fi writing. A lot of these people are clearly intelligent, and it's not even that I think everything they say is wrong (I made similar assumptions long ago, before I'd even heard of Ray Kurzweil and the Singularity, although at the time I would have guessed 2050). It's just that they seem to believe their thought process and Bayesian logic is more rigorous than it actually is.

  • flappyeagle a day ago

    c'mon man, you don't believe that, let's have a little less disingenuousness on the internet

    • arduanika a day ago

      How would you know what he believes?

      There's hype and there's people calling bullshit. If you work from the assumption that the hype people are genuine, but the people calling bullshit can't be for real, that's how you get a bubble.

      • flappyeagle 10 hours ago

        Because they are not the same in any way. It’s not a bunch of junior academics; the team literally includes someone who worked at OpenAI.

Bjorkbat 6 hours ago

Minor pet-peeve of mine, I really don't like the term "superforecaster". First time I encountered it was in association with some guy who was making predictions a year or two out.

Which, to be fair, actually is kind of impressive if someone can make accurate predictions that far ahead, but only because people are really bad at predicting the future.

Implicitly, when I hear "superforecaster" I think of someone who's really good at predicting the future, but deeper inspection often reveals that "the future" is constrained to the next 2 years. Beyond that they tend to be as bad as any other "futurist".

pixodaros 16 hours ago

Scott Alexander, for what it's worth, is a psychiatrist, race science enthusiast, and blogger whose closest connection to software development is Bay Area house parties and a failed startup called MetaMed (2012-2015) https://rationalwiki.org/wiki/MetaMed

nice_byte a day ago

this sounds like a bunch of people who make a living _talking_ about the technology, which lends them close to 0 credibility.

superconduct123 a day ago

I mean, I'd want to hear from either researchers creating new models or people building products on top of the current models.

Not all these soft roles.