Comment by samth
I think it's not holding up that well outside of predictions about AI research itself. In particular, he makes a lot of predictions about AI impact on persuasion, propaganda, the information environment, etc that have not happened.
> I think it's not holding up that well outside of predictions about AI research itself. In particular, he makes a lot of predictions about AI impact on persuasion, propaganda, the information environment, etc that have not happened.
Could you give some specific examples of things you feel definitely did not come to pass? Because I see a lot of people here talking about how the article missed the mark on propaganda; meanwhile I can tab over to twitter and see a substantial portion of the comment section of every high-engagement tweet being accused of being Russia-run LLM propaganda bots.
This doesn’t seem like a great way to reason about the predictions.
For something like this, saying “There is no evidence showing it” is a good enough refutation.
Countering with "Well, there could be a lot of this going on, but it's all in secret" - that could be a justification for any kooky theory out there. Bigfoot, UFOs, ghosts. Maybe AI has already replaced all of us and we're Cylons. It's something we couldn't know either way.
The predictions are specific enough that they are falsifiable, so they should stand or fall based on the clear material evidence supporting or contradicting them.
Agree. The base claims about LLMs getting bigger, more popular, and capturing people's imagination are right. Those claims are as easy as it gets, though.
Look into the specific claims and it's not as impressive. Take the claim that models will require an entire year to train, when in reality training runs take on the order of weeks.
The societal claims also fall apart quickly:
> Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.
This is a common trend in rationalist and "X-risk" writers: Write a big article with mostly safe claims (LLMs will get bigger and perform better!) and a lot of hedging, then people will always see the article as primarily correct. When you extract out the easy claims and look at the specifics, it's not as impressive.
This article also shows some major signs that the author is deeply embedded in specific online bubbles, like this:
> Most of America gets their news from Twitter, Reddit, etc.
Sites like Reddit and Twitter feel like the entire universe when you're embedded in them, but when you step back and look at the numbers, only a fraction of the US population are active users.