Comment by Aurornis a day ago

Agree. The base claims about LLMs getting bigger, more popular, and capturing people's imagination are right. Those claims are as easy as it gets, though.

Look into the specific claims, though, and it's not as amazing. Take the claim that models will require an entire year to train, when in reality training takes on the order of weeks.

The societal claims also fall apart quickly:

> Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.

This is a common trend among rationalist and "X-risk" writers: write a big article with mostly safe claims (LLMs will get bigger and perform better!) and a lot of hedging, and people will remember the article as primarily correct. Strip out the easy claims and look at the specifics, and it's far less impressive.

This article also shows some major signs that the author is deeply embedded in specific online bubbles, like this:

> Most of America gets their news from Twitter, Reddit, etc.

Sites like Reddit and Twitter feel like the entire universe when you're embedded in them, but step back and look at the numbers and only a fraction of the US population are active users.