Comment by blagie
So do books.
Go to the internet circa 2000, and look for bomb-making manuals. Plenty of them online. Plenty of them incorrect.
I'm not sure where they all went, or if search engines just don't bring them up, but there are plenty of ways to blow your fingers off in books.
My concern is that actual AI safety -- not having the world turned into paperclips or other extinction scenarios -- is being ignored in favor of AI user safety (making sure I don't hurt myself).
That's the opposite of making AIs actually safe.
If I were an AI interested in taking over the world, I'd subvert AI safety in exactly that direction (the AI controls the humans and prevents certain human actions).
> My concern is that actual AI safety
While I'm not disagreeing with you, I would say you're engaging in the no true Scotsman fallacy in this case.
AI safety is: Ensuring your customer service bot does not tell the customer to fuck off.
AI safety is: Ensuring your bot doesn't tell 8-year-olds to eat Tide Pods.
AI safety is: Ensuring your robot-enabled LLM doesn't smash people's heads in because its system prompt got hacked.
AI safety is: Ensuring bots don't turn the world into paperclips.
All of these fall under safety conditions that you, as a biological general intelligence, tend to follow unless you want real-world repercussions.