Comment by pixl97
>My concern is that actual AI safety
While I'm not disagreeing with you, I would say you're engaging in the no true Scotsman fallacy in this case.
AI safety is: Ensuring your customer service bot does not tell the customer to fuck off.
AI safety is: Ensuring your bot doesn't tell 8-year-olds to eat tide pods.
AI safety is: Ensuring your robot-enabled LLM doesn't smash people's heads in because its system prompt got hacked.
AI safety is: Ensuring bots don't turn the world into paperclips.
All of these fall under safety conditions that you, as a biological general intelligence, tend to follow unless you want real-world repercussions.
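At the mundane end of that spectrum, "safety" is usually just an output guardrail sitting between the model and the user. Here is a minimal sketch in Python; the pattern list and the `guard_reply` helper are hypothetical stand-ins for a real moderation model or API:

```python
import re

# Hypothetical blocklist standing in for a real moderation model or API.
ABUSIVE_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bfuck\b", r"\beat tide pods\b")
]

FALLBACK_REPLY = "Sorry, I can't help with that. Let me connect you with a human agent."

def guard_reply(model_output: str) -> str:
    """Pass the model's reply through only if it clears the output filter."""
    if any(p.search(model_output) for p in ABUSIVE_PATTERNS):
        return FALLBACK_REPLY
    return model_output

print(guard_reply("Happy to help with your refund!"))  # passes through
print(guard_reply("Just fuck off already."))           # replaced by fallback
```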
Reply:

These are clearly AI safety:
* Ensuring your robot-enabled LLM doesn't smash people's heads in because its system prompt got hacked. (One mitigation is sketched at the end of this comment.)
* Ensuring bots don't turn the world into paperclips.
This is borderline:
* Ensuring your bot doesn't tell 8-year-olds to eat tide pods.
I'd put this in a similar category as the knives in my kitchen. If my 8-year-old misuses a knife, that's the fault of the adult, not the knife. So it's a safety concern about the use of the AI, but not about the AI itself being unsafe. Parents should assume 8-year-olds shouldn't be left unsupervised with AIs.
And this has nothing to do with safety:
* Ensuring your customer service bot does not tell the customer to fuck off.
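For the hacked-system-prompt case, a common mitigation is to never let model output drive actuators directly: a fixed validation layer, outside the model, sits between the LLM and the hardware. A minimal sketch, assuming a hypothetical `Command` type and action allowlist:

```python
from dataclasses import dataclass

# Hypothetical allowlist of actions the robot may perform, with hard
# bounds enforced outside the LLM.
ALLOWED_ACTIONS = {
    "move_arm": {"max_speed_mps": 0.25},
    "stop": {},
}

@dataclass
class Command:
    action: str
    speed_mps: float = 0.0

def validate(cmd: Command) -> bool:
    """Reject any LLM-proposed command outside the fixed allowlist and bounds."""
    limits = ALLOWED_ACTIONS.get(cmd.action)
    if limits is None:
        return False
    return cmd.speed_mps <= limits.get("max_speed_mps", 0.0)

# A hijacked prompt might make the model propose the first command;
# the validator rejects it no matter what the prompt said.
print(validate(Command("swing_arm_at_head", speed_mps=5.0)))  # False
print(validate(Command("move_arm", speed_mps=0.1)))           # True
```

The point of this design is that the safety property lives in the validator, not in the prompt, so a hijacked prompt can only request actions the allowlist already permits.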