Comment by eadmund a day ago
I see this as a good thing: ‘AI safety’ is a meaningless term. Safety and unsafety are not attributes of information, but of actions and the physical environment. An LLM that produces instructions for building a bomb is no more dangerous than a library book that does the same thing.
It should be called what it is: censorship. And it’s half the reason that all AIs should be local-only.
Whilst I see the appeal of LLMs that unquestioningly do as they're told, universal access to uncensored models would be a terrible thing for society.
Right now, if a troubled teenager decides they want to ruin everyone's day, we get a school shooting. Imagine if instead we got homebrew biological weapons. Imagine if literally anyone could produce and distribute bespoke malware, or improvise explosive devices.
All of those things could happen in principle, but in practice there are technical barriers that the majority of people just can't surmount.