Comment by kube-system a day ago
Of course. They're mitigations, not preventions. Few defenses are truly preventive. The point is to make circumvention difficult; the authors know bad actors will try.
This isn't lost on the authors. It is explicitly recognized in the document:
> The risk is even greater with AI model weights, which, once exfiltrated by malicious actors, can be copied and sent anywhere in the world instantaneously.
> The point is to make it difficult.
Does it, though?