Comment by falcor84 5 hours ago
> "The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims," it says, "revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity."
The word "inherently" there seems like a big stretch to me. I see how it could be harmful to them, but I also see an argument that such AI-generated material acts as a substitute for actual CSAM. Has this actually been studied, or is it a taboo topic for policy research?
There's a legally challengeable assertion there: "trained on CSAM images".
I imagine an AI image-generation model could readily be trained on images of adult soldiers at war and images of children from Instagram, and then be used to generate imagery of children at war.
I have zero interest in defending the exploitation of children, but the assertion that children had to have been exploited in order to create images of children engaged in adult activities seems shaky. *
* FWIW I'm sure there are AI models out there that were trained on actual real-world CSAM .. it's the implied necessity that's being questioned here.