Comment by empath75 a day ago
When someone figures this out, it's going to be a multi billion dollar company, but the safety concerns for actually putting something like this into the hands of children are unbelievable.
I'm trying to use my imagination, but what exactly is the fear? Perhaps the AI will explain where babies come from in graphic detail before the parent is ready to have that conversation, or something similar? Or, for those of us in the US, maybe it tells your kid to wear a bulletproof vest to pre-K instead of bringing a stuffy for naptime?
Essentially, telling kids the truth before they're ready and without typical parental censorship? Or is there some other fear, like the AI getting compromised by a pedophile who talks your kid into who knows what? Or similarly, some "fill in the blank" state actor using mind control on your kid (which, honestly, I feel is normalized even for adults; e.g. Fox News, etc., again US-centric)?
I'll respond to the content, because I think there are some genuine questions amongst the condescension and jumping to conclusions.
> telling kids the truth before they're ready and without typical parental censorship
Does AI today reliably respond with "the truth"? There are countless documented incidents of even full-grown, extremely well-educated adults (e.g. lawyers) believing well-phrased hallucinations. Kids, and particularly small kids who haven't yet had much education in critical thinking and what to believe, have no chance. Conversational AI today isn't an uncensored search engine over a set of well-reasoned facts. It's an algorithm constructing a response based on what it has learned people on the internet want to hear, with no real concept of right or wrong, and no foundational knowledge of the world to contrast with and validate against.
> what exactly is the fear
Being fed reliable-sounding misinformation is one. Another is being used for emotional support (which kids do even with non-talking stuffed animals), when the AI has no real concept of how to emotionally support a kid and could just as easily do the opposite. I guess overall, the concern is having a kid spend a large amount of time talking to "someone" who sounds very convincing, has no real sense of morality or truth, and can potentially distort their world view in negative ways.
And yea, there's also exposing kids to subjects they're in no way equipped to handle yet, or encouraging them to do something that would result in harm to themselves or to others. Kids are very suggestible, and it takes a long while for them to develop a real understanding of the consequences of their actions.
Bravo, this is an answer that actually makes sense, beyond the outright fearmongering, and one I wasn't considering. I still struggle with how it's much different from social media in terms of shaping what kids believe and their perception of reality, but I do get what you're saying: this could be next-level dangerous in terms of them believing what it says without much critical thinking.
How about encouraging self-harm, even murder and suicide?
https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-...
https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...
https://www.euronews.com/next/2023/03/31/man-ends-his-life-a...
Can this not occur on YouTube/Roblox and the other places kids with tablets go? To make a mass generalization from what I observe: I don't see why/how parents do the mental gymnastics that tablets are acceptable but AI is to be feared. There will always be articles like this; it's a big world, and everything has a dark side if you search for it. That's life. [Actually, I think a lot of parents are willing to accept/ignore the risks because tablets offer too great a service. This type of AI simply won't entertain/babysit a kid long enough for parents to give in to it.]
I have a 6 year old, FWIW; I'm not some childless ignoramus, I just do my risk calcs differently and view it as my job to oversee their use of a device like this. I wouldn't fear it outright because of what could happen. If I took that stance, my kid would never have any experiences at all.
Can't play baseball, I read a story where a kid got hit by a bat. Can't travel to Mexico, cartels are in the news again. Home school it is, because shootings. And so on.
> Perhaps the AI will explain where baby's come from in graphic detail before the parent is ready to have that conversation or something similar?
I mean, that's not a silly fear. But perhaps you don't have any children? "Typical parental censorship" doesn't mean prudish pearl-clutching.
I have an autistic child who already struggles to be appropriate with things like personal space and boundaries -- giving him an early "birds and bees" talk could, at minimum, result in him doing and saying things that cause severe trauma to his peers. And while he exercises less self-control than a typical kid, even "completely normal" kids shouldn't be robbed of their innocence and forced to confront every adult subject before they're mature enough to handle it. There's a reason content ratings exist.
Explaining difficult subjects to children, such as the Holocaust or sexual assault, is very hard to do in a way that doesn't leave them scarred or fearful, or, worse, warp their own moral development so that they identify with the bad actors.
I have a 6 year old. I don't let him use the internet or tablets or phones, so I get it; my question was out of curiosity about other people's thought processes. I just lack the imagination to know what people are actually afraid of, as I often find they have what I consider far-fetched boogeyman imaginations. Yet they allow their infants to play on an iPad for hours, etc., which I find no more or less risky, especially as kids get older and can seek out the content they prefer. My ban for my kid is based more on my parenting opinion that boredom is a life skill and beneficial to young minds (probably all ages, actually), and that constant entertainment/screentime is unhealthy. I don't ban the devices because I'm afraid of the content he may encounter; I just want him to enjoy his childhood before it's inevitably stolen by screens.
I think my theory is kind of correct: people generally 'trust' a YouTube censor, but an AI censor is currently seen as untrusted boogeyman territory.
Reminds me of Conan O'Brien's old WikiBear skits
This. The idea is super cool in theory! But given how these sorts of things work today, a toy that can hold an independent conversation with a kid, and that, despite the best intentions of the prompt writer, isn't guaranteed to stay within its "sandbox", is terrifying enough to probably not be worth the risk.
IMO this is only exacerbated by the fact that little children (who are presumably the target audience for stuffed animals that talk) often don't follow "normal" patterns of conversation or topics, so it seems hard to accurately simulate and test the ways unexpected and undesirable responses could come out.