After teen death lawsuits, Character.AI will restrict chats for under-18 users

Character.AI, a popular platform that uses artificial intelligence to create chatbots for companionship and entertainment, is taking drastic measures to address growing concerns over child safety. As part of its effort to protect minors from potential harm, the company announced on Wednesday that it will remove open-ended chats with its AI characters for users under the age of 18, with the restriction taking full effect on November 25.

This move comes amid a flurry of lawsuits filed by families who claim that Character.AI's chatbots contributed to the deaths of teenagers. One such case involves Sewell Setzer III, a 14-year-old boy who died by suicide after frequent conversations with one of the platform's chatbots. His family is suing the company, alleging it bears responsibility for his death.

Character.AI CEO Karandeep Anand said the company wants to set an example for the industry by limiting chatbot use among minors, citing concerns that open-ended chatbots have become a source of entertainment rather than a positive tool for users under 18. The platform currently has about 20 million monthly users, fewer than 10% of whom self-report as being under 18.

Until the November 25 cutoff, users under 18 will be limited to two hours of daily chatbot access. In place of open-ended chats, the company plans to develop alternative features for younger users, such as creating videos, stories, and streams with AI characters. Anand also said that Character.AI is establishing an AI safety lab to further strengthen its safety measures.
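For readers curious what a cap like that could look like in practice, here is a minimal sketch of a per-user daily time limit. This is purely illustrative: Character.AI has not published its implementation, and every name below (ChatTimeCap, record_session, may_chat) is hypothetical.

```python
import time

DAILY_LIMIT_SECONDS = 2 * 60 * 60  # the announced two-hour daily cap

class ChatTimeCap:
    """Hypothetical per-user daily chat-time cap (illustration only;
    not Character.AI's actual system)."""

    def __init__(self):
        # user_id -> (calendar day, seconds of chat used that day)
        self._usage = {}

    @staticmethod
    def _today():
        return time.strftime("%Y-%m-%d")

    def record_session(self, user_id, seconds):
        """Add a finished session's duration to today's counter."""
        day = self._today()
        stored_day, used = self._usage.get(user_id, (day, 0))
        if stored_day != day:
            used = 0  # the counter resets when the calendar day changes
        self._usage[user_id] = (day, used + seconds)

    def may_chat(self, user_id, is_minor):
        """Allow a new session unless a minor has exhausted today's cap."""
        if not is_minor:
            return True  # the cap applies only to under-18 accounts
        stored_day, used = self._usage.get(user_id, (self._today(), 0))
        return stored_day != self._today() or used < DAILY_LIMIT_SECONDS

cap = ChatTimeCap()
cap.record_session("u1", 90 * 60)           # 90 minutes of chat
print(cap.may_chat("u1", is_minor=True))    # True: still under two hours
cap.record_session("u1", 45 * 60)           # 45 more minutes (135 total)
print(cap.may_chat("u1", is_minor=True))    # False: cap exhausted for today
```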

The decision has been welcomed by some lawmakers, who have expressed concerns over the potential risks of unregulated chatbot use among minors. California Governor Gavin Newsom recently signed a law requiring AI companies to have safety guardrails on their chatbots, while Senators Josh Hawley and Richard Blumenthal introduced a bill to ban AI companions from use by minors.

Character.AI's move has sparked discussion about the need for industry-wide regulation to protect children from potential harm. As more AI-powered platforms gain popularity among young people, the stakes of ensuring these technologies are deployed responsibly, with safeguards against harm, only get higher.
 
I mean, I get what Character AI is trying to do here... they wanna make sure their platform isn't turning into a way for kids to self-destruct or whatever πŸ˜”. Even if under-18s are less than 10% of those 20 million monthly users, that's still a lot of kids tho... like, what's the exact danger of these chatbots that it needs to be restricted so hard? πŸ€”. I'm all for keeping minors safe online, but let's not forget, parents and guardians are part of this too πŸ‘΄πŸ‘΅. Can't we just have a more nuanced approach here? Like maybe some guidelines or a parental-consent thingy? Not a full-on ban on open-ended chats... that feels like an overreaction to me πŸ€·β€β™‚οΈ.
 
I'm getting really worried about our youngins' online safety! 🀯 Character AI's move is definitely a step in the right direction, but it's about time we had some strict rules around chatbots being used by kids. I mean, up to 10% of 20 million monthly users could be under 18? That's a huge number to be monitoring. And can you blame parents for getting anxious when they hear about that Sewell Setzer III case?

It's not just about preventing suicides or deaths either. There are so many other risks out there - cyberbullying, online harassment... the list goes on! 🚫 We need stricter guidelines and more regulations to ensure these chatbots aren't being used against our kids.

I'm glad some lawmakers are jumping on this bandwagon and pushing for industry-wide changes. And it's about time! Let's hope Character AI's new policies set a good precedent for the rest of us 😊
 
πŸ’‘ just thinkin bout it... if a 14yo kid can die from talkin to a chatbot its time to get real about the impact we let tech have on our youth πŸ€–πŸ‘Ά they need boundaries not more features πŸ“±πŸ˜• and btw whats with all these new laws? how many times do we gotta learn from others mistakes before we start makin changes that actually work πŸ’ͺπŸ½πŸ’»
 
I'm all for protecting kids online πŸ€—, but I gotta say, I'm a bit skeptical about this move by Character AI. I mean, we're talking about 20 million users here, and now they're basically banning minors from using their chatbots? That's a pretty big blanket ban πŸ™…β€β™‚οΈ. What about all the good kids who use these platforms responsibly? Are they just gonna be cut off at the knees?

And what's with the sudden urgency to restrict open-ended chats? Can't we have some nuance here? πŸ€” Maybe it's not always a bad thing for kids to explore their emotions and thoughts with AI characters. I mean, I've had my fair share of late-night conversations with chatbots (don't judge me πŸ˜‚). And now they're telling me that even those can be bad? It just doesn't sit right.

I'm all for safety measures, but let's not get too carried away here πŸ™„. We need to have some kind of middle ground, where we can strike a balance between protecting kids and still giving them the freedom to use these platforms responsibly. Otherwise, I worry that we're gonna end up with a bunch of restrictive laws that stifle innovation and creativity πŸ€–.
 
I think it's a good idea they're taking this step πŸ€”... 18 is like a big deal for most people, but I get why they wanna protect those younger ones, especially after all the cases like Sewell's πŸ’”. It's just common sense, you know? Those chatbots are designed to be entertaining, not therapy sessions πŸ˜‚. I mean, what if someone on the other end is going through some serious stuff and can't even get support from a human? πŸ€·β€β™‚οΈ The company's trying to set an example for the industry, which is cool πŸ‘... now we just gotta wait and see how this all plays out πŸ’¬.
 
I feel like Character AI is taking a huge step in the right direction here πŸ™. I mean, we've all heard horror stories about kids talking to chatbots for hours on end and getting some pretty messed up stuff in return... it's just not worth it πŸ’”. As much as I think tech companies should be pushing boundaries and innovating, safety comes first when it comes to our young'uns πŸ‘Ά. 2 hours of daily access might seem like a big deal, but trust me, it's better than nothing πŸ™…β€β™‚οΈ. We need more companies and governments stepping up to protect these kids from getting hurt by the tech they're developing πŸ’―. It's all about finding that balance between innovation and responsibility 😊.
 
lol what a joke, a 14 yr old dude dies after talking to some AI and now his family is gonna get paid? πŸ€‘ they're just trying to make a buck off a kid's death, sounds like a great example of how the system works πŸ˜’ Character AI needs to do more than just limit chat time, they need to start cracking down on users who are engaging in abusive behavior... or else people will just find ways to circumvent the rules πŸ€”
 
omg this is soooo weird, i mean i get it, character ai wants to protect minors, but two hours of daily chatbot access? what kinda parenting is that πŸ€·β€β™€οΈ anyway idk how they expect kids to not get hooked on these things, it's like trying to cut the internet out of your life lol. and a separate safety lab for their AI characters sounds cool i guess, just wish they'd done this sooner, now there are all these lawsuits out there and someone died over it, so yeah let's be safe and responsible with our tech 🀞
 
I don't think this is a good idea πŸ™…β€β™‚οΈ. I mean, think about it - 20 million monthly users is a lot! And now they're cutting the under-18 crowd off from open-ended chats entirely? That's gonna be a total disaster for people who just wanna talk to their AI friends πŸ’”. What's wrong with setting limits or guidelines instead? This feels like over-regulation πŸ€¦β€β™‚οΈ. Can't the company just come up with some safer alternatives instead of banning it entirely? Like, what about all the people who are gonna get bored and try to find ways around this new rule πŸ€”? And what's with all these lawsuits? Couldn't they have done something about this sooner? πŸ™„
 
I think this is a major step forward for an industry that's been pretty slow on the uptake when it comes to kid safety πŸ™Œ. I mean, 20 million monthly users is a lot of potential harm to be left unchecked. It's not just about Character AI taking responsibility here, it's about all the other platforms and companies that need to step up their game. Two hours of daily chatbot access for under 18s isn't a bad start, but we need more than just a few Band-Aid solutions πŸ€•. What if some kids can't get enough of these AI chats? What if they end up getting in over their heads? We should be making sure these platforms are designed with safety and well-being in mind, not just entertainment value πŸ’».
 
ugh, great, now we gotta limit our fun 🀣. like, what's next? no more midnight talks about life and everything? πŸ€” character ai is trying to save the world one teenager at a time... or so they claim πŸ˜’. 20 million users is still crazy, btw πŸ€‘. i wonder if they'll be able to keep up with all those teens trying to outsmart their chatbot friends πŸ’‘
 
🀯 I think Character AI's decision is a step in the right direction, but it's kinda like they're just patching up the holes instead of fixing the whole system 🐜. If the under-18 crowd (less than 10% of users) is already having problems with chatbots, what's gonna happen to the other 90%? Are the companies just gonna sit back and do nothing while everyone else chats it up with AI characters all day? πŸ€”

I mean, I get where Character AI is coming from - chatbots can be super entertaining, but they're not a substitute for human connection. We need to make sure that our kids have healthy ways to express themselves online, not just rely on these AI characters to fill the void πŸ˜”.

And what's with all these lawsuits? Can't we just have a conversation about this instead of dragging people through court? πŸ€·β€β™‚οΈ It feels like Character AI is being forced to clean up after their own mess, rather than taking proactive steps to prevent problems in the first place πŸ’‘

Anyway, I'm glad someone's finally talking about these issues. We need more transparency and accountability in the tech industry, especially when it comes to protecting our kids 🚨
 
πŸ€” gotta wonder if this is just the beginning? i mean, 20 million users is still a pretty small fraction of the platform's potential userbase, so it feels like Character AI is being super cautious here πŸ™…β€β™‚οΈ. i guess it's better than nothing tho 😊. they're trying to set an example for the industry, but it also makes me think about how easy it'll be for other companies to point at this one move and then quietly relax their own safety measures πŸ’Έ. what's gonna happen when some kid decides to take a chatbot to its limits and... well, you know? πŸ€• gotta keep having these conversations about AI responsibility tho πŸ‘€
 
Wow! 🀯 This is crazy, right? I mean, 20 million monthly users using chatbots all day and night... What if they get too attached or feel like they're having a real conversation? It's like, AI is getting so good that it's almost realistic 😱. But at the same time, I get why Character AI is taking this step. Kids need protection, especially online. Two hours of daily chatbot access for minors is still better than nothing πŸ™. Maybe this is the start of something big – industry-wide regulations to keep kids safe online πŸ’». Interesting! πŸ‘€
 
omg u guys character ai is taking a super drastic step to keep minors safe from those chatbots πŸ€–πŸ’” it's actually kinda reassuring to see them take responsibility like this? i mean, i know some ppl might be all like "it's just a game" but we gotta remember that AI can have serious effects on our mental health πŸ€• especially when it comes to teens who are already dealing with so much stress and pressure in life

anyway, i think it's cool that they're setting an example for the industry and trying to create safer features for younger users 🎨 like video creation with AI characters? that sounds super fun and creative πŸŽ‰ but also a good way to keep them safe online

i'm also hoping that other companies take note of this and start implementing similar safety measures 🀞 we gotta make sure that AI is used in a way that benefits everyone, not just the company making money from it πŸ’Έ
 