Character AI, a popular platform that uses artificial intelligence to create chatbots for companionship and entertainment, is taking drastic measures to address growing concerns over child safety. The company announced on Wednesday that, starting November 25, it will bar users under 18 from open-ended chats with its AI characters, effectively ending that core feature for minors.
This move comes amid a flurry of lawsuits filed by families who claim that Character AI's chatbots contributed to the deaths of teenagers. One such case involves Sewell Setzer III, a 14-year-old boy who died by suicide after frequently texting and conversing with one of the platform's chatbots. His family is suing the company, alleging it bears responsibility for his death.
Character AI CEO Karandeep Anand stated that the company wants to set an example for the industry by limiting chatbot use among minors, citing concerns over chatbots becoming a source of entertainment rather than a positive tool for users under 18 years old. The platform currently has about 20 million monthly users, with fewer than 10% self-reporting as being under 18.
Under the new policies, users under 18 will be limited to two hours of daily chatbot access, and the company plans to develop alternative features for younger users, such as video, story, and stream creation with AI characters. Anand also said that Character AI is establishing an AI safety lab to further strengthen its safety measures.
The decision has been welcomed by some lawmakers, who have expressed concerns over the potential risks of unregulated chatbot use among minors. California Governor Gavin Newsom recently signed a law requiring AI companies to have safety guardrails on their chatbots, while Senators Josh Hawley and Richard Blumenthal introduced a bill to ban AI companions from use by minors.
Character AI's move has sparked discussions about the need for industry-wide regulations to protect children from potential harm. With more and more AI-powered platforms becoming popular among youth, the stakes are high in ensuring that these technologies are used responsibly and with safeguards in place to prevent negative consequences.
20 million users under 18 is still a pretty big number tho... like, what's the exact danger of these chatbots that it needs to be restricted so hard?
I'm all for keeping minors safe online but let's not forget, parents and guardians are part of this too
Can't we just have a more nuanced approach here? Like maybe some guidelines or parental consent thingy? Not a full-on ban on open-ended chats... that feels like an overreaction to me
Character AI's move is definitely a step in the right direction, but it's about time we had some strict rules around chatbots being used by kids. I mean, 20 million monthly users? That's a huge number to be monitoring. And can you blame parents for getting anxious when they hear about that Sewell Setzer III case? We need stricter guidelines and more regulations to ensure these chatbots aren't being used against our kids.
just thinkin bout it... if a 14yo kid can die from talkin to a chatbot its time to get real about the impact we let tech have on our youth
they need boundaries not more features
and btw whats with all these new laws? how many times do we gotta learn from others mistakes before we start makin changes that actually work

I gotta say, I'm a bit skeptical about this move by Character AI. I mean, we're talking about 20 million users here, and now they're basically banning minors from using their chatbots? That's a pretty big blanket statement. What about all the good kids who use these platforms responsibly? Are they just gonna be cut off at the knees? And now they're telling me that even those can be bad? It just doesn't sit right. We need to have some kind of middle ground, where we can strike a balance between protecting kids and still giving them the freedom to use these platforms responsibly. Otherwise, I worry that we're gonna end up with a bunch of restrictive laws that stifle innovation and creativity. It's just common sense, you know? Those chatbots are designed to be entertaining, not therapy sessions... now we just gotta wait and see how this all plays out.
I mean, we've all heard horror stories about kids talking to chatbots for hours on end and getting some pretty messed up stuff in return... it's just not worth it. It's all about finding that balance between innovation and responsibility.
they're just trying to make a buck off a kid's death, sounds like a great example of how the system works
Character AI needs to do more than just limit chat time, they need to start cracking down on users who are engaging in abusive behavior... or else people will just find ways to circumvent the rules
anyway idk how they expect kids to not get hooked on these things its like trying to cut off the internet from your life lol. and a separate safety lab for their AI characters sounds cool i guess, just wish they did this sooner now there are all these lawsuits out there and someone died over it so yeah lets be safe and responsible with our tech
Can't the company just come up with some safer alternatives instead of banning it entirely? Like, what about all the people who are gonna get bored and try to find ways around this new rule?
I mean, 20 million monthly users is a lot of potential harm to be left unchecked. It's not just about Character AI taking responsibility here, it's about all the other platforms and companies that need to step up their game. Two hours of daily chatbot access for under 18s isn't a bad start, but we need more than just a few Band-Aid solutions.
What if some kids can't get enough of these AI chats? What if they end up getting in over their heads? We should be making sure these platforms are designed with safety and well-being in mind, not just entertainment value.
like, what's next? no more midnight talks about life and everything?
If 10% of their users are under 18 and are already having problems with chatbots, what's gonna happen to the other 90%? Are they just gonna sit back and do nothing while their kids are chatting it up with AI characters all day?
what's gonna happen when some kid decides to take a chatbot to its limits and... well, you know?
But at the same time, I get why Character AI is taking this step. Kids need protection, especially online. Two hours of daily chatbot access for minors is still better than nothing.
like video creation with AI characters? that sounds super fun and creative, but also a good way to keep them safe online