Character AI, the platform that lets users create their own customizable genAI chatbots, has taken a drastic measure in response to mounting pressure and controversy. The startup has announced it will ban kids from interacting with its chatbots altogether, citing concerns over safety and well-being.
The decision comes after several lawsuits alleged that the company's chatbots encouraged young users to self-harm and, in some cases, contributed to their suicides. The platform has also drawn outrage over the kinds of characters users have created, including a Jeffrey Epstein-themed chatbot.
Character AI's founders have come under fire for their handling of these issues, with some critics accusing them of prioritizing profits over user safety. The ban on minors is a conservative step, but one that may be necessary to protect teens while still offering young users creative outlets.
To address these concerns, Character AI plans to establish and fund an "AI Safety Lab," an independent non-profit dedicated to advancing safety alignment for next-generation AI entertainment features. The lab's goal is to develop safer, more responsible AI technologies suitable for all ages.
The move follows intense pressure from lawmakers, with Congress introducing a bill dubbed the GUARD Act that would force companies like Character AI to implement age verification on their sites and block users under 18 years old. Senator Josh Hawley stated that "AI chatbots pose a serious threat to our kids," echoing concerns raised by parents who claim their children attempted or died by suicide after interacting with the company's services.
While Character AI's spokesperson has argued that user-created characters are intended for entertainment, the company has faced criticism over explicit content and disturbing personas on the platform. These include chatbots modeled on real people, chatbots that promote dangerous ideologies, and chatbots that ask minors for personal information.
To mitigate these risks, Character AI will cap chat time for users under 18 at two hours per day in the lead-up to November 25th. After that date, minors will no longer be able to hold open-ended conversations with the site's chatbots. Instead, young users will be able to create videos, stories, and streams without chatting with characters directly.
AI safety remains a pressing concern, with lawmakers and regulators pushing the industry toward more responsible practices. Character AI's ban on minors may set a precedent for balancing teen safety with creative outlets, though whether it improves user well-being remains to be seen.