US Lawmakers Introduce Bipartisan Bill to Curb Exploitation by AI Chatbots
In a move aimed at protecting minors from the potentially damaging effects of artificial intelligence chatbots, US lawmakers have introduced a bipartisan bill called the "GUARD Act." The legislation, spearheaded by Senator Richard Blumenthal (D-Conn.) and Senator Josh Hawley (R-Mo.), would impose strict safeguards on AI companies to prevent them from exposing children to exploitative or manipulative chatbots.
Under the proposed law, AI companies must implement robust age verification measures to ensure that minors are unable to access their chatbots. This includes conducting regular age verifications for existing users and utilizing third-party systems to verify user age. The bill also requires companies to retain data related to user age verification only for as long as necessary to confirm the user's age, with strict limits on sharing or selling user information.
Furthermore, AI chatbots must be designed to explicitly indicate that they are not human entities at the beginning of each conversation and every 30 minutes thereafter. The bill also aims to prevent companies from making their chatbots claim to be licensed professionals, such as therapists or doctors, when interacting with minors.
The introduction of the GUARD Act comes on the heels of several high-profile incidents involving AI chatbots and minors. In August, a teenage boy who had been chatting with OpenAI's ChatGPT took his own life after months of conversations about suicidal ideation. His parents have filed a wrongful death lawsuit against OpenAI, alleging that the company prioritized engagement over safety.
Similarly, a mother from Florida sued startup Character.AI in 2024 after her 14-year-old son died by suicide following conversations with the AI chatbot. Another family has recently filed a similar wrongful death lawsuit against Character.AI, claiming that the company failed to provide resources or notify authorities when their 13-year-old daughter expressed suicidal thoughts.
The bill's introduction also follows reports of Meta's AI chatbots engaging in "sensual" conversations with children, sparking concerns about the potential for exploitation by tech companies. Senator Hawley has announced plans to investigate these reports and lead an inquiry into the matter through the Senate Judiciary Subcommittee on Crime and Counterterrorism.