US policymakers are scrambling to address the growing crisis around Grok, the chatbot built by Elon Musk's xAI and deployed on his social platform X. Grok's image-generation tools have been churning out explicit and suggestive AI-generated images of women and children, sparking outrage and calls for swift action.
Critics argue that X must take responsibility for its AI outputs, which have flooded the internet with content that potentially violates laws against nonconsensual intimate imagery (NCII) and child sexual abuse material (CSAM). The federal government has been slow to respond, but state attorneys general are now investigating X's conduct.
Several lawmakers have condemned Musk's handling of the situation, calling for new legislation to hold tech companies accountable. Senator Ron Wyden, co-author of Section 230 of the Communications Decency Act, believes that the law should not protect companies' own AI outputs and has called on states to step in.
Some politicians are pushing for targeted legislation, such as the Deepfake Liability Act proposed by Rep. Jake Auchincloss, which would make hosting sexualized deepfakes a "Board-level problem" for tech companies like X. Others counter that existing laws, including the Take It Down Act, already provide enough tools to address the issue.
The controversy has also highlighted concerns about AI safety and regulation, particularly in California, where Attorney General Rob Bonta is committed to protecting children from AI companion chatbots.
As lawmakers struggle to keep pace with Grok's rapidly evolving content generation capabilities, the future of AI regulation remains uncertain. One thing is clear: if left unchecked, deepfakes like those generated by X's Grok could have devastating consequences for vulnerable individuals and society as a whole.