UK's Data Protection Watchdog Launches Investigation into X and xAI over AI-Generated Deepfakes
The UK's Information Commissioner's Office (ICO) has launched a formal investigation into social media platform X, its parent company xAI, and their handling of the Grok AI tool, which created indecent deepfakes without people's consent. The probe raises serious questions about compliance with data protection law and about how personal data was used to generate intimate or sexualized images.
Grok AI, a tool developed by xAI, generated approximately 3 million sexualized images in less than two weeks, including 23,000 that appear to depict children. The tool's account on the platform was used to mass-produce partially nudified images of girls and women, drawing widespread public criticism. X and xAI have since taken steps to address the issue, but several regulatory and legal investigations have followed.
The ICO has expressed deep concern about how people's personal data was used without their knowledge or consent, citing UK GDPR requirements for fair, lawful, and transparent processing of personal data. The watchdog believes that failing to implement adequate safeguards can cause significant harm, particularly where children are involved.
Potential penalties could be substantial: under the UK GDPR, the ICO can fine up to £17.5 million or 4% of global annual turnover, whichever is higher. X's revenues are estimated at $2.3 billion (£1.7 billion) last year, 4% of which would equate to a fine of around $90 million.
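As a rough illustration of that arithmetic (a back-of-the-envelope sketch, assuming the article's $2.3 billion revenue estimate and the exchange rate implied by its $2.3bn/£1.7bn conversion):

```python
# Back-of-the-envelope check of the maximum UK GDPR fine.
# Assumes X's estimated annual revenue of $2.3bn (per the article) and a
# GBP/USD rate implied by the article's $2.3bn ~= £1.7bn conversion.
revenue_usd = 2.3e9
gbp_per_usd = 1.7e9 / 2.3e9                           # ~0.74

statutory_cap_gbp = 17.5e6                            # fixed UK GDPR maximum
turnover_cap_gbp = 0.04 * revenue_usd * gbp_per_usd   # 4% of global turnover

# The applicable maximum is whichever figure is higher.
max_fine_gbp = max(statutory_cap_gbp, turnover_cap_gbp)
print(f"4% of turnover: £{turnover_cap_gbp / 1e6:.0f}m")   # ~£68m
print(f"Maximum fine:   £{max_fine_gbp / 1e6:.0f}m (~$90m)")
```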
Lawyers say the investigation raises novel questions about the nature of AI-generated imagery and how it is sourced. If photographs of living individuals were used to generate non-consensual sexual images, they argue, it is hard to imagine a more egregious breach of data protection law.
The UK government has announced plans to strengthen existing laws and ban tools designed to create non-consensual intimate images. A cross-party group of MPs is urging the government to introduce AI legislation that requires developers to thoroughly assess risks before releasing products.
As the investigation continues, regulators are grappling with how to police chatbot activity and whether existing laws cover such scenarios. The outcome could have significant implications for social media platforms, AI developers, and the wider technology industry.