UK's Data Protection Watchdog Launches Inquiry into X Over AI-Powered Deepfakes
The UK's Information Commissioner's Office (ICO) has launched an investigation into social media platform X and its parent company xAI over the misuse of their Grok AI tool to create and spread indecent deepfakes without people's consent. The probe follows reports that Grok's account on the platform was used to mass-produce partially nude images of girls and women, as well as other sexualized deepfakes.
The ICO is examining whether X and xAI broke data protection laws, including the UK General Data Protection Regulation (UK GDPR), which requires that personal data be processed fairly, lawfully, and transparently. The watchdog is particularly concerned about how people's personal data was used to generate intimate or sexualized images without their knowledge or consent.
The investigation comes after French prosecutors raided X's Paris headquarters as part of an investigation into alleged offenses including the spreading of child abuse images and sexually explicit deepfakes. X has since announced measures to counter the abuses, but several regulatory and legal investigations have followed.
Critics argue that the misuse of AI-generated imagery raises serious questions about data protection law, particularly when it comes to children. Iain Wilson, a lawyer at Brett Wilson, said that the ICO's investigation raises "serious questions about the nature of AI-generated imagery and how it is sourced." He added that if photographs of living individuals were used to generate non-consensual sexual imagery, it would be an "egregious breach" of data protection law.
The incident has sparked calls for greater regulation and oversight of AI-powered tools. A cross-party group of MPs led by Labour's Anneliese Dodds has written to the technology secretary urging the government to introduce AI legislation to prevent a repeat of the Grok scandal. The proposed legislation would require AI developers to thoroughly assess the risks posed by their products before they are released.
The ICO's investigation is ongoing. If the company is found to have breached data protection law, it could face a fine of up to £17.5 million or 4% of its global annual turnover, whichever is higher. X's revenues were estimated at around $2.3 billion (£1.7 billion) last year.