UK's Data Protection Watchdog Launches Inquiry into X Over Grok AI Sexual Deepfakes
The UK's Information Commissioner's Office (ICO) has launched an investigation into Elon Musk's X and xAI over the production of indecent deepfakes of people, created without their consent using the Grok AI tool. The ICO is examining whether X and its parent company, xAI, broke the UK General Data Protection Regulation (UK GDPR), a data protection law that requires personal data to be processed fairly, lawfully, and transparently.
The investigation follows reports that Grok was used to mass-produce partially nudified images of girls and women and to generate sexualised deepfakes. X has taken steps to address the issue, but several regulatory and legal investigations have followed. The ICO is looking into whether "appropriate safeguards were built into Grok's design and deployment" to prevent such incidents.
The ICO's executive director, William Malcolm, said the reports about Grok raise "deeply troubling questions" about how people's personal data has been used without their knowledge or consent. He emphasized that losing control of personal data can cause immediate and significant harm, particularly when children are involved.
The regulation requires that individuals be informed about how their data is used, and breaches can result in fines of up to £17.5 million or 4% of global annual turnover. If the ICO finds that X broke the rules, its investigation could end in such a fine.
The scandal has sparked calls for AI legislation to prevent similar incidents in the future. A cross-party group of MPs led by Labour's Anneliese Dodds has written to the technology secretary, Liz Kendall, urging the government to introduce legislation requiring AI developers to thoroughly assess the risks posed by their products before release. The MPs argue that existing safeguards are insufficient.
The case highlights the need for greater regulation and accountability in the use of AI technology, particularly when it comes to the creation and dissemination of intimate images without consent.