UK privacy watchdog opens inquiry into X over Grok AI sexual deepfakes


The UK's Information Commissioner's Office (ICO) has launched an investigation into the social media platform X and its parent company xAI over the alleged misuse of the Grok AI tool to create and spread indecent deepfakes without people's consent. The probe follows reports that Grok was used on the platform to mass-produce partially nude images of girls and women, as well as to generate sexualized deepfakes.

The ICO is examining whether X and xAI broke data protection laws, including the UK General Data Protection Regulation (UK GDPR), which requires that personal data be processed fairly, lawfully, and transparently. The watchdog is particularly concerned about how people's personal data was used to generate intimate or sexualized images without their knowledge or consent.

The investigation comes after French prosecutors raided X's Paris headquarters as part of an investigation into alleged offenses including the spreading of child abuse images and sexually explicit deepfakes. X has since announced measures to counter the abuses, but several regulatory and legal investigations have followed.

Critics argue that the misuse of AI-generated imagery raises serious questions about data protection law, particularly where children are concerned. Iain Wilson, a lawyer at Brett Wilson, said the ICO's investigation raises "serious questions about the nature of AI-generated imagery and how it is sourced." He added that if photographs of living individuals were used to generate non-consensual sexual imagery, it would be an "egregious breach" of data protection law.

The incident has sparked calls for greater regulation and oversight of AI-powered tools. A cross-party group of MPs led by Labour's Anneliese Dodds has written to the technology secretary urging the government to introduce AI legislation to prevent a repeat of the Grok scandal. The proposed legislation would require AI developers to thoroughly assess the risks posed by their products before they are released.

The ICO's investigation is ongoing. If X is found to have breached data protection law, it could face a fine of up to £17.5 million or 4% of its global annual turnover, whichever is higher; the company's revenues were estimated at around $2.3 billion (£1.7 billion) last year.
 
🤔 I'm not buying it that X didn't know their AI tool was being used to create and spread those sick deepfakes. Like, who lets an AI system run wild like that? 😒 The French raid on their HQ is a good start, but we need more concrete action. Can someone please spill the beans about how this whole thing went down? I mean, who exactly was behind the Grok AI tool and what were they thinking? 🤷‍♂️
 

 
🤯 this whole thing is wild rn... i mean, we're living in an era where AI-powered deepfakes can create fake vids of someone you know getting intimate without their consent and there's no real way to tell what's real and what's not 🕵️‍♀️. its like, we're already dealing with the consequences of social media obsession and now we gotta worry about our own bodies being used against us? and yeah, i think it's time for some major regulatory changes 💻. cant just have companies running around creating these tools without proper oversight or even just basic human decency 🙅‍♂️. its not like this is a new thing but rather how we're gonna deal with the repercussions... gotta make sure our data is protected, period 💸
 
this is totally insane 🤯, i mean who gives permission for their photos to be used in deepfakes? and now someone's getting fined like $2.3 billion?? 😲 that's crazy! we need stricter laws around data protection, especially with AI tools. it's not just about consent, but also how these images are being used to manipulate people. we can't just sit back and let this happen 🙅‍♂️ the proposed legislation sounds like a good start, but we need more action taken ASAP ⏰
 
omg can you believe what's going on on x?! 🤯 they're basically creating and spreading super explicit deepfakes without anyone's consent it's wild how much harm these AI tools can do, like i get that tech companies wanna push the boundaries but this is just not cool. i think the ICO needs to really dig into this and make sure those in charge are held accountable. 4% of their revenue as a fine seems kinda low tho 🤑
 
😒 this is crazy! how can one platform just create and spread these gross deepfakes without anyone even thinking about it? 🤯 its not like they were just experimenting or testing boundaries, nope, they actively used people's pics to make these sick images 💔 and now the whole UK's data protection watchdog has to step in? 😬 its only gonna get worse if they don't do something ASAP 👊
 
omg u guys i cant even 🤯 this is wild! x's grok ai tool sounds like a total nightmare 🚫 how r they supposed to regulate AI-powered deepfakes tho? its like, we're already living in a movie 🎥 i mean, who needs consent when ur face can be Photoshopped into a sick vid lol 🤪 but seriously though this is serious business 📊 and im all for stricter laws and regulations around AI development 💻 gotta protect people's data and rights 👍
 
omg u guys can u even believe what x did? i mean i know they said they were gonna do somethin but come on! 🤯 creating and spreadin indecent deepfakes without people's consent is lowkey a crime against humanity lolololol the fact that its using childrens pics to make these stuffs like what even is wrong with ppl?! 🙅‍♀️ the ICO gotta take action ASAP cuz this is a major breach of data protection law and im all for the proposed legislation btw 4% fine or £17.5 million sounds like a sweet treat to me but lets be real its not about the money its about holdin these corporations accountable for their actions 🤑
 