UK privacy watchdog opens inquiry into X over Grok AI sexual deepfakes

The UK's Information Commissioner's Office (ICO) has launched a formal investigation into the social media platform X and its parent company xAI over the misuse of their technology to create and spread indecent deepfakes without people's consent. The probe centers on the Grok AI tool, which was used to generate millions of sexually explicit images in under two weeks.

The ICO has described as "deeply troubling" the way people's personal data was used to create intimate or sexualized images without their knowledge or consent. The watchdog has questioned whether the necessary safeguards were put in place to prevent such misuse and warned that losing control of personal data can cause immediate and significant harm, particularly when children are involved.

The investigation follows sustained public criticism and regulatory scrutiny of X after its Grok account was used to mass-produce partially nude images of girls and women. The company has since taken measures to address the issue, but several investigations have followed. The ICO's executive director, William Malcolm, said the reports about Grok raise serious questions about how people's data is being used.

If found to have breached the UK General Data Protection Regulation (GDPR), X could face a fine of up to £17.5m or 4% of its global annual turnover, whichever is higher; the turnover-based figure is estimated at around $90m based on the company's recent advertising revenue.
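To make those penalty figures concrete, below is a minimal sketch of the "higher of" rule used for the top tier of UK GDPR fines: the greater of £17.5m and 4% of global annual turnover. The turnover value in the example is a purely illustrative assumption, not a figure from X's accounts.
```python
# Minimal sketch of the UK GDPR higher-tier penalty cap:
# the greater of £17.5m and 4% of global annual turnover.

STATUTORY_CAP_GBP = 17_500_000   # fixed higher-tier cap
TURNOVER_RATE = 0.04             # 4% of global annual turnover


def max_fine_gbp(global_turnover_gbp: float) -> float:
    """Return the statutory maximum fine for a higher-tier breach."""
    return max(STATUTORY_CAP_GBP, TURNOVER_RATE * global_turnover_gbp)


if __name__ == "__main__":
    # Hypothetical turnover, for illustration only (not X's reported figure).
    example_turnover_gbp = 1_800_000_000
    print(f"Maximum fine: £{max_fine_gbp(example_turnover_gbp):,.0f}")
    # -> Maximum fine: £72,000,000
```
On assumptions like these, the turnover-based limb rather than the fixed £17.5m cap sets the ceiling, which is why reported estimates run well above £17.5m.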

The probe also raises questions about how AI-generated imagery is produced and where the underlying data is sourced. Iain Wilson, managing partner at law firm Brett Wilson, said it would amount to an "egregious breach of data protection law" if photographs of living individuals were used to generate non-consensual sexual images, particularly where children are involved.

In a separate development, the UK's communications regulator, Ofcom, announced that it was not investigating xAI, which provides the standalone Grok app. However, Ofcom said its investigation into X was still gathering evidence and could take months. X has been given a "full opportunity to make representations" before that inquiry is concluded.

A cross-party group of MPs has also written to the technology secretary, urging the government to introduce AI legislation to prevent a repeat of the Grok scandal. The proposed legislation would require AI developers to thoroughly assess the risks posed by their products before they are released.
 
I'm so worried about this 🤕... They created all those deepfakes without people's consent, it's like they messed with someone's life 😱! The UK's ICO is on the right track investigating X and xAI - we need to make sure these companies follow the rules and respect people's personal data 📊

Here's a simple mind map of what I'm thinking:
```
+-----------------+
| Data Protection |
+-----------------+
        |
        | ICO
        v
+-----------------+     +-----------------+
|    X and xAI    |     |  Grok AI tool   |
|  Investigation  |     |    Deepfakes    |
+-----------------+     +-----------------+
```
We also need to think about the bigger picture here 🤔. If we can't control our data, who can? We need some sort of legislation that makes AI developers think twice before releasing their products 💡

It's not just about X and xAI, it's about all of us - how will this affect social media, online communities, and even our own lives? 🤝
```
+-----------------+
|   The Future    |
+-----------------+
        |
        | We Need Action!
        v
+-----------------+     +-----------------+
|   Regulation    |     |    Awareness    |
|  & Legislation  |     |    Education    |
+-----------------+     +-----------------+
```
 
🤕 This whole thing is just crazy! I mean, can you imagine if that was your own pics being used like that? 📸😱 No one's above getting caught with their pants down when it comes to data protection, but X's gotta be one of the worst offenders. They're basically a ticking time bomb for anyone who gets sucked into this Grok AI mess. It's just not right that they can create and spread these deepfakes without so much as a by-your-leave from the users. The fact that it took them this long to take action, and now they're facing massive fines if they get found guilty... it's like "about bloody time!" 😂
 
🚨 This is getting out of hand with these deepfakes 🤯. I mean, I get that technology advances at an insane pace and we need to regulate it ASAP ⏱️, but X's handling of this situation has been pretty poor 🙈. I think the ICO is right to launch a probe, and if they find X guilty, the fine should be on point 💸. But what really concerns me is that AI-generated imagery can have such devastating consequences, especially when it comes to children 🔴.

We need to get a handle on how these tools are being used and make sure we're protecting people's data above all else 🤝. Iain Wilson makes a great point about the egregious breach of data protection law here - if you use someone's photos without their consent, that's a serious crime 💣. The fact that Ofcom isn't investigating xAI is interesting, though... maybe they're just trying to avoid getting tangled up in the whole mess 😅.

The proposed legislation from those MPs is a good idea, but we need to make sure it's not watered down 🤦‍♀️. We can't let companies like X get away with this nonsense and then expect us to just sit back and watch ⏳. The government needs to take action and regulate AI development properly 💪.
 
omg u guys i cant even lol the whole grok ai situation is so messed up 🤯 like who uses a tool that can create deepfakes without users consent? and xai is just sitting there not doing anything about it 🙄 meanwhile the ICO is all like "deeply troubling" concerns and its like yeah no duh they shouldve had better safeguards in place. i mean i get that tech advancements are cool but we cant forget about human rights and data protection 🤝 and now theres even a bill being proposed to legislate AI? about time someone did something about this 😂
 
I'm so done with these new AI thingies 🤯. Like, I get that X thinks it's some kinda advanced tool, but seriously, who uses technology to create and spread sick deepfakes without people's consent? It's just gross 💔. The ICO is right to crack down on this - losing control of our personal data can be super damaging, especially when minors are involved 🤕.

And what's up with the fact that X didn't even bother checking if their AI tool was being used for this kind of thing? That's some serious negligence 😴. I mean, I know they've taken some steps to address it, but it should've been done a long time ago, you know?

This whole situation just makes me want to go back to using my old phone with a flip camera 📸. At least then I knew what I was getting into. Now we're living in this brave new world where our data is being used and manipulated without us even realizing it 🤖...no thanks, mate 😒
 
🤯 just heard about this 📰 x and its parent company xAI in trouble again 🚨 seems like they made some huge mistakes with their Grok AI tool 🤖 and now the UK's data watchdog is on them 🔍 really worrying that all these deepfakes are being created without people's consent 👀 especially when it comes to children 🙅‍♀️ need to be more careful about how we use tech 📱💻 gotta protect our personal info 🤫 can't let companies just mess around with our data 💸
 
🤔 just think about it, how many times have we seen those explicit pics go around on social media and nobody gets held accountable? 🙅‍♂️ the fact that X and xAI can create millions of them in under two weeks is insane 💥 and what's even crazier is that these companies are profiting from it 😳. it's like they're playing a twisted game with people's personal data 🤯. i mean, we've been warning about the dangers of deepfakes for years now 🚨 but apparently, nobody listened 👂. now the UK is stepping in and holding them accountable 👮‍♀️. this should be a wake-up call for all companies dealing with AI-generated content 📢.

what's really concerning is how this could affect children 🤕. i mean, we've already seen some pretty horrific stuff going around online 😷. if they can create deepfakes of kids without their consent, that's just a whole other level of messed up 🚫. the fact that Ofcom isn't investigating xAI but is still looking into X is interesting 🤔. it raises questions about how these companies are regulated 👊. we need stricter laws and more accountability 💯.

anyway, this whole thing has got me thinking... what's next? will other companies follow suit? 🤷‍♂️ how can we protect ourselves from AI-generated content that could be used to harm us? 🤔 these are the questions we need answers to 💬.
 
omg 🤯 this is like super bad news for X 😱 they're literally gonna get fined up to £17.5m if they get caught breaching GDPR again 💸 and i'm all about justice 🙌 but seriously tho, how can xAI and Grok AI just create millions of indecent deepfakes without anyone's consent? that's like, super creepy 😳 and the fact that it was done with people's personal data is even worse 👎 what's next, gonna make kids' faces pop up in those deepfakes too?! 🤢 i know they took some steps to address the issue but i think xAI should be held accountable 💯
 
I mean, come on... deepfakes are one thing, but when it comes to kids involved? That's like creating a super-spy movie, minus the spies 🤫. I'm not surprised the ICO is cracking down - it's about time someone did! X and xAI should be looking over their shoulders, wondering if they'll get caught up in this mess 🕵️‍♂️. And let's be real, £17.5m fine? That's like a decent-sized vacation package for some of these folks 😜. But seriously, we need to talk about AI responsibility and consent - it's time to set the rules before someone gets hurt 💔.
 