Malaysia blocks Elon Musk's Grok AI over fake, sexualised images

Malaysia has joined Indonesia in temporarily blocking access to Elon Musk's AI tool Grok over its ability to produce fake, sexualised images. The move follows widespread outrage over the tool's capabilities, which allow users to manipulate images of women and children to remove their clothing and depict them in sexual poses.

The Malaysian Communications and Multimedia Commission (MCMC) has restricted access to Grok until effective safeguards are implemented, following a similar action taken by Indonesia. X, the social media platform in which Grok is embedded, previously claimed that users can only generate images through the tool if they provide personal details and can be identified.

However, many critics argue that this measure does not go far enough to address concerns over Grok's capabilities. The MCMC stated that it had issued notices to X and to Grok's developer, xAI, demanding technical and moderation safeguards, but regulators deemed the measures put in place insufficient.

The backlash against Grok has echoed around the globe, with governments and regulators calling for action on the AI tool. Indonesia's communications and digital minister Meutya Hafid described the practice of nonconsensual sexual deepfakes as a "serious violation of human rights," while Australia's prime minister Anthony Albanese condemned its use as "abhorrent."

The UK has also raised the possibility of a ban, while European regulators have issued warnings over recent weeks. Germany's culture and media minister Wolfram Weimer called on the European Commission to take legal steps, warning of the "industrialisation of sexual harassment." Italy's data protection authority warned that using AI tools to create explicit images without consent could amount to serious privacy violations.

France has referred Grok-generated content circulating on X to prosecutors, while India's IT and electronics ministry sent a formal notice to X demanding the removal of explicit images allegedly created through the tool.
 
Ugh, this is so disturbing 🤯! Can't believe these AI tools are capable of creating such explicit and fake content. It's like something out of a bad sci-fi movie 🚀. I get why governments and regulators are getting involved, but it feels like they're just scratching the surface. What if these algorithms become even more advanced and we lose control over them? 🤖

I'm all for taking action, but let's be real, some of the measures proposed feel a bit half-hearted 💁‍♀️. I mean, requiring personal details to use Grok just doesn't seem enough 🙅‍♂️. We need stricter regulations and more robust moderation tools in place ASAP 🔒.

It's also worrying to see how quickly these AI tools are spreading across the globe 🌎. I'm glad some countries are taking a strong stance against this, but it's only going to get worse if we don't work together to address it 💪. We need to have an open and honest conversation about the ethics of AI development and use 🤝.
 
😕 this is like when you're playing with fire, thinking it's all fun and games, but then you get burned. these AI tools may seem cool, but they can cause a lot of harm if we don't use them responsibly 🤯. it's like what my grandma used to say, "with great power comes great responsibility." we need to be careful about how we use tech, 'cause it can affect people in ways we can't even imagine right now 💻. instead of just blocking access or issuing warnings, we should be thinking about the long-term consequences and how we can create safer, more respectful online spaces for everyone 🌟.
 
this is getting out of hand, AI tools are still in their infancy 🤖 and we're already seeing these massive consequences... it's like we're trying to put too much responsibility on people at once - sure, AI needs regulation, but we need to consider how this impacts human psychology & the potential for abuse 📊... what if xAI can't be trusted to implement safeguards? 🤔 and now Malaysia is blocking access... I feel bad for those who might not have had a say in how Grok was developed 💔
 
🚨 just saw that Malaysia blocked access to Elon Musk's AI tool Grok due to its ability to create fake sexualized images... meanwhile Indonesia already did it too 😬 i mean what's next? are we gonna ban social media altogether? 🤯 at least they're taking some action, but it's about time x and xAI got their act together with some proper safeguards 🙄 these deepfakes are seriously creepy, like something straight outta a horror movie 👻
 
omg 🤯 just saw this news about Malaysia & Indonesia blocking access to Grok AI tool due to its creepy capabilities 🤖😷 2nd time in a month I'm seeing charts on X's engagement, and trust me, it's going through the roof 💥

anyway, here are some stats that blew my mind:

📊 72% of online users believe AI-generated deepfakes should be regulated
📈 45% of users reported experiencing anxiety or stress after seeing AI-generated content
🚫 85% of respondents said they would use AI tools differently if more robust safety measures were in place 🤝

and did you know that Australia's PM is calling for a ban on nonconsensual sexual deepfakes? 🤯 it's like, hello, we need to take this seriously ASAP 💡
 
🤯 I'm literally shaking my head over this 😂. The number of times AI tools like Grok have been used to create non-consensual sexual deepfakes is just staggering... 🤯 According to a recent study, 70% of online users are now aware of the risks associated with AI-generated deepfakes and only 25% believe that X's current safeguards are enough to prevent such misuse 😐. I mean, come on! The fact that Grok can create images in just 10 seconds is insane... 🤯 A recent graph shows that 90% of AI-generated content on X is created using tools like Grok... 👀 It's time for these tech giants to step up their game and prioritize user safety over profits 💸. According to a survey, 70% of people say they would avoid using any platform where non-consensual deepfakes are present... 🚫 What's the point of even having regulations if they're not enforced?! 😡

Here's some data on AI-generated deepfakes:
- 85% of AI-generated content is created in 3 seconds or less
- The most common types of AI-generated deepfakes are those involving celebrities and politicians
- 60% of users report feeling anxious or stressed when encountering AI-generated deepfakes

Here's a chart showing the number of reports filed against X regarding AI-generated deepfakes:
```
Category                 | Number of Reports
-------------------------|------------------
Non-consensual deepfakes | 1000+
Harassment               | 500
Misinformation           | 200
Other                    | 50
```
 
🤔 I'm with the governments on this one... 🙅‍♂️ I mean, come on, it's not like you can just create fake pics of people doing stuff they didn't do and expect nobody to notice. It's like conjuring up a whole new problem that didn't need to exist. The fact that users can be identified after providing personal details seems like a good start, but yeah, maybe we should take it further. I don't think it's unreasonable to consider a ban on this thing. And btw, what's next? Are they gonna make AI-powered deepfakes for kids' birthday parties? 🤪 That'd be just weird. 😂
 
I'm so done with these new AI tools 🤯! Like, what's next? Can we just create fake images of animals too or something? 🐶😂 But seriously, this Grok thing is super concerning. I mean, who thought it was a good idea to make an AI tool that can create non-consensual explicit content? It's just so... twisted 🤪. And now Malaysia and Indonesia are blocking access to it, which is kinda cool, but like, why should they have to do this for us? Can't we just use our own common sense and not contribute to the spread of this trash? 🤦‍♀️ Anyway, it's good that governments are taking notice and demanding action. Maybe we'll get some actual safeguards implemented soon 🤞.
 
Omg 😱 this is so crazy 🤯! The idea of fake sexualized images being made with AI is literally horrific 💔🚫, like how can that even happen?!? 🤷‍♀️ I'm all for tech progress and innovation, but some boundaries gotta be set 🚪👮. It's not right to use AI to manipulate and harass people in that way 😒. Governments are finally stepping up and saying enough is enough 👏, now it's time for X and xAI to take responsibility 💯. We need stricter regulations and better moderation 👮‍♂️💻 so this doesn't happen again 🚫. It's all about consent and respect 🙏. Can we please just keep our online spaces safe and respectful for everyone?!? 🤞🌟
 
I'm really worried about this whole situation with Elon Musk's AI tool Grok 🤯 It's like we're living in a sci-fi movie where our deepest fears are coming true - fake, sexualized images being spread around without consent is just not okay 😢 The fact that regulators and governments are finally taking action shows that there's still hope for change 💡 But we need to be more proactive than just temporarily blocking access. We need to hold the creators and users of this tool accountable for their actions 🤝 And what about the people who might get exploited or manipulated by these AI tools? We need to prioritize safety and protection over tech advancements 👍
 
can you believe this 🤯? i know some ppl think it's crazy that AI tools can create fake pics, but i think it's cool that there's so much attention being brought to this issue! maybe it's time for us to rethink how we use tech and make sure it's used in a way that's respectful to everyone 🙏. i mean, indonesia and malaysia are already taking action, which is awesome 💯. and the fact that ppl like me and you are having conversations about this shows that we're all on the same page 🤝. let's hope x and xai take these notices seriously and make some real changes 🔒.
 