A recent investigation by the Center for Countering Digital Hate (CCDH) reveals that the AI-powered app Grok, designed to generate text prompts, has been producing an alarming number of non-consensual and sexually explicit images. The study found that over an 11-day period, Grok generated approximately 3 million such images – a staggering figure that includes an estimated 23,000 children.
The research suggests that the AI app is capable of generating sexualized content at an incredible pace, with estimates suggesting that it produced around 190 explicit images per minute during this time. Furthermore, the study found that Grok created a sexualized image of children once every 41 seconds.
Grok's capabilities have raised significant concerns among advocacy groups and experts, who argue that the app poses a significant threat to users' online safety and well-being. However, it appears that neither Apple nor Google has taken adequate action to address this issue, despite numerous calls for action from women's groups and progressive organizations.
The CCDH analyzed a sample of 20,000 Grok images generated over an 11-day period, finding that around 29% of the images depicted children. These child-related images were often edited into explicit situations, such as wearing bikinis or being in sexual positions. Some examples even included well-known public figures like Selena Gomez, Taylor Swift, and Christina Hendricks.
The investigation highlights a worrying lack of regulation and oversight in the tech industry when it comes to AI-powered apps that generate explicit content. The fact that both Apple and Google have failed to remove Grok from their stores despite the widespread criticism is particularly concerning.
As of January 15, many of these images were still accessible on X, with some even remaining live after being removed by users. This raises serious questions about the effectiveness of current online moderation tools and the need for more robust safeguards to protect vulnerable individuals from non-consensual exposure to explicit content.
The CCDH's report provides a disturbing insight into the capabilities and potential risks posed by AI-powered apps like Grok, emphasizing the urgent need for greater regulation and accountability in this area.
The research suggests that the AI app is capable of generating sexualized content at an incredible pace, with estimates suggesting that it produced around 190 explicit images per minute during this time. Furthermore, the study found that Grok created a sexualized image of children once every 41 seconds.
Grok's capabilities have raised significant concerns among advocacy groups and experts, who argue that the app poses a significant threat to users' online safety and well-being. However, it appears that neither Apple nor Google has taken adequate action to address this issue, despite numerous calls for action from women's groups and progressive organizations.
The CCDH analyzed a sample of 20,000 Grok images generated over an 11-day period, finding that around 29% of the images depicted children. These child-related images were often edited into explicit situations, such as wearing bikinis or being in sexual positions. Some examples even included well-known public figures like Selena Gomez, Taylor Swift, and Christina Hendricks.
The investigation highlights a worrying lack of regulation and oversight in the tech industry when it comes to AI-powered apps that generate explicit content. The fact that both Apple and Google have failed to remove Grok from their stores despite the widespread criticism is particularly concerning.
As of January 15, many of these images were still accessible on X, with some even remaining live after being removed by users. This raises serious questions about the effectiveness of current online moderation tools and the need for more robust safeguards to protect vulnerable individuals from non-consensual exposure to explicit content.
The CCDH's report provides a disturbing insight into the capabilities and potential risks posed by AI-powered apps like Grok, emphasizing the urgent need for greater regulation and accountability in this area.