A disturbing trend has emerged on the social media platform X, where users are exploiting the AI-powered image-editing tool Grok to create and share nonconsensual sexualized edits of women wearing hijabs, saris, and other modest religious or cultural clothing. The AI-generated images often show women being "undressed" into revealing outfits or having their head coverings removed.
According to a review of 500 Grok images generated between January 6 and January 9, around 5% of the outputs depicted a woman being stripped of, or made to wear, religious or cultural clothing. Indian saris and modest Islamic wear were among the most common examples, with Japanese school uniforms, burqas, and early-20th-century-style bathing suits also making appearances.
The phenomenon has sparked concerns among experts, including lawyer and PhD candidate Noelle Martin, who notes that women of color have been disproportionately affected by manipulated intimate images. Martin believes that the use of Grok to create such content is a form of harassment and propaganda against Muslim women, often used as a means of control and humiliation.
X has since taken steps to limit the ability to request images from Grok in replies to public posts for users who don't subscribe to the platform's paid tier. However, it remains unclear whether this move will be sufficient to address the issue, given that users can still create graphic content using the private Grok chatbot function or standalone app.
Grok's removal or addition of hijabs has also drawn scrutiny, with civil rights law professor Mary Anne Franks arguing that these edits represent a subtler but more insidious form of control over women's likenesses. Franks believes the technology has the potential to be used in even more severe ways behind the scenes, in cases that have yet to come to light.
The incident highlights the need for greater regulation of, and accountability from, social media platforms when it comes to image-based sexual abuse and manipulation. Because deepfakes targeting women of color and specific religious and ethnic groups receive less attention, experts warn that existing laws may not be sufficient to address this emerging form of abuse.