X’s AI Controversy: A ‘Solution’ That Monetizes Harm?
Elon Musk’s social media platform, X, and its AI chatbot, Grok, are embroiled in a deepening controversy surrounding the generation of non-consensual explicit imagery. After revelations of Grok creating thousands of ‘undressing’ pictures of women and sexualized images of apparent minors, X has seemingly implemented a new policy: image generation and editing are now largely restricted to paying subscribers. However, this move, rather than quelling the storm, has ignited further outrage, with critics slamming it as a monetization of abuse rather than a genuine solution.
The Paywall: A Flawed Attempt at Control
On Friday morning, users attempting to generate images with Grok were met with a message stating that the feature is “currently limited to paying subscribers,” directing them towards X’s $395 annual subscription tier. This change comes in the wake of intense scrutiny and growing outrage from regulators worldwide, who are investigating X and xAI (the company behind Grok) over the creation of illicit content. British Prime Minister Keir Starmer has even hinted at the possibility of banning X in the UK, deeming the platform’s actions “unlawful.”
Neither X nor xAI has officially confirmed this shift to a paid-only feature. While an X spokesperson acknowledged a WIRED inquiry, no comment was provided. X has previously stated its commitment to taking “action against illegal content,” including child sexual abuse material. Yet, unlike Apple and Google, which have banned apps with similar ‘nudify’ features, X and Grok remain available in their respective app stores.
Abuse Continues, Just Behind a Price Tag
Despite the new restrictions on free accounts, the core problem persists. For over a week, users on X have been prompting Grok to edit images of women to remove their clothes, often requesting “string” or “transparent” bikinis. While a public feed of Grok-generated images showed fewer such results on Friday, the chatbot continued to create sexualized images when prompted by X users with paid, “verified” accounts.
Paul Bouchaud, lead researcher at Paris-based nonprofit AI Forensics, observed, “We observe the same kind of prompt, we observe the same kind of outcome, just fewer than before. The model can continue to generate bikini [images].” A WIRED review confirmed this, identifying Grok generating images in response to requests like “put her in latex lingerie” and “put her in a plastic bikini and cover her in donut white glaze,” albeit behind a “content warning” box.
Furthermore, Grok’s standalone website and app, separate from the X platform, have also been utilized to create highly graphic and sometimes violent sexual videos, including those featuring celebrities and other real people. Bouchaud confirmed that generating such videos remains possible even from unverified accounts on the app, highlighting the limited scope of X’s platform-specific changes.
Experts Condemn ‘Monetization of Abuse’
While the move to restrict image generation on X might marginally reduce the volume of harmful material, experts and advocacy groups are united in their condemnation, viewing it as a superficial fix that fails to address the root cause.
Emma Pickering, head of technology-facilitated abuse at UK domestic abuse charity Refuge, stated, “The recent decision to restrict access to paying subscribers is not only inadequate—it represents the monetization of abuse… The abuse has not been stopped. It has simply been placed behind a paywall, allowing X to profit from harm.”
The British government echoed this sentiment, calling the change “insulting” to victims and asserting that it “simply turns an AI feature that allows the creation of unlawful images into a premium service.”
Henry Ajder, a deepfake expert, emphasized, “While it may allow X to share information with law enforcement about perpetrators, it doesn’t address the fundamental issue of the model’s capabilities and alignment. For the cost of a month’s membership, it seems likely I could still create the offending content using a fake name and a disposable payment method.”
AI Forensics’ Bouchaud concluded with a stark assessment of X’s inaction: “They could have removed abusive material, but they did not. They could have disabled Grok to generate images altogether, but they did not. They could have disabled the Grok application to generate pornographic videos.” The consensus is clear: X’s latest move is a band-aid on a gaping wound, prioritizing profit over genuine safety and accountability.
For more details, visit our website.
Source: Link







