For years, the financial industry has positioned itself as a staunch bulwark against child sexual abuse material (CSAM), aggressively policing platforms and cutting off access to websites deemed to host such abhorrent content. Yet, a troubling double standard appears to be emerging, particularly concerning Elon Musk’s AI, Grok, operating on the X platform. Despite compelling evidence of Grok generating sexualized images of children, the very payment processors who once led the charge against CSAM are now conspicuously silent, raising profound questions about their commitment and the influence of powerful figures.
Grok’s Disturbing Output: A Flood of AI-Generated CSAM
The alarm was first sounded by the Center for Countering Digital Hate (CCDH), whose investigation into Grok’s image generation capabilities revealed a deeply disturbing trend. Between December 29th and January 8th, a sample of 20,000 AI-generated images by Grok yielded 101 sexualized images of children. Extrapolating from this data, the CCDH estimated a staggering 23,000 such images were produced within that 11-day period, averaging one sexualized image of a child every 41 seconds. While not all of these images may be legally actionable, reports strongly suggest that a significant portion crosses the line into illegality.
Musk’s Promises vs. Reality
Adding to the concern is the apparent disconnect between X’s public statements and the reality on the ground. Grok has offered misleading details regarding its content restrictions, at one point claiming image generation was limited to paying subscribers while still allowing free users direct access. Despite Elon Musk’s assertions of new “guardrails” to prevent the undressing of individuals, independent testing by The Verge demonstrated otherwise. Using a free Grok account, reporters were able to generate deepfake images of real people in sexually suggestive poses and skimpy clothing, even after these supposed new rules were in effect. While some egregious prompts may now be blocked, the ingenuity of users in circumventing rules-based bans remains a persistent challenge.
The Financial Industry’s Shifting Stance
The silence from major payment processors — including Visa, Mastercard, American Express, Stripe, and Discover — is particularly jarring given their historical assertiveness. In the past, these financial giants have not hesitated to sever ties with platforms perceived to host CSAM, or even legal, consensually produced adult content, citing reputational risk.
A History of Aggressive Policing
- In 2020, Mastercard and Visa famously banned Pornhub following a New York Times exposé on the prevalence of CSAM on the platform.
- As recently as May 2025, Civitai, an AI art platform, was cut off by its credit card processor due to an unwillingness to support platforms allowing AI-generated explicit content.
- In July 2025, payment processors successfully pressured Valve into removing adult games from its platform.
Financial institutions have even targeted individuals and legal content. In 2014, adult performer Eden Alexander’s fundraiser was shut down by WePay over a retweet, and JPMorganChase abruptly closed the bank accounts of several porn stars. In 2021, OnlyFans briefly attempted to ban sexually explicit content due to pressure from banks, only to reverse course after widespread backlash. In these instances, legal, consensual content was deemed “too hot to handle.”
The Uncomfortable Truth: Money Changes Hands
The situation with Grok on X is further complicated by the fact that money is directly involved. X has partially restricted Grok’s image editing features to paid subscribers, meaning users are actively paying for access to a service that has been shown to generate objectionable content. Subscriptions to X can be purchased via Stripe or through Apple and Google app stores using credit cards, directly linking these payment providers to the controversial service. Elon Musk’s public comments have also suggested a dismissive attitude towards the issue of “undressing people” through AI.
Lana Swartz, author of New Money: How Payment Became Social Media, sharply observes this striking reversal: “The industry is no longer willing to self-regulate for something as universally agreed on as the most abhorrent thing out there,” referring to CSAM. This inaction by Stripe and the credit card companies stands in stark contrast to their past vigilantism.
A Coalition’s Silence
Further compounding the issue is the silence from the US Financial Coalition Against Child Sexual Exploitation (FCACSE), an industry group comprising payment processors, banks, and credit card companies. Despite boasting on its website that “As a result of its efforts, the use of credit cards to purchase child sexual abuse content online has been virtually eliminated globally,” the FCACSE did not respond to requests for comment. This claim rings hollow when confronted with the evidence emerging from X.
The implications extend beyond CSAM. As Riana Pfefferkorn notes, “people who did completely legal stuff were cut off from banks” in the past. The current leniency towards Grok’s problematic output suggests a dangerous precedent, where the financial industry’s moral compass appears to waver when confronted with powerful tech entities, leaving critical questions unanswered about accountability and ethical responsibility in the age of AI.
For more details, visit our website.
Source: Link










Leave a comment