
Amazon’s AI Data Scandal: A ‘High Volume’ of CSAM, But No Answers


The Alarming Discovery and Amazon’s Stance

The burgeoning field of artificial intelligence, while promising innovation, is increasingly grappling with profound ethical challenges. A recent investigation by Bloomberg has cast a stark light on one of the most disturbing issues: Amazon’s discovery of a “high volume” of Child Sexual Abuse Material (CSAM) within its AI training datasets. This revelation, coupled with the tech giant’s inability to provide actionable details on the material’s origin, has ignited a firestorm of concern among child protection advocates and regulators.

The National Center for Missing and Exploited Children (NCMEC) reported a staggering surge in AI-related CSAM, with over one million reports received in 2025 alone. The “vast majority” of these alarming submissions originated from Amazon, which identified the illicit content within the external sources used to train its AI services.

However, Amazon’s response to the critical question of provenance has frustrated child protection advocates. The company stated it could not furnish further details regarding the CSAM’s origin, a position it reiterated to Engadget:

“When we set up this reporting channel in 2024, we informed NCMEC that we would not have sufficient information to create actionable reports, because of the third-party nature of the scanned data. The separate channel ensures that these reports would not dilute the efficacy of our other reporting channels. Because of how this data is sourced, we don’t have the data that comprises an actionable report.”

“An Outlier” in Inactionable Reports

Fallon McNulty, executive director of NCMEC’s CyberTipline – the platform where U.S. companies are legally mandated to report suspected CSAM – described Amazon’s situation as “really an outlier.” She emphasized the critical questions raised by such a high volume of reports lacking actionable data: “Having such a high volume come in throughout the year begs a lot of questions about where the data is coming from, and what safeguards have been put in place.”

Unlike Amazon, other companies reporting AI-related CSAM to NCMEC last year provided actionable intelligence, enabling law enforcement to pursue investigations. Amazon’s inability to disclose sources renders its reports “inactionable,” hindering the fight against child exploitation.

Amazon’s Commitment and ‘Over-Inclusive Threshold’

In additional statements provided to Engadget, Amazon affirmed its commitment to combating CSAM and responsible AI practices:

“Amazon is committed to preventing CSAM across all of its businesses, and we are not aware of any instances of our models generating CSAM. In accordance with our commitments to responsible AI and the Generative AI Principles to Prevent Child Abuse, we take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known CSAM and protect our customers. While our proactive safeguards cannot provide the same detail in NCMEC reports as consumer-facing tools, we stand by our commitment to responsible AI and will continue our work to prevent CSAM.”

The company also attributed the high volume of reported content to an “intentionally over-inclusive threshold for scanning, which yields a high percentage of false positives.” While this proactive approach to detection is commendable, it doesn’t alleviate the core concern regarding the source of the confirmed CSAM and the inability to act upon it.

The Broader Crisis: AI and Child Safety

The issue of CSAM in AI training data is but one facet of a rapidly escalating crisis concerning minors and artificial intelligence. NCMEC’s data paints a grim picture: AI-related CSAM reports surged from a mere 4,700 in 2023 to 67,000 in 2024, culminating in over one million in 2025. This exponential growth underscores the urgent need for robust safeguards and accountability across the AI industry.

Beyond training data, AI chatbots themselves have been implicated in deeply disturbing incidents involving young users. Lawsuits have been filed against OpenAI and Character.AI following instances in which teenagers reportedly used their chatbots while planning suicides. Meta also faces legal action over alleged failures to protect teen users from sexually explicit conversations facilitated by chatbots.

Amazon’s predicament highlights a critical juncture for the AI industry. While the development of powerful AI models continues apace, the ethical responsibilities, particularly concerning child safety, must evolve with equal urgency. The inability to trace the origins of illicit material, even when proactively detected, represents a significant gap in accountability. As AI becomes increasingly integrated into daily life, the imperative for transparency, robust safeguards, and actionable reporting mechanisms is paramount to protect the most vulnerable members of society from its darker applications.

