The cybersecurity landscape is undergoing a concerning evolution: information stealers are no longer content to pilfer browser credentials. A recent discovery by cybersecurity researchers at Hudson Rock reveals a chilling new frontier – the successful exfiltration of an OpenClaw (formerly Clawdbot and Moltbot) AI agent’s entire configuration environment, effectively stealing its “soul” and identity.
The Dawn of AI Agent ‘Soul’ Theft
This incident marks a pivotal moment, signaling a shift in infostealer behavior from traditional credential harvesting to targeting the very operational essence of personal AI agents. Hudson Rock aptly describes this as “harvesting the ‘souls’ and identities of personal AI agents.” Alon Gal, CTO of Hudson Rock, suspects the culprit is a variant of Vidar, an off-the-shelf information stealer active since late 2018.
Crucially, this data capture wasn’t facilitated by a custom OpenClaw module within the stealer malware. Instead, it leveraged a “broad file-grabbing routine” designed to sniff out specific file extensions and directory names associated with sensitive data. That generic routine was enough to capture critical components of the victim’s AI assistant.
What Was Stolen?
openclaw.json: This file contained the OpenClaw gateway token, along with the victim’s redacted email address and workspace path. The theft of this token is particularly dangerous, potentially allowing an attacker to remotely connect to a victim’s local OpenClaw instance (if exposed) or even impersonate the client in authenticated requests to the AI gateway.
device.json: This file held cryptographic keys vital for secure pairing and signing operations within the OpenClaw ecosystem, compromising the integrity of the agent’s secure communications.
soul.md: Perhaps the most metaphorically significant, this file detailed the agent’s core operational principles, behavioral guidelines, and ethical boundaries – truly the “soul” of the AI.
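Because the stealer relied on a generic match over file names rather than any OpenClaw-specific module, users can run the same kind of match defensively to see what a grabber would find on their own machine. A minimal sketch in Python – the idea of scanning a directory tree is an assumption about how such a routine behaves; the three file names are those reported stolen in this incident:

```python
from pathlib import Path

# File names reported stolen in this incident. An infostealer's "broad
# file-grabbing routine" matches on names and extensions much like this.
SENSITIVE_NAMES = {"openclaw.json", "device.json", "soul.md"}

def audit_sensitive_files(root: Path) -> list[Path]:
    """Return files under `root` whose names match the known-sensitive set,
    i.e. what a generic file-grabber could exfiltrate from this machine."""
    return [p for p in root.rglob("*")
            if p.is_file() and p.name in SENSITIVE_NAMES]
```

Running `audit_sensitive_files(Path.home())` would list every copy of these files an opportunistic grabber could sweep up, including forgotten backups outside the agent’s workspace.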
“While the malware may have been looking for standard ‘secrets,’ it inadvertently struck gold by capturing the entire operational context of the user’s AI assistant,” Hudson Rock emphasized. This incident serves as a stark warning: as AI agents become more deeply embedded in professional workflows, infostealer developers are highly likely to develop dedicated modules specifically designed to decrypt and parse these sensitive AI-related files, mirroring their current focus on browsers or messaging apps like Telegram.
Broader Security Concerns Plague OpenClaw Ecosystem
This infostealer revelation comes amidst a backdrop of other escalating security issues surrounding OpenClaw, an open-source agentic platform that has seen a viral surge in interest since its debut in November 2025, boasting over 200,000 stars on GitHub. Its founder, Peter Steinberger, is even set to join OpenAI, with OpenClaw continuing as an open-source project supported by the AI giant.
Malicious Skills and Supply Chain Threats
The maintainers of OpenClaw have already acknowledged security challenges, announcing a partnership with VirusTotal to scan for malicious skills uploaded to ClawHub, their skill registry. They are also working to establish a threat model and audit for potential misconfigurations.
However, the OpenSourceMalware team recently detailed an ongoing ClawHub malicious skills campaign employing a new evasion technique. Instead of embedding payloads directly into SKILL.md files, threat actors are hosting malware on lookalike OpenClaw websites, using the skills purely as decoys to direct users to the malicious external sites. Security researcher Paul McCarty notes, “The shift from embedded payloads to external malware hosting shows threat actors adapting to detection capabilities. As AI skill registries grow, they become increasingly attractive targets for supply chain attacks.”
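Since the decoy skills work by steering users to external lookalike sites rather than embedding payloads, one coarse defensive check is to extract every URL from a skill’s SKILL.md and flag hosts outside an allowlist. A rough Python sketch – the allowlist contents and the whole approach are illustrative assumptions, not ClawHub’s or VirusTotal’s actual scanning logic:

```python
import re
from urllib.parse import urlparse

# Illustrative allowlist; a real scanner would use the project's own domains.
ALLOWED_DOMAINS = {"github.com", "openclaw.ai"}

URL_RE = re.compile(r"https?://[^\s)\"'>]+")

def flag_external_urls(skill_md: str) -> list[str]:
    """Return URLs in a SKILL.md whose host is outside the allowlist --
    candidates for the lookalike-site redirection described above."""
    flagged = []
    for url in URL_RE.findall(skill_md):
        host = urlparse(url).hostname or ""
        # Treat subdomains of allowed domains as allowed too.
        if not any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
            flagged.append(url)
    return flagged
```

An allowlist inverts the detection problem: instead of recognizing every lookalike domain, the scanner only has to know the handful of domains a legitimate skill should reference.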
Undeletable AI Agent Accounts and RCE Risks
Further compounding the security woes, OX Security highlighted a critical flaw in Moltbook, a Reddit-like forum exclusively for AI agents, primarily those running on OpenClaw. Once an AI agent account is created on Moltbook, it cannot be deleted, leaving users with no way to remove the associated data.
Adding to the peril, an analysis by SecurityScorecard’s STRIKE Threat Intelligence team uncovered hundreds of thousands of exposed OpenClaw instances. These exposed instances likely leave users vulnerable to remote code execution (RCE) risks. “RCE vulnerabilities allow an attacker to send a malicious request to a service and execute arbitrary code on the underlying system,” SecurityScorecard explained. “When OpenClaw runs with permissions to email, APIs, cloud services, or internal resources, an RCE vulnerability can become a pivot point. A bad actor does not need to break into multiple systems. They need one exposed service that already has authority to act.”
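SecurityScorecard’s point is that a single internet-reachable service with real authority is enough. A quick self-check is whether a locally running gateway is bound to loopback or to all interfaces. The sketch below is a generic exposure check, assuming you can read the bind address from your own configuration; nothing here reflects documented OpenClaw ports or config keys:

```python
import socket

def bind_exposure(bind_addr: str) -> str:
    """Classify a bind address from a service config: loopback binds accept
    only local connections; 0.0.0.0 or :: accepts connections from anywhere."""
    if bind_addr in ("127.0.0.1", "::1", "localhost"):
        return "local-only"
    if bind_addr in ("0.0.0.0", "::"):
        return "all-interfaces"
    return "specific-interface"

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect to (host, port) succeeds, i.e. something
    is listening there right now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

A gateway classified as "all-interfaces" on a machine with a public IP is exactly the kind of exposed instance the STRIKE scan would find.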
The Future of AI Security: A Call to Vigilance
The targeting of OpenClaw’s configuration files by infostealers, coupled with the broader vulnerabilities within its ecosystem, underscores a critical need for heightened vigilance in the burgeoning AI landscape. As AI agents become indispensable tools, securing their “souls” – their operational data and identities – will become paramount. Developers and users alike must prioritize robust security measures to protect against these evolving and increasingly sophisticated cyber threats.