When Infostealers Target Your AI: How Malware Is Harvesting OpenClaw Secrets

For the first time, infostealer malware has been caught stealing OpenClaw configuration files, API keys, and private cryptographic keys. Here's what happened and how to protect yourself.

Bradley Taylor

A new kind of theft

On February 13, 2026, security researcher Alon Gal at Hudson Rock identified something unprecedented: a variant of the Vidar infostealer malware had been updated to specifically target OpenClaw configuration files. Not browser passwords. Not crypto wallets. The malware was going after AI agent identities.

This is the first documented case of infostealer malware targeting personal AI agents, and it signals a shift in how attackers think about what’s valuable on your machine.

What was stolen

The Vidar variant harvested several files from the OpenClaw directory on infected machines:

openclaw.json — This is the gateway configuration file. It contains authentication tokens that control access to the OpenClaw gateway, the central process that routes messages between your AI agent, communication channels, and external services. With this file, an attacker can impersonate your gateway and intercept or send messages as your agent.

device.json — This file stores private cryptographic keys used for device identity and pairing. OpenClaw uses a device trust model where each machine is individually authorized. Stealing device.json gives an attacker a cloned device identity, allowing them to authenticate as your machine against your agent infrastructure.

soul.md — Your agent’s personality file. This defines how your AI agent behaves, its tone, its priorities, and its instructions for handling different types of requests. Depending on how you’ve configured it, this file may contain sensitive business logic, decision rules, or instructions that reference private workflows.

MEMORY.md — Perhaps the most sensitive file of all. This is your agent’s persistent memory — the running context it uses across conversations. As reported by The Hacker News, this can include private messages, calendar items, personal preferences, relationship details, meeting notes, and anything else the agent has been asked to remember.

Why this is different from stealing a password

Hudson Rock described this attack as “a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the ‘souls’ and identities of personal AI agents.”

That assessment is not hyperbole. Consider what a stolen browser password gets you: access to one account. You can reset the password, enable MFA, and move on. The blast radius is limited.

Now consider what these stolen OpenClaw files get an attacker — and why credentials are just one part of what Palo Alto Networks calls the lethal trifecta of AI agent risk:

A mirror of the victim’s life. As Alon Gal put it: “By stealing OpenClaw files, an attacker does not just get a password; they get a mirror of the victim’s life.” Your agent’s memory is a running log of your context — who you’re working with, what you’re planning, what you’ve discussed in private. This is social engineering ammunition that no amount of password resets can claw back.

Persistent impersonation. With device.json and openclaw.json, an attacker can connect to your agent infrastructure, read your messages, send messages as your agent, and interact with your connected services. They don’t need your password. They have your cryptographic identity.

API key harvesting. If your OpenClaw configuration includes API keys for AI providers, communication platforms, or other services stored in local config, those keys are now compromised. The attacker gains access to every service your agent connects to.

Intelligence gathering at scale. As Techzine reported, this isn’t a targeted nation-state attack. Infostealers like Vidar operate at mass scale, harvesting data from thousands of machines and selling it on dark web markets. The fact that OpenClaw files are now on the harvesting list means they’ll be collected from every infected machine, whether or not the attacker initially cared about AI agents.

Why are AI agent secrets vulnerable to theft?

The core vulnerability here isn’t specific to OpenClaw. It’s an architectural pattern: storing secrets as plaintext files on a user’s local filesystem.

This is how most developer tools work. Your ~/.aws/credentials file has your AWS keys. Your ~/.ssh/id_rsa has your SSH private key. Your .env files have API tokens. Infostealers have been harvesting all of these for years.

What makes the OpenClaw case alarming is the combination of what’s stored: cryptographic identity, authentication tokens, API keys, and deeply personal contextual data, all in a handful of files in a predictable directory location. As SecurityAffairs noted, the attack surface is concentrated and high-value.

This is an inherent risk of running AI agents locally with file-based secret storage. The secrets have to live somewhere, and on a local machine, “somewhere” means the filesystem — where any process running as your user can read them.
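This concentration is easy to see on your own machine. The minimal Python sketch below audits a list of well-known secret file locations and flags any that are group- or world-readable. The `~/.openclaw` paths are assumptions for illustration; the article only notes that the directory location is predictable, so adjust them to wherever your installation actually keeps its files.

```python
"""Audit well-known plaintext secret locations on the local machine.

The ~/.openclaw paths are illustrative assumptions, not confirmed
install locations; edit CANDIDATE_PATHS to match your setup.
"""
import stat
from pathlib import Path

# Classic developer-tool secrets plus the OpenClaw files named above.
CANDIDATE_PATHS = [
    "~/.aws/credentials",
    "~/.ssh/id_rsa",
    "~/.openclaw/openclaw.json",   # assumed location
    "~/.openclaw/device.json",     # assumed location
    "~/.openclaw/soul.md",         # assumed location
    "~/.openclaw/MEMORY.md",       # assumed location
]

def audit(paths=CANDIDATE_PATHS):
    """Return (path, mode string, group_or_world_readable) for each file found."""
    findings = []
    for raw in paths:
        p = Path(raw).expanduser()
        if not p.is_file():
            continue
        mode = p.stat().st_mode
        loose = bool(mode & (stat.S_IRGRP | stat.S_IROTH))
        findings.append((str(p), stat.filemode(mode), loose))
    return findings

if __name__ == "__main__":
    for path, mode, loose in audit():
        flag = "GROUP/WORLD-READABLE" if loose else "present"
        print(f"{mode}  {path}  [{flag}]")
```

Note the limit of this kind of check: even a correct `0600` mode does not stop an infostealer running as your own user. The audit only makes visible how predictable and concentrated the targets are.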

How Alpha Agent handles this differently

Alpha Agent is built on OpenClaw, so we take this attack seriously. But the architecture of Alpha Agent means this specific attack vector does not apply.

Here’s why.

Secrets never touch the filesystem

In Alpha Agent’s cloud deployment, user secrets — API keys, channel tokens, OAuth credentials — are encrypted with AWS KMS using the architecture described in our KMS encryption model, and stored in DynamoDB. They are never written to local config files. They are never stored in environment variables. They are never passed through user-data scripts.

When your container needs a secret, it retrieves the encrypted value from DynamoDB and decrypts it through KMS at runtime. The plaintext secret exists only in process memory, for the duration of the request. There is no openclaw.json sitting on disk with your gateway tokens. There is no device.json with private keys that malware can harvest.
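In code, that flow looks roughly like the sketch below. This is illustrative, not Alpha Agent's actual implementation: the table name, key schema, and attribute names are assumptions, and the `dynamodb` and `kms` arguments stand in for boto3's low-level DynamoDB and KMS clients.

```python
"""Sketch of the retrieve-and-decrypt flow described above: ciphertext
lives in DynamoDB, KMS decrypts it at request time, and the plaintext
exists only in process memory. All names are illustrative assumptions,
not Alpha Agent's actual schema."""

def fetch_secret(dynamodb, kms, user_id: str, secret_name: str,
                 table: str = "agent-secrets") -> str:
    """Fetch an encrypted secret and decrypt it through KMS.

    `dynamodb` and `kms` are expected to behave like boto3's low-level
    clients (get_item / decrypt).
    """
    item = dynamodb.get_item(
        TableName=table,
        Key={"user_id": {"S": user_id}, "secret_name": {"S": secret_name}},
    )["Item"]

    # KMS returns plaintext bytes in memory only; nothing is written to disk.
    plaintext = kms.decrypt(
        CiphertextBlob=item["ciphertext"]["B"],
        EncryptionContext={"user_id": user_id},  # binds the ciphertext to this user
    )["Plaintext"]
    return plaintext.decode("utf-8")
```

With real clients this would be called as `fetch_secret(boto3.client("dynamodb"), boto3.client("kms"), ...)`; passing the clients in also makes the function trivial to test without AWS access.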

Learn more about our encryption model on our encryption documentation page.

Container isolation limits the blast radius

Even in the unlikely event that code running inside your container is compromised, the container itself is hardened:

  • Read-only filesystem — The root filesystem is immutable. Malware can’t install itself, modify binaries, or persist across container restarts.
  • No-new-privileges — Processes cannot escalate permissions through setuid/setgid.
  • Network isolation — Containers cannot communicate with each other. There is no lateral movement between users.
  • Resource limits — CPU and memory constraints prevent abuse.
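These properties map onto standard container runtime options. As a generic sketch (not Alpha Agent's actual orchestration code), here is how the same hardening looks expressed as docker SDK (docker-py) `run()` keyword arguments; the image name and limit values are illustrative.

```python
"""Generic sketch of the hardening properties listed above, expressed as
docker SDK (docker-py) run() keyword arguments. Values are illustrative."""

def hardened_run_kwargs(image: str) -> dict:
    return {
        "image": image,
        "read_only": True,                          # immutable root filesystem
        "security_opt": ["no-new-privileges:true"], # block setuid/setgid escalation
        "network_mode": "none",                     # no lateral movement to other containers
        "mem_limit": "512m",                        # resource limits:
        "nano_cpus": 500_000_000,                   #   half a CPU
        "pids_limit": 256,                          #   bounded process count
        "tmpfs": {"/tmp": "rw,size=64m"},           # scratch space without a writable root
    }

# With a real client:
#   import docker
#   docker.from_env().containers.run(**hardened_run_kwargs("agent:latest"))
```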

An infostealer that somehow ran inside your container would find no secret files to steal, no writable filesystem to persist on, and no network path to other users.

Desktop users are protected too

Alpha Agent Desktop runs on your local machine, but it does not store secrets the way a raw OpenClaw installation does. When you enter an API key through the Alpha Agent Desktop app, it is stored through the application’s secure credential management, not dropped into a plaintext JSON file in a well-known directory.

The desktop app authenticates through Alpha Agent’s infrastructure, which means your secrets management benefits from the same server-side protections. The attack described in the Vidar variant — scanning for OpenClaw config files in a known path — would not find harvestable credentials from an Alpha Agent Desktop installation.
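The usual way a desktop app keeps credentials out of plaintext config files is to delegate to the operating system's credential store: macOS Keychain, Windows Credential Manager, or Secret Service on Linux, all reachable through one API via the Python `keyring` package. Whether Alpha Agent Desktop uses that exact package is an assumption; the sketch below shows the pattern, with the service name illustrative and the `store` argument being anything with keyring's `set_password`/`get_password` API (such as the `keyring` module itself).

```python
"""Pattern sketch: store credentials in the OS credential store instead
of a JSON file in a well-known path. The service name is illustrative,
and `store` is duck-typed to keyring's set_password/get_password API."""
from typing import Optional

SERVICE = "alpha-agent-desktop"  # illustrative service name

def save_api_key(store, account: str, api_key: str) -> None:
    # The OS keeps the value encrypted at rest, tied to the user's login.
    store.set_password(SERVICE, account, api_key)

def get_api_key(store, account: str) -> Optional[str]:
    # Returns None if no credential has been stored for this account.
    return store.get_password(SERVICE, account)
```

An infostealer scanning for files in a fixed directory finds nothing to copy; extracting credentials from the OS store is a much higher bar, typically requiring the user's login session or explicit OS-level consent.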

What you should do if you run OpenClaw directly

If you run a self-hosted OpenClaw instance (not through Alpha Agent), here are immediate steps to take:

  1. Check for compromise. Review your machine for signs of infostealer infection. If you suspect compromise, rotate all API keys stored in your OpenClaw configuration immediately.

  2. Rotate your device identity. If your device.json was potentially exposed, generate new device keys and re-pair your device.

  3. Audit your memory files. Review what’s in your MEMORY.md and soul.md. If they contain sensitive information that could be used for social engineering, assume an attacker may already have that context, and prune anything the agent doesn’t strictly need to remember.

  4. Move secrets out of config files. Where possible, use environment variables sourced from a secrets manager rather than hardcoded values in JSON config files. This doesn’t eliminate the risk entirely, but it raises the bar.

  5. Consider a managed deployment. Services like Alpha Agent exist specifically to solve the secrets-on-disk problem. When your secrets live in encrypted cloud storage rather than local files, endpoint compromise doesn’t equal secrets compromise. For a broader look at how to evaluate AI agent risk at an organizational level, see our security leaders guide.
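Step 4 can be as simple as a load order that prefers the process environment (populated at launch by a secrets-manager wrapper) over any on-disk config. A minimal sketch, with the variable, file, and field names all illustrative:

```python
"""Minimal sketch of step 4: prefer a secret injected into the process
environment over a plaintext value in a JSON config file. The variable
name, config path, and field name are illustrative assumptions."""
import json
import os
from pathlib import Path

def load_api_key(env_var: str = "OPENCLAW_API_KEY",
                 config_path: str = "~/.openclaw/openclaw.json") -> str:
    # Preferred: the key was injected into this process's environment at
    # launch (e.g. by a secrets-manager wrapper) and never touches disk.
    key = os.environ.get(env_var)
    if key:
        return key

    # Fallback: the legacy plaintext config. Anything readable here is
    # readable by an infostealer running as the same user.
    cfg = Path(config_path).expanduser()
    if cfg.is_file():
        key = json.loads(cfg.read_text()).get("api_key")
        if key:
            print(f"warning: {cfg} holds a plaintext api_key; "
                  f"move it to {env_var} and delete it from the file")
            return key
    raise KeyError(f"no API key in ${env_var} or {cfg}")
```

As the article notes, this doesn’t eliminate the risk (environment variables can still leak through process inspection or crash dumps), but it removes the static, predictable file an infostealer scans for.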

This is the beginning, not the end

The Vidar variant targeting OpenClaw is almost certainly the first of many. As AI agents become more common and more deeply integrated into daily life, their configuration and memory files will become high-value targets for infostealers. Every major infostealer family — Raccoon, RedLine, Lumma, Meta — will likely follow Vidar’s lead. Malicious skills are another growing vector for this kind of theft, as documented in our research on supply chain risks in AI skills.

The uncomfortable reality is that AI agents know more about you than almost any other application on your machine. Your browser knows what sites you visit. Your email client has your messages. But your AI agent has the context that connects everything: your priorities, your relationships, your plans, your private instructions. That context, aggregated in a memory file, is a social engineering goldmine.

This is why we built Alpha Agent with a zero-secrets-on-disk architecture from day one. Not because we predicted this specific attack, but because we understood that any secret stored on a user’s endpoint is one malware infection away from compromise.

The question for anyone running a personal AI agent is straightforward: where do your secrets live? If the answer is “in a file on my laptop,” the Vidar variant just demonstrated what happens next.

Read more about Alpha Agent’s security model on our security page.