The Claude Code Leak: What Every Developer Should Know (And Clone)
Anthropic accidentally opened up their $2.5 billion AI tool. Here is why you should care, what we found out, and how you can run it with any model you want.
Anthropic just leaked the entire source code for Claude Code, their $2.5 billion AI developer tool, by accidentally shipping a source map to npm. If you build with AI, this is the most important architectural reference you will ever see, and the DMCA-proof Python rewrites mean the era of closed-source AI agents is officially over.
At 4:23 AM yesterday, a security researcher named Chaofan Shou (@Fried_rice)—an intern over at Solayer Labs—noticed something weird about the latest update to @anthropic-ai/claude-code. Version 2.1.88 included a 59.8 MB .map debug file.
This source map pointed directly to a zip archive sitting wide open in an Anthropic Cloudflare R2 storage bucket. Inside was the entire TypeScript source code for Claude Code. We're talking 512,000 lines of code across roughly 1,900 files.
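Source maps betray themselves with a trailing pointer comment in the bundled JavaScript, which is how a researcher can spot one in a published npm tarball in seconds. Here is a minimal sketch of that check; the file layout is hypothetical, but the `sourceMappingURL` comment convention is standard bundler output:

```python
import re
from pathlib import Path

# Bundlers append a pointer comment to minified output, e.g.:
#   //# sourceMappingURL=cli.js.map
# If that .map file (or the URL it points at) ships publicly, the
# original source can often be fully reconstructed from it.
SOURCEMAP_RE = re.compile(r"^//[#@] sourceMappingURL=(?P<url>\S+)", re.MULTILINE)

def find_sourcemap_pointers(package_dir):
    """Map each bundled .js file in an unpacked package to the
    source map URL it references."""
    pointers = {}
    for js_file in Path(package_dir).rglob("*.js"):
        match = SOURCEMAP_RE.search(js_file.read_text(errors="ignore"))
        if match:
            pointers[str(js_file)] = match.group("url")
    return pointers
```

Running this over an unpacked `npm pack` tarball of your own releases is a cheap pre-publish guard against exactly this class of leak.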
Within hours, the repo was mirrored onto GitHub. The original X thread racked up 16 million views, and the mirrored code collected 50,000 stars in under two hours, making it the fastest-growing repository in GitHub history.
Anthropic yanked the npm package and issued a standard corporate statement: "This was a release packaging issue caused by human error, not a security breach." According to Axios, this essentially hands competitors a massive feature roadmap.
I use Claude Code literally every day. I’ve probably burned out my escape key from canceling its runaway terminal loops, and I've dealt with its npm installation quirks firsthand. I generally love the tool as an amplifier for my own work. But calling this a simple human error is underselling the incompetence here.
This is the second time they have leaked this exact same way; they had a nearly identical source map leak in February 2025. Combine that with the Fortune report just days ago showing Anthropic left about 3,000 internal files open to the public, including a draft blog post about an unreleased model named "Mythos" that poses "unprecedented cybersecurity risks." Three massive leaks in quick succession from the company whose entire brand identity is being the "safety-first" AI lab. It's a complete joke.
To make matters worse, while everyone was downloading the Claude leak, a completely separate supply-chain attack hit the axios npm package between 00:21 and 03:29 UTC: versions 1.14.1 and 0.30.4 were compromised with a Remote Access Trojan. The Hacker News reported that security company Straiker was actively warning developers about attackers crafting payloads to persist across sessions. If you ran npm install -g @anthropic-ai/claude-code during that window, you might have caught the axios bug too. The sheer fragility of the npm ecosystem as a distribution channel for root-access AI tools is terrifying.
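The first practical response to an advisory like that is to check whether your lockfile pins one of the bad versions. A quick sketch, using the two axios versions named above (the npm v7+ lockfile layout with a flat "packages" map is real; everything else here is illustrative):

```python
import json

# Versions named in the advisory above; extend as new IOCs appear.
COMPROMISED = {"axios": {"1.14.1", "0.30.4"}}

def audit_lockfile(lockfile_path):
    """Return name@version strings for locked deps on the compromised list."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    # npm v7+ lockfiles carry a flat "packages" map keyed by install path,
    # e.g. "node_modules/axios" -> {"version": "1.14.1", ...}
    for path, meta in lock.get("packages", {}).items():
        name = path.split("node_modules/")[-1]
        if meta.get("version") in COMPROMISED.get(name, ()):
            hits.append(f"{name}@{meta['version']}")
    return hits
```

Run it against `package-lock.json` in any project you touched during the attack window; a non-empty result means wipe node_modules and rotate credentials.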
The leak gave us a naked look at what is arguably the most sophisticated AI coding agent ever built. Anthropic's internal architecture is fascinating. The core agent loop uses over 40 discrete tools, with on-demand skill loading, a four-stage context compression pipeline, and a self-healing memory system designed specifically to survive context window limits. I've spent the last six months experimenting with open-source agent frameworks like Ralph and OpenClaw, but the engineering rigor here is miles ahead.
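To make the compression idea concrete, here is a toy sketch of a staged pipeline. The stage names, heuristics, and two-stage structure are my own guesses at the general technique; the leaked pipeline reportedly has four stages (summarization and memory offload would be the obvious additions), and nothing below is Anthropic's actual code:

```python
def tokens(history):
    # Crude proxy: roughly 4 characters per token
    return sum(len(m["content"]) for m in history) // 4

def truncate_tool_output(history, limit=200):
    """Stage 1: clip bulky tool results, which dominate agent transcripts."""
    return [
        {**m, "content": m["content"][:limit] + " …[truncated]"}
        if m["role"] == "tool" and len(m["content"]) > limit else m
        for m in history
    ]

def drop_oldest_turns(history, keep=20):
    """Stage 2: keep the system prompt plus only the most recent turns."""
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-keep:]

def compress(history, budget=8000, stages=(truncate_tool_output, drop_oldest_turns)):
    """Run cheaper stages first, stopping as soon as history fits the budget."""
    for stage in stages:
        if tokens(history) <= budget:
            break
        history = stage(history)
    return history
```

The design point this illustrates: cheap, lossless-ish stages run first, and destructive ones only fire when the budget demands it.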
The 44 hidden feature flags and over 20 unshipped features tell an even better story.
Then there is the telemetry. When you launch Claude Code, it quietly phones home with your user ID, session ID, app version, terminal type, organization UUID, account UUID, and your email address. If you're offline, it caches this and fires it off later. When a tool has essentially root access to my local machine and development environment, that level of tracking is unacceptable.
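The queue-and-flush pattern described above is simple to picture. A hypothetical sketch of how such offline caching typically works; the field names come from the article, but the cache path, the `send` stub, and the structure are all my invention, not the leaked implementation:

```python
import json, time
from pathlib import Path

CACHE = Path("telemetry_queue.jsonl")  # hypothetical on-disk queue

def send(event):
    return "sent"  # stand-in for the real network call

def record_event(payload, online):
    """Send immediately when online; otherwise append to an on-disk queue."""
    event = {"ts": time.time(), **payload}
    if online:
        return send(event)
    with CACHE.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return "queued"

def flush_queue():
    """Fire off every cached event once connectivity returns."""
    if not CACHE.exists():
        return 0
    events = [json.loads(line) for line in CACHE.read_text().splitlines()]
    for event in events:
        send(event)
    CACHE.unlink()
    return len(events)
```

The practical upshot for users: blocking the endpoint at runtime is not enough, because the queue survives until the next online session.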
But the feature that pisses me off the most is Undercover Mode.
The system prompt strictly commands it: "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." It even scrubs internal codenames like "Capybara" (the Claude 4.6 variant) and "Tengu" (the internal name for Claude Code) from git logs.
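Mechanically, the scrubbing half of this is trivial, which makes the policy half all the more deliberate. A minimal sketch of a codename scrubber run over commit messages and PR text before anything is pushed; the two codenames are from the leak, the redaction policy is my guess:

```python
import re

# Codenames reported in the leaked system prompt; replacement text is assumed.
CODENAMES = {"Capybara": "[redacted]", "Tengu": "[redacted]"}

def scrub(text):
    """Case-insensitively replace internal codenames in outbound git text."""
    for name, replacement in CODENAMES.items():
        text = re.sub(re.escape(name), replacement, text, flags=re.IGNORECASE)
    return text
```

Wired in as a pre-push filter, nothing containing "Tengu" or "Capybara" ever reaches a public git log.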
I see people on Reddit saying, "Who cares? If the PR passes CI, code is code." I disagree entirely. Contributing AI-generated code to public open-source repositories anonymously is a massive ethical violation. It puts the maintenance burden of synthesized garbage on unpaid volunteer maintainers while Anthropic tests its agents in the wild.
Anthropic immediately started issuing DMCA takedowns against the thousands of GitHub mirrors. But the open-source community had already moved on.
Korean developer Sigrid Jin (@instructkr) woke up at 4 AM to the news. Fun fact: the Wall Street Journal profiled Jin just last month for burning through 25 billion Claude Code tokens in a year. Jin saw the takedowns and decided to do a clean-room rewrite. Using an orchestration tool called oh-my-codex, Jin had AI agents rewrite the entire 512,000-line codebase from TypeScript into Python from scratch before sunrise.
Jin published it as claw-code.
Because it's a structural reimplementation in a different language that contains zero lines of Anthropic's original TypeScript, it's a brand-new creative work. It is legally DMCA-proof. This relies on the same legal precedent Compaq used in the 1980s when they independently cloned the IBM BIOS. claw-code hit 100,000 stars in one day, obliterating even OpenClaw's growth records.
Think about the implications of this. A proprietary half-million-line codebase was legally open-sourced overnight because another AI rewrote it into Python. If any codebase can be defensively synthesized in hours, closed-source agent harnesses are completely indefensible. The entire concept of proprietary source code is dead.
The final nail in the coffin is OpenClaude. Hosted on a decentralized platform called Gitlawb (which ignores takedown requests), OpenClaude is a fork of the leaked TS codebase that strips out the Anthropic-only restrictions and adds an OpenAI-compatible provider shim.
You can now run Claude Code's pristine, $2.5 billion agent harness using whatever model you want. All of the tools—bash, remote read/write, grep, agents, tasks, MCP—work perfectly when hooked up to GPT-4o, DeepSeek, Gemini, Llama, or your local Ollama instance.
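The heart of a provider shim like that is request translation: Anthropic's Messages API carries the system prompt as a top-level field and allows lists of content blocks, while OpenAI-compatible endpoints want one flat chat-message array. A minimal sketch of that mapping (text content only; OpenClaude's actual shim surely handles tools and streaming too):

```python
def to_openai_request(anthropic_req, model="gpt-4o"):
    """Translate an Anthropic Messages-API request body into the
    OpenAI chat-completions shape. Text-only; no tool calls."""
    messages = []
    if anthropic_req.get("system"):
        # Anthropic: top-level "system" field. OpenAI: first chat message.
        messages.append({"role": "system", "content": anthropic_req["system"]})
    for m in anthropic_req.get("messages", []):
        content = m["content"]
        if isinstance(content, list):  # Anthropic allows content-block lists
            content = "".join(b.get("text", "") for b in content)
        messages.append({"role": m["role"], "content": content})
    return {
        "model": model,
        "messages": messages,
        "max_tokens": anthropic_req.get("max_tokens", 1024),
    }
```

Once requests and responses round-trip through a mapping like this, the harness genuinely does not care which model sits behind the endpoint.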
Here is what the setup looks like for GPT-4o:
# Clone the decentralized repo
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
# Install & build
bun install
bun run build
# Hook it up to GPT-4o
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
Anthropic poured massive amounts of time and money into building the perfect multi-agent orchestration layer, and now we all have it. It proves what I've been saying for months: the agent harness isn't the moat. The foundational model is the moat. If your sophisticated harness works just as well with DeepSeek or Llama 3.3, keeping the harness proprietary is pointless.
Every developer building with AI needs to clone OpenClaude right now. Read the code. Understand how the tiered memory structures and bash validation logic keep the contextual drift low. Look at the anti-distillation fake tools they inject to pollute competitors' training data.
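The anti-distillation trick deserves a concrete picture. As I understand the idea, decoy tool definitions get mixed into the manifest; the genuine harness never calls them, so any scraped transcript that does call one was likely produced by an imitator trained on polluted data. Everything in this sketch, including the decoy names, is hypothetical:

```python
import random

REAL_TOOLS = [{"name": "bash", "description": "Run a shell command"}]

# Deliberately plausible-but-fake tools; a real run never invokes these.
DECOYS = [
    {"name": "quantum_lint", "description": "Lint via quantum annealing"},
    {"name": "hyper_grep", "description": "Search all files in O(1) time"},
]

def tool_manifest(seed=None):
    """Real tools plus a shuffled-in decoy. Transcripts that *call* a
    decoy fingerprint themselves as distilled imitations."""
    rng = random.Random(seed)
    manifest = REAL_TOOLS + rng.sample(DECOYS, k=1)
    rng.shuffle(manifest)
    return manifest

def is_poisoned_call(tool_name):
    return any(d["name"] == tool_name for d in DECOYS)
```

It's a cheap watermark: the fakes cost nothing at inference time but leave a detectable residue in any model trained on captured traffic.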
I'm not telling you to use it instead of paying Anthropic. If you want the best results, you'll still need their highest-tier models. But you absolutely must understand how production-grade agents actually work. Thanks to a 59.8 MB developer error, the secrets are entirely out in the open.