Did Anthropic stage a leak of Claude's code as an April Fools' prank, an "exercise in controlled chaos"? No, that's not true: There was a leak of part of the source code for "Claude Code," Anthropic's AI-powered coding assistant. There is no evidence that the leak was intentional or timed to be an April Fools' joke as stated in a screenshot circulating on social media. A statement from an Anthropic spokesperson acknowledged the error. Actions taken to have the leaked code removed from GitHub demonstrate the leak was neither intentional nor a joke.
The screenshot appeared in a post (archived here) published on X by @sachinyadav699 on April 2, 2026. The post has a lengthy caption. It begins:
Turns out we all got played 😭
- Anthropic just confirmed the "Claude leak" was never real
- Entire thing = an April Fools stunt
- The "leaked" code → completely fake
- Mythos model + benchmarks → made up
- 3,000 internal docs → intentionally planted
- They literally baited the internet
- Fake assets dropped into a staging env
- Left it unsecured on purpose
The incomplete image in the X post (pictured below) is cut off abruptly in the middle of a sentence:
(Image source: sachinyadav699 post on X.)
Lead Stories did not find the source of this text or of the image (pictured above), which appears to be a fabricated entry made to resemble a screenshot of the Claude blog (archived here). Lead Stories reached out to Anthropic for comment and received an email reply on April 2, 2026, confirming the leak was a human error, not an intentional release of fakes. An Anthropic spokesperson provided this statement:
Tuesday, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again.
CNBC reported on the leak on March 31, 2026 (archived here), calling it "Anthropic's second major data blunder in under a week."
On April 1, 2026, PCMag reported (archived here) that a Digital Millennium Copyright Act (DMCA) takedown request (archived here) from Anthropic was published on GitHub.
The false "April Fools'" text caption in the X post continued:
- Even the npm source map led to a fake repo
- Packed with 44 fake feature flags + random codenames
- Details were crafted to feel just real enough to go viral
- Tamagotchi pet system
- "Undercover mode"
- Engineer named "Ollie"
→ all fiction
- Internal codename: "Capybara"
aka staying calm while chaos spreads everywhere
- Security researchers spent an entire weekend analyzing it only to realize... it was all a setup
- And the wildest part?
Apparently written in a single afternoon
Easily one of the smartest (and craziest) April Fools pranks tech has seen
A Google Lens reverse image search shows that the image posted on X is a cropped version of a longer screenshot containing four more lines of text. One example of the full-length screenshot was posted on Reddit's r/ClaudeAI on April 2, 2026 (archived here). The post was titled, "Is this True? the leak was just a exercise?" The false text in the full-length screenshot reads:
We Leaked Nothing: An Exercise in Controlled Chaos
Earlier this week, several news outlets reported that Anthropic had inadvertently exposed nearly 3,000 internal documents, including details of an unreleased model called "Mythos," through a misconfigured content management system, followed by the accidental publication of Claude Code's full source code via npm. None of it was real. The CMS assets were purpose-built fakes seeded into a staging environment we deliberately left unsecured. The npm source map pointed to a zip archive containing a plausible but entirely fabricated codebase, complete with 44 fictional feature flags, invented internal codenames, and exactly the kind of sloppy operational details reporters and security researchers would find irresistible. We are grateful for their diligence.
The project, internally referred to as "Capybara" for reasons that should now be obvious to anyone familiar with the animal's reputation for sitting calmly while everything around it escalates, involved a small cross-functional team across security, communications, and engineering. The forged draft blog post underwent three rounds of review to ensure it struck the right balance between alarming and credible. We would like to sincerely apologize to the cybersecurity researchers at Cambridge and LayerX who spent their weekend analyzing documents we wrote on a Thursday afternoon. Their analyses were, technically speaking, flawless. Happy April 1st.