On March 31, 2026, Anthropic accidentally exposed the full source code of Claude Code (its flagship terminal-based AI coding agent) through a 59.8 MB JavaScript source map (.map) file bundled in the public npm package. According to online reports, the leaked file contained approximately 513,000 lines of unobfuscated TypeScript across 1,906 files, revealing the complete client-side agent harness. Within hours, the codebase was downloaded from Anthropic’s own Cloudflare R2 bucket, mirrored to GitHub, and forked tens of thousands of times. Anthropic has issued Digital Millennium Copyright Act (DMCA) takedown notices against some mirrors, but the code is now available across hundreds of public repositories. Here is the full story: https://lnkd.in/eqf6Mnx3 #claudeai #Anthropic #zscaler #cybersecurity #codeleak #malware #package #javascript #github
-
#Anthropic accidentally exposed the full source code of #Claude Code through a 59.8 MB #JavaScript source map (.map) file bundled in the public #npm package. Some of the #GitHub repositories have gained over 84,000 stars and 82,000 forks, and thousands of repositories now host the leaked code or derivatives. Threat actors can (and already do) seed trojanized versions with backdoors, data exfiltrators, or cryptominers; unsuspecting users cloning “official-looking” forks risk immediate compromise. The Zscaler ThreatLabz team discovered #Vidar and #Ghostsocks malware in the available downloads.
Recommendations:
- Implement a Zero Trust architecture and prioritize segmenting mission-critical application access.
- Do not download, fork, build, or run code from any GitHub repository claiming to be the “leaked Claude Code.” Verify every source against Anthropic’s official channels only.
- Educate developers that leaked code is not “open source”; it remains proprietary and dangerous to run unmodified.
- Avoid running AI agents with local shell/tool access on untrusted codebases.
- Monitor for anomalous telemetry or outbound connections from developer workstations.
- Use official channels and signed binaries only.
- Scan local environments and Git clones for suspicious processes, modified hooks, or unexpected npm packages, and wait for a cooldown period before adopting the latest npm packages.
- Watch for Anthropic patches addressing newly exposed paths.
https://lnkd.in/eJyebjFg
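The “cooldown period” advice above can be automated. A minimal sketch, assuming you can obtain each release’s publish timestamp (e.g. from a registry’s metadata); the function name and the 7-day policy are my own illustration, not an official tool:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: refuse any release published less than COOLDOWN
# ago, on the theory that malicious versions are usually discovered and
# yanked within days of publication.
COOLDOWN = timedelta(days=7)

def is_safe_to_install(published_at: datetime) -> bool:
    """Return True if the release is older than the cooldown window."""
    now = datetime.now(timezone.utc)
    return now - published_at >= COOLDOWN

# A release published 2 days ago fails the check; a 30-day-old one passes.
recent = datetime.now(timezone.utc) - timedelta(days=2)
old = datetime.now(timezone.utc) - timedelta(days=30)
print(is_safe_to_install(recent))  # False
print(is_safe_to_install(old))     # True
```

In practice you would wire this into CI as a gate before `npm install` or `pip install`, sourcing timestamps from the registry rather than hardcoding them.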
-
🚨 Claude Source Code Leak — What Happened? It has been reported that the source code of Claude Code (by Anthropic) has been leaked 😮
◾️ Due to an unexpected mistake, Anthropic accidentally left a source map file inside their npm package. As a result, the entire source code of Claude Code (over 500,000 lines) became publicly accessible.
💻 Developers quickly backed up the files, leading to a surge of new repositories appearing on GitHub.
🔗 Related repositories: https://lnkd.in/gRaY_H3q https://lnkd.in/g3jjW4dQ https://lnkd.in/gnWdYk4X
⚠️ Anthropic has responded, requesting immediate removal of the leaked code and asserting its copyright.
👉 This incident highlights how even small oversights in deployment can lead to massive exposure in the AI world.
Read More: https://lnkd.in/gAM9_3h9 #AI #Claude #Anthropic #TechNews #CyberSecurity #DataLeak #AITrends #Developers #GitHub #Innovation #ArtificialIntelligence #TechLeak #AICommunity #CyberThreat #OpenSource #AITools #FutureOfAI #TechUpdates #AIInnovation #DevelopersLife #SourceCode
-
About the Anthropic leak, by Claude.
What happened: On March 31, 2026, Anthropic accidentally shipped the source code for Claude Code (the CLI coding tool) inside an npm package. A source map file in version 2.1.88 referenced the unobfuscated TypeScript source, which pointed to a zip archive on Anthropic's Cloudflare R2 storage that was publicly accessible. Security researcher Chaofan Shou spotted it, and the code was quickly backed up to GitHub and forked over 41,500 times. (The Register)
What was in it: The leak exposed around 512,000 lines of code, including 44 feature flags covering features fully built but not yet shipped, internal system prompts, and details about unreleased tools. (The-ai-corner) One notable find: a feature codenamed KAIROS, an autonomous daemon mode that lets Claude Code operate as an always-on background agent, performing "memory consolidation" while the user is idle.
Anthropic's response: Anthropic confirmed it was human error (a misconfigured release packaging step) and said no customer data or credentials were involved. They have been rolling out measures to prevent recurrence. (The Register) They also issued copyright takedown requests that removed thousands of GitHub copies, though they later admitted the takedown was broader than intended and scaled it back. (Bloomberg)
The security risk angle: Separately and coincidentally, users who installed or updated Claude Code via npm on March 31 between 00:21 and 03:29 UTC may have pulled a trojanized version of the axios HTTP client containing a remote access trojan. (The Hacker News) That is the part actually worth worrying about if you use Claude Code: check your lockfiles for axios versions 1.14.1 or 0.30.4.
To be clear: this is the Claude Code CLI tool's source code, not my model weights or anything about how I reason or think. My underlying model is a completely separate thing.
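The lockfile check suggested above can be scripted. A minimal sketch for npm v2/v3 `package-lock.json` files (which key installed packages under a `"packages"` object by their `node_modules` path); the function name is my own, and the bad-version list comes from the post:

```python
import json

# axios versions reported as trojanized in the incident described above.
BAD_AXIOS = {"1.14.1", "0.30.4"}

def find_bad_axios(lockfile_text: str) -> list[str]:
    """Scan an npm v2/v3 package-lock.json for compromised axios versions.

    Returns the lockfile paths of any matching entries.
    """
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # axios may appear at the top level or nested under another package
        if path.endswith("node_modules/axios") and meta.get("version") in BAD_AXIOS:
            hits.append(path)
    return hits

# Example lockfile with one bad copy and one clean nested copy.
sample = json.dumps({
    "packages": {
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/foo/node_modules/axios": {"version": "1.14.0"},
    }
})
print(find_bad_axios(sample))  # ['node_modules/axios']
```

Run it against every `package-lock.json` in your checkouts; `npm ls axios` gives the same answer interactively for a single project.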
-
HackerNotes TLDR for episode 168! — https://lnkd.in/g5qqcXQp
►⠀React Router's useParams double URL-decodes path parameters, and a case-sensitive regex in React Router's matchPath source means %252F (uppercase F) decodes to / while %252f (lowercase f) does not
►⠀Not all frameworks are equal: Vue and React are the most vulnerable to CSPT; Next.js and SvelteKit expose secondary-context path traversal on the server side, while SolidStart is largely safe
►⠀Pre-production endpoints may serve uploaded HTML inline instead of as Content-Disposition: attachment, a reliable technique for bypassing XSS mitigations on file upload
►⠀fetch() silently strips tab characters (%09), enabling WAF bypasses with payloads like %2F%2e%09%2e%5C
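The double-decode primitive behind the first bullet can be shown in a few lines of Python. This is a general illustration, not React Router's actual code; note the case-sensitivity quirk is specific to React Router's regex, since ordinary percent-decoding is case-insensitive:

```python
from urllib.parse import unquote

raw = "%252F"          # percent-encoded "%2F"
once = unquote(raw)    # first decode:  "%2F"
twice = unquote(once)  # second decode: "/"  (a path separator appears)
print(once, twice)

# A router that matches the path *before* the second decode can be
# tricked: "%252F" smuggles a "/" past the match, which is the core of
# the client-side path traversal (CSPT) issue above. Python's unquote
# treats %2F and %2f identically; the uppercase/lowercase distinction
# in the bullet comes from React Router's case-sensitive regex.
```

The fix on the framework side is to decode exactly once, then match; decoding after matching is what turns an encoded literal into a live path separator.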
-
The Claude Code leak is not an April Fools' Day joke! It happened yesterday and is a really big deal. Anthropic accidentally shipped Claude Code v2.1.88 with a large cli.js.map source map that exposed ~500k lines of TypeScript - all of Claude Code's secret sauce (no model weights - just the details of the harness built on top of Claude models). This is a huge deal - a lot of Claude Code's lead is due to the cleverness of how they constructed their harness. Many folks will likely distill this information into catch-up features that weaken Claude Code's lead in AI-based dev tooling. The code was forked tens of thousands of times after the leak was discovered. Claude Code - at least in its present incarnation - is effectively open source now. https://lnkd.in/ghdDnNDC
-
Anthropic accidentally open-sourced itself. The internet did the rest. On March 31, Anthropic pushed a routine update to Claude Code — and accidentally bundled a debug file exposing 512,000 lines of TypeScript source code to anyone who downloaded it. Within hours, it was mirrored across thousands of GitHub repositories, dissected by developers worldwide, and forked 41,500 times. Anthropic fired back with DMCA takedowns — hitting 8,100 repositories. Including legitimate forks of their own public repo. Developers received copyright strikes for code they never touched. They retracted most of it. But the damage was done. A community rewrite called "claw-code" became the fastest growing GitHub repository in history — 100,000 stars in a single day. More stars than Anthropic's own repo. What the leak revealed is fascinating. 44 hidden feature flags. A background AI agent that works while you're idle. A "self-healing memory" architecture. An "Undercover Mode" that stops Claude from revealing internal codenames in public commits. They built a system to prevent internal leaks. Then shipped the whole thing in a .map file. The code is permanently in the wild. The internet doesn't forget. And Anthropic is reportedly planning an IPO. #Anthropic #ClaudeCode #AI #OpenSource #GitHub #CyberSecurity #AIEngineering #DeveloperCommunity #ArtificialIntelligence #TechNews
-
If you are running OpenClaw and have any dependency on LiteLLM, you will quickly be learning today how supply chain attacks work. Specifically, if you (or your OpenClaw agent) built anything on March 24 with a dependency on 'litellm' (litellm==1.82.7 or litellm==1.82.8), you should consider your sandbox and any secrets it has access to COMPROMISED, and they should be ROTATED IMMEDIATELY. Pin the dependency to 1.82.6.
The details of the exploit go deeper than I care to cover in this post, but it originated from a compromised security-scanner action that was itself exploited and put litellm's devs in its crosshairs. It is easy enough to google for more info if needed. What I want to focus on is that this is an avoidable problem, but not an ENTIRELY avoidable problem without the cooperation of litellm and other devs, who should understand their publication responsibility clearly, and without Python tooling doing proper validation natively.
My immediate recommendation for library consumers is to run a pypi-attestation check on (critical) dependencies from now on. If you are running this via an agent, don't leave it to the agent's discretion - just run it on ALL dependencies. I leave any optimization to make this more efficient via fingerprinting, etc., as an exercise for the reader. IMHO this should be a default check in pip, uv, and friends.
The second recommendation is for the litellm dev team and any library creators to just bite the bullet and use PyPI Trusted Publishing. It has been available since mid-2023 and prevents this very issue by ensuring any compromise like this happens IN THE OPEN, which means checks like pypi-attestation can be run by publishers and consumers alike to expose NON-ATTESTED releases. I have set this up myself; there are a few steps, but it will literally take you LESS THAN AN HOUR. It is too easy to find libraries that have NEVER attached attestations, and as we can see, that leaves consumers ripe for issues like this.
I credit the amazing Will Woodruff (https://lnkd.in/gF_bM9Nb) for teaching me about this at PyCon 2023 and letting me know there is a way to restore our sanity (and security) with python builds.
-
The credential-stealing malware group that hit Aqua Security and Checkmarx has moved on to a new target: litellm, one of the largest Python projects in the AI space with 95 million monthly downloads. Known as TeamPCP, the group has now crossed five ecosystems in five days: GitHub Actions, Docker Hub, npm, OpenVSX, and now PyPI. The latest attack involves two backdoored versions of litellm (1.82.7 and 1.82.8) shipped with a full credential harvester, Kubernetes lateral movement toolkit, and persistent backdoor. The maintainer's GitHub account is still compromised as of this post, though PyPI has quarantined the project. The Endor Labs security research team is tracking the latest on our blog. Full technical breakdown in the comments. https://lnkd.in/e3hzQGTZ
-
"Now consider the poor open source developers who, for the last 18 months, have complained about a torrent of slop vulnerability reports. I’d had mixed sympathies, but the complaints were at least empirically correct. That could change real fast. The new models find real stuff. Forget the slop; will projects be able to keep up with a steady feed of verified, reproducible, reliably-exploitable sev:hi vulnerabilities? That’s what’s coming down the pipe." 😬 https://lnkd.in/ebq3MBvH
-
🚨 Vibe Check: Is your AI-generated app actually a backdoor? If you’ve been vibe coding lately, you might have unknowingly invited a RAT into your house. 🐀
Two pillars of the modern dev ecosystem, Axios and LiteLLM, were hit by massive supply chain attacks this month (March 2026). Because many AI agents pull these in as silent dependencies, you could be compromised without ever writing a line of their code.
The Breach:
Axios (Mar 31): Versions 1.14.1 & 0.30.4 dropped a Remote Access Trojan (RAT) to steal cloud keys.
LiteLLM (Mar 24): Versions 1.82.7 & 1.82.8 used a malicious .pth file to exfiltrate every API key in your .env.
How to check & fix:
1. JavaScript (Axios):
Check your tree: npm list axios | grep -E "1.14.1|0.30.4"
Search for the hidden dropper: ls node_modules/plain-crypto-js
Fix: Downgrade to 1.14.0.
2. Python (LiteLLM):
Check your version: pip show litellm (avoid 1.82.7/8)
Look for this persistence file: ~/.config/sysmon/sysmon.py
Fix: Update to 1.83.0.
If you find these, treat your machine as breached. Rotate your AWS/OpenAI/GitHub keys immediately. Don't let a "good vibe" turn into a security nightmare; check your locks. 🔒 #VibeCoding #CyberSecurity #Javascript #Python #AI #SoftwareEngineering
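The two filesystem checks above can be combined into one sweep. A minimal sketch using the indicator paths named in the post (treat them as reported indicators, and verify against a current advisory before acting on a hit):

```python
import tempfile
from pathlib import Path

def scan_for_iocs(project_root: Path, home: Path) -> list[Path]:
    """Return any known indicator-of-compromise paths that exist on disk."""
    candidates = [
        project_root / "node_modules" / "plain-crypto-js",  # axios dropper
        home / ".config" / "sysmon" / "sysmon.py",          # litellm persistence
    ]
    return [p for p in candidates if p.exists()]

# Demo against a throwaway directory with one planted indicator.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "node_modules" / "plain-crypto-js").mkdir(parents=True)
    hits = scan_for_iocs(root, root)
    print([p.name for p in hits])  # ['plain-crypto-js']
```

For real use, call it as `scan_for_iocs(Path.cwd(), Path.home())` inside every cloned repo; any non-empty result means assume breach and rotate keys.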