
Claude’s code: Anthropic leaks source code for AI software engineering tool

Anthropic accidentally released some of the internal source code for its AI-powered coding assistant Claude Code due to “human error,” the company said Tuesday.

The internal-use file, mistakenly included in a software update, pointed to an archive containing approximately 2,000 files and 500,000 lines of code, which was quickly copied to the developer platform GitHub. A post sharing a link to the leaked code on X had been viewed more than 29 million times by early Wednesday, and a rewritten version of the source code quickly became GitHub’s fastest-downloaded repository ever. Anthropic issued copyright takedown requests in an attempt to contain the spread of the code. Inside the code, users found plans for a Tamagotchi-like coding assistant and an always-on AI agent.

“Earlier today, a release of Claude Code included some internal source code. No sensitive customer data or credentials were involved or disclosed,” a spokesperson for Anthropic said. “This was not a security breach but a release packaging issue caused by human error.”

The exposed code relates to the tool’s internal architecture, but does not contain confidential data from Claude, Anthropic’s underlying AI model.

Claude Code’s source code was already partially known, as the tool had been reverse engineered by independent developers. Source code for an earlier version of the assistant was revealed in February 2025.

Claude Code has emerged as an important product for Anthropic as the company’s paid subscriber base continues to grow. TechCrunch reported last week that paid subscriptions have more than doubled this year, according to an Anthropic spokesperson. Anthropic’s Claude chatbot also received a popularity boost amid CEO Dario Amodei’s feud with the Pentagon; Claude rose to the top of Apple’s list of top free apps in the US more than a month ago. Amodei has refused to back down on red lines against using his company’s technology for mass surveillance and fully autonomous weapons.

This is the second time Anthropic has suffered a data leak in recent weeks. Fortune previously reported on a separate breach, noting that the company had stored thousands of internal files on public systems. These included a draft of a blog post referencing “Mythos” and an upcoming model known as “Capybara”.

Some experts worry that the leaks point to internal vulnerabilities at Anthropic, which is especially troubling for a company focused on AI safety.

The leaks could also help rivals like OpenAI and Google better understand how Claude Code’s AI system works. The Wall Street Journal reported that the latest leak included commercially sensitive information, such as tools and instructions for enabling AI models to work as coding agents.

The latest breach comes weeks after the US government designated Anthropic as a supply chain risk; Anthropic is fighting these claims in court. Last week, a US district judge granted a temporary injunction blocking the designation.
