Anthropic allows some employees to participate in ₹54,577 crore share sale — Here’s all you need to know
Claude AI maker Anthropic is offering some current and former employees the opportunity to sell their shares in the company at a valuation of around $350 billion ( ₹3.18 lakh crore), Bloomberg reported citing sources. This will allow them to cash out after the company raised $30 billion ( ₹27,285 crore) in its latest funding round, the report added.
The share sale is expected to be worth between $5 billion and $6 billion (approximately ₹54,577 crore), but the final amount will depend on how many eligible Anthropic employees choose to sell, said one of the people, who asked not to be identified as the information is confidential. Details are not yet finalized and may still change.
In Anthropic’s last funding round, completed at the beginning of this month, the company was valued at $380 billion including the new capital invested. Anthropic declined to comment on the new share sale.
Anthropic insider shares are on sale
Outside investors, not Anthropic itself, will buy the insider shares. The deal will be open to current and former employees who have worked at the company for at least 12 months, one of the sources said.
Secondary share sales are becoming an increasingly popular method for startups to allow staff to benefit from a startup’s valuation boost even without an acquisition or initial public offering. This tactic has become important in the competitive AI recruiting landscape as larger startups choose to stay private longer.
Stripe Inc. and SpaceX have also allowed employees to sell stock. OpenAI, Anthropic’s biggest rival, repeatedly sold shares last year, including a $6.6 billion secondary sale at a $500 billion valuation.
Anthropic, OpenAI, and SpaceX have recently taken steps toward initial public offerings.
Anthropic says Chinese companies are abusing Claude
In a blog post on Monday, Anthropic alleged that three Chinese companies — DeepSeek, Moonshot and MiniMax — used the Claude AI chatbot to improperly obtain capabilities to improve their own models.
Anthropic accused the companies of generating more than 16 million interactions with Claude through nearly 24,000 fake accounts in order to extract Claude’s capabilities and “train and develop their own models.”
“Distillation (the method used) may be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illegally distill American models can remove protections and feed model capabilities into their own military, intelligence and surveillance systems,” the company added.
But the internet is not happy with the company led by Dario Amodei. Netizens said Anthropic used the open internet to train its own AI and then hypocritically accused others of doing the same.
Leading the criticism was xAI founder and billionaire Elon Musk. “Anthropic is guilty of stealing training data on a massive scale and has been forced to pay billions of dollars in damages for their theft. It’s just a fact,” he wrote.
Musk included screenshots of a community note stating that Anthropic had settled a $1.5 billion lawsuit over data used to build Claude AI and that the company was “also training using stolen data.”
Another user asked: “If you scrape others’ work for free to train your model, it’s okay. But if others pay to use your model to train theirs, that’s illegal? Is that ethical?”
(With input from Bloomberg)