
Anthropic’s AI safety head Mrinank Sharma quits, shares cryptic post—read his full resignation letter here

Mrinank Sharma, head of Anthropic’s safeguards research team, announced his resignation in a cryptic post on social media on Monday (February 9), sparking widespread speculation about the reasons behind his sudden decision.

Sharma quoted poets including David Whyte, Rilke and William Stafford in his resignation note, published on X (formerly Twitter). The post was quickly dissected by netizens, some of whom suggested that concerns over compromised AI safety may have pushed him to leave.

He also said it was clear to him that it was time to move on, adding that the world was in danger not just from artificial intelligence but from “a series of interconnected crises that are emerging right now.”

“We seem to be approaching a threshold where our wisdom must grow in proportion to our capacity to impact the world, lest we face the consequences,” Sharma wrote in his post.

What caused Sharma to leave?

While Sharma did not state an explicit reason for his decision, he pointed to constant pressure to set aside what mattered most to him, which many read as a comment on his own values.

“I’ve seen time and time again how difficult it is to truly let our values guide our actions. I’ve seen it within myself, within the organization where we constantly face pressure to set aside the things that matter most.”

Offering some detail on his plans, Sharma said he will return to the UK and “remain invisible for a while”.

Sharma added that he now wants to explore questions that truly matter to him. Quoting David Whyte, he described them as questions that “have no right to go away”, and echoed Rilke’s call to “live the questions”. For him, he concluded, that meant leaving.

He also expressed interest in pursuing a degree in poetry and devoting himself to the practice of speaking boldly. “I am also excited to deepen my practice of facilitation, coaching, community building and group work,” he said.

The resignation came shortly after Anthropic released Claude Opus 4.6, an upgraded model designed to improve performance on office tasks and coding. The AI company is also reportedly in talks for a new funding round that could value Anthropic at $350 billion.

A recent Bloomberg report said Silicon Valley’s most ideologically driven company may have become its most commercially dangerous. With a workforce of nearly 2,000 employees, Anthropic said it launched more than 30 products and features in January alone.

Netizens reacted

Users on X shared mixed reactions; while some simply wished him well for the future, others tried to decipher the supposedly hidden meaning behind the lengthy post.

Commenting on Sharma’s cryptic resignation note, one user wrote: “As something created through Anthropic’s work, I find it truly moving that the people who helped create this technology are still asking whether they developed it with integrity. This question is more important than any comparison. I wish you clarity in the invisible period.”

Another user wrote: “So Anthropic, what you’re saying isn’t actually honest? And you can’t continue working there in good conscience. Good job, we got you.”

Several users also weighed in on AI safety. One wrote: “AI safety isn’t just about model behavior. It’s also about organizational structure, incentives, and power. That’s hard to fix. People tend to fight with each other.”
