An artificial intelligence researcher left his job at the U.S. firm Anthropic this week with a cryptic warning about the state of the world, marking the latest resignation in a wave of departures over safety risks and ethical dilemmas.
In a letter posted on X, Mrinank Sharma wrote that he had achieved all he had hoped during his time at the AI safety company and was proud of his efforts, but was leaving over fears that the “world is in peril,” not just from AI but from a “whole series of interconnected crises,” ranging from bioterrorism to concerns over the industry’s “sycophancy.”
He said he felt called to writing and planned to pursue a degree in poetry and devote himself to “the practice of courageous speech.”
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he continued.
Anthropic was founded in 2021 by a breakaway group of former OpenAI employees who pledged to take a more safety-centric approach to AI development than its competitors.
Sharma led the company’s AI safeguards research team.
Anthropic has released reports outlining the safety of its own products, including Claude, its hybrid-reasoning large language model, and markets itself as a company committed to building reliable and understandable AI systems.
The company faced criticism last year after agreeing to pay US$1.5 billion to settle a class-action lawsuit from a group of authors who alleged the company used pirated versions of their work to train its AI models.
Today is my last day at Anthropic. I resigned.
Here is the letter I shared with my colleagues, explaining my decision. pic.twitter.com/Qe4QyAFmxL
— mrinank (@MrinankSharma) February 9, 2026
Sharma’s departure comes the same week that OpenAI researcher Zoë Hitzig announced her resignation in a New York Times essay, citing concerns about the company’s advertising strategy, including placing ads in ChatGPT.
“I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer,” she wrote.
“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
Anthropic and OpenAI recently became embroiled in a public spat after Anthropic released a Super Bowl advertisement criticizing OpenAI’s decision to run ads on ChatGPT.
In 2024, OpenAI CEO Sam Altman said he was not a fan of using ads and would deploy them as a “last resort.”
Last week, in a lengthy post criticizing Anthropic, he disputed the commercial’s suggestion that embedding ads in ChatGPT is deceptive.
“I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it,” he wrote, adding that ads will continue to enable free access, which he said creates “agency.”
First, the good part of the Anthropic ads: they are funny, and I laughed.
But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic…
— Sam Altman (@sama) February 4, 2026
Though they worked at competing companies, Hitzig and Sharma both expressed grave concern about the erosion of guiding principles established to preserve the integrity of AI and protect its users from manipulation.
Hitzig wrote that an “erosion of OpenAI’s own principles to maximise engagement” might already be happening at the firm.
Sharma said he was concerned about AI’s capacity to “distort humanity.”
© 2026 Global News, a division of Corus Entertainment Inc.