Mrinank Sharma, an artificial intelligence safety researcher at Anthropic, has resigned from his post, warning in a letter that "the world is at risk". In the letter, published on the social network X, he expressed his concern about artificial intelligence, bioweapons, and the general state of the world, and said he plans to distance himself from the technology industry.
According to the BBC, Sharma said in his farewell tweet that he loved his time at Anthropic but "the time has come to go". He explained that he wanted to study and write poetry and return to Britain to become invisible. His decision came as another researcher, at OpenAI, resigned over concerns about that company's advertising policies.
Anthropic researcher warns of the global threat of artificial intelligence
Anthropic, known for its chatbot Claude, bills itself as a company with a safety-oriented approach to artificial intelligence research. The company was founded in 2021 by a group of early employees of OpenAI. Anthropic works simultaneously on developing commercial products and strengthening the safety of artificial intelligence systems.
At Anthropic, Sharma led a team researching AI safety mechanisms. In his resignation letter, he wrote that his achievements include investigating why text-generating systems are "flattering" towards users, addressing the dangers of AI-assisted bioterrorism, and researching how artificial intelligence assistants can make us less human.
In his letter, he emphasized that the danger does not come only from artificial intelligence or biological weapons, and that a series of interconnected crises are forming at the same time. Sharma wrote that he has seen time and time again "how hard it is to really put our values into practice," and noted that Anthropic is also constantly under pressure to let go of what is most important.
Anthropic describes itself as a "public interest company" that aims to secure the benefits of artificial intelligence and mitigate its risks. The company's main focus is on the risks of advanced systems known as "frontier" systems: systems that may be incompatible with human values.