Mrinank Sharma, an artificial intelligence safety researcher at Anthropic, has resigned from his post, warning in a letter that "the world is at risk". In the letter, published on the social network X, he expressed concern about artificial intelligence, bioweapons and the general state of the world, and said he plans to distance himself from the technology industry.
According to the BBC, Sharma said in his farewell post that he had loved his time at Anthropic but that "the time has come to go". He explained that he wanted to study, write poetry and return to Britain, stepping out of public view. His resignation coincided with that of another researcher, at OpenAI, who left over concerns about that company's advertising policies.
Anthropic safety researcher warns of the global threat of artificial intelligence
Anthropic, known for its chatbot Claude, bills itself as a company with a safety-oriented approach to artificial intelligence research. It was founded in 2021 by a group of early OpenAI employees, and works simultaneously on developing commercial products and strengthening the safety of artificial intelligence systems.

At Anthropic, Sharma led a team researching AI safety mechanisms. In his resignation letter, he wrote that his work included investigating why text-generating systems are "flattering" towards users, addressing the dangers of AI-assisted bioterrorism, and researching how artificial intelligence assistants can make us less human.
In his letter, he emphasized that the danger does not come only from artificial intelligence or biological weapons, and that a series of interconnected crises are forming at the same time. Sharma wrote that he has seen time and time again "how hard it is to really put our values into practice," and noted that Anthropic is under constant pressure to let go of what is most important.
Anthropic describes itself as a "public interest company" that aims to secure the benefits of artificial intelligence and mitigate its risks. The company's main focus is on the risks of advanced "frontier" systems: systems that may be incompatible with human values.