Dario Amodei, the CEO of Anthropic, has examined the challenge of explaining how artificial intelligence models work in a new essay, saying he aims to reliably understand the technology's inner mechanisms by 2027. The company intends to become fully transparent about how its artificial intelligence models operate.
In the recently published essay, Amodei spoke of his concerns about the lack of transparency in advanced artificial intelligence models. He emphasized that Anthropic wants to achieve full transparency into how its models work by 2027. Amodei says this is essential not only to increase confidence in using artificial intelligence, but also to prevent its potential risks across the economic, security, and technology sectors.
In the essay, the Anthropic CEO explains that his company is working specifically on how to analyze and understand the decision-making processes of models. The company has recently managed to trace the reasoning paths of its models using new techniques, but Amodei believes there is still a long road ahead.
Anthropic hopes that, with further progress in this area, it will be able to understand and explain the behavior of its models more precisely, which could ultimately bring significant commercial and safety benefits.

In the essay, titled “The Urgency of Interpretability,” the Anthropic CEO said the company has made initial progress in tracing the decision-making processes of its artificial intelligence models, but Amodei emphasized that more research is needed to fully understand these systems, especially as models grow more powerful and complex.
Anthropic is one of the leading companies in the field of “mechanistic interpretability,” a branch of research that aims to open the black box of artificial intelligence and understand why these models make the decisions they do. According to Amodei, despite significant improvements in the performance of artificial intelligence models, there is still no precise understanding of how they make decisions.
“When a generative AI system summarizes, for example, a financial document, we don’t know why it chooses certain words, or why it is occasionally wrong despite usually being accurate,” he says. Anthropic wants to understand the reason for this.
Amodei also cited Chris Olah, a co-founder of Anthropic, who believes that artificial intelligence models are “grown more than they are built”: researchers have found ways to improve the performance of these models, but they still do not know exactly how that improvement happens.