The CEO of Anthropic, Dario Amodei, has examined the challenge of explaining the behavior of artificial intelligence models in a new essay, saying he intends to make the technology's mechanisms understandable by 2027. The company aims to be completely transparent about how artificial intelligence models work.
In a recently published essay, Dario Amodei, CEO of Anthropic, spoke of his concerns about the lack of transparency in advanced artificial intelligence models. He emphasized that Anthropic aims to achieve full transparency into how its models work by 2027. Amodei says this is essential not only to increase confidence in the use of artificial intelligence, but also to prevent its potential risks across the economic, security, and technology sectors.

In the essay, the CEO of Anthropic explains that the company is working specifically on how to analyze and understand the decision-making processes of its models. Anthropic has now been able to trace the reasoning paths of its models using new methods, but Amodei believes a long road still lies ahead.
Anthropic hopes that, with further progress in this area, it will be able to make model behavior more transparent and more reliable, which could ultimately yield significant commercial and safety benefits.

In the essay, titled “The Urgency of Interpretability,” the CEO of Anthropic said the company has made initial progress in tracing the decision-making processes of its artificial intelligence models, but Amodei emphasized that more research is needed to complete these systems, especially as models grow in capability and complexity.
Anthropic is one of the leading companies in the field of “mechanistic interpretability,” a branch of research that aims to open the black box of artificial intelligence and understand why these models make the decisions they do. According to Amodei, despite significant improvements in the performance of artificial intelligence models, there is still no precise understanding of how they reach their decisions.
“When a generative AI system summarizes, say, a financial document, we don’t know why it chooses certain words, or why, despite being highly accurate, it sometimes makes a mistake,” he says. Anthropic wants to understand why.
Amodei also cited Anthropic co-founder Chris Olah, who believes that artificial intelligence models are “grown more than they are built”: researchers have been able to improve the performance of these models, but they still don’t know exactly how that process happens.
