The French company Mistral, which develops artificial intelligence models, has unveiled its first reasoning models.
These models compete with rivals such as OpenAI's o3 and Google's Gemini 2.5 Pro, and could strengthen the position of France and the European Union in the artificial intelligence race. The model family, called Magistral, works through problems step by step, producing more consistent and reliable answers on topics such as mathematics and physics.

The Magistral family is available in two versions: Magistral Small and Magistral Medium. Magistral Small has 24 billion parameters and can be downloaded under the free Apache 2.0 license from the Hugging Face platform.
The more advanced model, Magistral Medium, is available in preview on Mistral's Le Chat, through the company's API, and via third-party cloud services.
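As an illustration of API access only, here is a minimal sketch of building a chat-completion request for Magistral Medium. The model identifier `magistral-medium-latest` is an assumption for this example, not something stated in the article; check Mistral's API documentation for the actual name.

```python
import json

# Mistral's public chat-completions endpoint; the model name used below
# is an assumption for illustration and may differ in practice.
MISTRAL_CHAT_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "magistral-medium-latest") -> str:
    """Build the JSON body for a single-turn chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_chat_request("Show, step by step, that the sum of two odd numbers is even.")
print(body)
# A real call would POST `body` to MISTRAL_CHAT_URL with an
# "Authorization: Bearer <API key>" header.
```

A reasoning model like Magistral would return its multi-step solution in the response's message content, in the same chat format used for the request.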
In introducing its new reasoning models, Mistral said: "Magistral is suited to a wide range of enterprise use cases, from structured calculations and programming logic to decision trees and rule-based systems. These models are fine-tuned for multi-step logic, improving interpretability and providing a traceable thought process for the user."
The French company Mistral was founded in 2023 and operates as a frontier artificial intelligence lab. It also develops a set of AI-based services, such as the Le Chat platform and its mobile apps. Mistral has so far raised more than $1 billion in capital.
Still, despite Mistral's significant resources, the company lags behind other artificial intelligence labs such as OpenAI and Google in some areas, including the development of reasoning models. Benchmarks published by Mistral itself show that the Magistral family trails its competitors.
For example, on GPQA Diamond and AIME, which evaluate a model's skill at solving physics, mathematics, and science problems, Magistral Medium scored lower than Gemini 2.5 Pro and Claude Opus 4. Magistral Medium was also unable to beat Gemini 2.5 Pro on the LiveCodeBench programming benchmark.