The French AI company Mistral has unveiled its first reasoning models.
These models compete with similar offerings such as OpenAI's o3 and Google's Gemini 2.5 Pro, and could strengthen the position of France and the European Union in the artificial intelligence race. The model family, called Magistral, reasons through problems step by step to deliver more consistent and reliable answers on topics such as mathematics and physics.
The Magistral family is available in two sizes, Small and Medium. Magistral Small has 24 billion parameters and can be downloaded under the open-source Apache 2.0 license from the Hugging Face platform.
The more advanced model, Magistral Medium, is available in preview through Mistral's Le Chat assistant, via the company's API, and through third-party cloud services.
Introducing its new reasoning models, Mistral said: "Magistral is suited to a wide range of enterprise applications, from structured calculations and programming logic to decision trees and rule-based systems. These models are fine-tuned for multi-step logic, improving interpretability and providing a thought process that users can follow."
Mistral was founded in 2023 and operates as an advanced AI research lab. The company also develops a set of AI-based services, such as the Le Chat platform and its mobile apps. To date, Mistral has raised more than $1 billion in capital.
Despite its significant resources, however, Mistral lags behind other AI labs such as OpenAI and Google in some areas, including the development of reasoning models. Benchmarks published by Mistral itself show that the Magistral family trails its competitors.
For example, on GPQA Diamond and AIME, which evaluate a model's ability to solve physics, mathematics, and science problems, Magistral Medium performed worse than Gemini 2.5 Pro and Claude Opus 4. Magistral Medium also failed to beat Gemini 2.5 Pro on the LiveCodeBench programming benchmark.