Chinese tech giant Alibaba released a new version of its Qwen 2.5 artificial intelligence model on February 5, claiming that it outperforms DeepSeek and other leading models, including GPT and Llama 3.1.
According to Alibaba, the new Qwen 2.5 model scores higher on benchmarks such as Arena-Hard, LiveCodeBench, and GPQA-Diamond, and performs roughly on par with DeepSeek's model on the remaining benchmarks. The company also says its model beats GPT and Llama 3.1 in various categories.
Qwen 2.5 is a large-scale Mixture-of-Experts (MoE) model trained on more than 5 trillion tokens and refined with supervised fine-tuning and reinforcement learning from human feedback. In general, the MoE approach makes it possible to build capable AI without massive GPU clusters: each token activates only a small subset of the model's parameters, which lowers infrastructure costs compared with other large language models.
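To make that idea concrete, below is a minimal, hypothetical sketch of a top-k gated MoE feed-forward layer in PyTorch. The dimensions, expert count, and top-k value are illustrative assumptions, not Qwen 2.5's actual configuration; the point is only that each token is routed to a few experts, so most parameters stay idle on any given forward pass.

```python
# Minimal sketch of a top-k gated Mixture-of-Experts (MoE) layer.
# All sizes below are illustrative assumptions, not Qwen 2.5's real config.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores decide which experts handle each token.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):  # x: (batch, seq_len, d_model)
        scores = self.router(x)                          # (B, S, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle.
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                            # (B, S, top_k) boolean
            if mask.any():
                token_mask = mask.any(dim=-1)            # tokens routed to expert e
                gate = (weights * mask).sum(dim=-1)[token_mask].unsqueeze(-1)
                out[token_mask] += gate * expert(x[token_mask])
        return out


if __name__ == "__main__":
    layer = MoELayer()
    tokens = torch.randn(2, 16, 512)
    print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```

With 8 experts and top-k of 2, only about a quarter of the expert parameters are exercised per token, which is the rough intuition behind the lower compute and infrastructure cost of MoE models compared with dense ones.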
The model's API is now available through Alibaba Cloud. The model has also been rolled out in Qwen Chat, where you can also generate images and videos.
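As a rough illustration, Alibaba Cloud exposes its Qwen models through an OpenAI-compatible endpoint, so a call could look like the sketch below. The base URL, environment variable, and model identifier are assumptions; check Alibaba Cloud's current documentation for the exact values.

```python
# Hypothetical sketch of calling a Qwen model via Alibaba Cloud's
# OpenAI-compatible endpoint. base_url and model name are assumptions;
# verify them against the official Alibaba Cloud documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed environment variable name
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen-max",  # assumed model identifier
    messages=[{"role": "user", "content": "Explain what a Mixture-of-Experts model is."}],
)
print(response.choices[0].message.content)
```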
Alibaba's new model, like DeepSeek's before it, shows that capable artificial intelligence can be developed by optimizing model architecture rather than by pouring money into data centers and large GPU clusters.
Of course, good benchmark scores alone are not enough to make an AI model popular with users. Respecting user data privacy, offering competitive API pricing, and providing long-term support are also important factors. Overall, it remains to be seen whether Alibaba's new model can hold its own against DeepSeek and the US companies.