News from China suggests that DeepSeek is preparing to unveil a new artificial intelligence model called R2. Speculation suggests the model could set off a new wave in the global artificial intelligence market, especially as leaked information points to very low pricing. According to these reports, the cost of using DeepSeek R2 may be up to 97 percent lower than that of the powerful GPT-4 model, which has the potential to change the current equations of the market.
Previously, DeepSeek R1 had demonstrated China's ability to compete with Western artificial intelligence giants. The introduction of R1 had a significant impact on financial markets and drove down the stock prices of large US technology companies. It showed that developing large, advanced language models does not necessarily require the enormous investments that companies such as OpenAI claim. Now, rumors about the R2 model have raised expectations of even larger and more significant improvements.

One of the key features of DeepSeek R2 is said to be its use of an advanced Mixture-of-Experts (MoE) architecture. This architecture is reportedly designed with a novel or hybrid combination of MoE layers and dense layers to handle heavy computational workloads more efficiently. The model is also said to have about 1.2 trillion parameters, almost twice as many as the previous model, R1. This figure would place R2 alongside highly advanced models such as GPT-4 Turbo and Google's Gemini 2.0 Pro, indicating its potential power.
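The idea behind MoE efficiency is that a gating network selects only a few "expert" sub-networks per token, so most of the model's parameters stay idle on any given input. The following is a minimal, hypothetical sketch of top-k expert routing; the dimensions, expert count, and weights are made up for illustration and do not reflect DeepSeek's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token, expert_weights, gate_weights, k=2):
    """Route one token vector through the top-k of n experts."""
    scores = softmax(gate_weights @ token)        # one gate score per expert
    top_k = np.argsort(scores)[-k:]               # indices of the best experts
    # Weighted sum over only the selected experts' outputs;
    # the other experts are skipped entirely, saving compute.
    out = sum(scores[i] * (expert_weights[i] @ token) for i in top_k)
    return out / scores[top_k].sum()              # renormalize gate weights

d, n_experts = 8, 4                               # toy sizes for illustration
experts = rng.standard_normal((n_experts, d, d))  # one weight matrix per expert
gate = rng.standard_normal((n_experts, d))
y = moe_layer(rng.standard_normal(d), experts, gate, k=2)
print(y.shape)  # (8,)
```

With k=2 of 4 experts active, only half the expert parameters participate in each forward pass; at the rumored 1.2-trillion-parameter scale, this kind of sparsity is what would keep per-token compute manageable.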
But the most attention-grabbing part of the rumor is the cost of using this powerful model. According to unofficial reports, processing one million input tokens with DeepSeek R2 will cost only $0.07, and processing one million output tokens will cost about $0.27. These figures would represent a decrease of almost 97% compared with OpenAI's GPT-4 pricing. If this pricing is confirmed, DeepSeek R2 could quickly become one of the most economical and attractive options for businesses, organizations, and developers seeking to use advanced artificial intelligence on a limited budget. Such a dramatic reduction in costs could democratize access to high-end technology and bring about a serious change in the economics of artificial intelligence.
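To make the rumored rates concrete, here is a back-of-the-envelope cost calculation. The per-million-token prices come from the unconfirmed reports above; the sample workload sizes are invented purely for illustration.

```python
# Rumored DeepSeek R2 rates (unconfirmed, per the reports above)
INPUT_PER_M = 0.07    # USD per 1M input tokens
OUTPUT_PER_M = 0.27   # USD per 1M output tokens

def cost_usd(input_tokens: float, output_tokens: float) -> float:
    """Total API cost in USD for a given token workload."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Hypothetical example: a service handling 500M input and
# 100M output tokens per month.
monthly = cost_usd(500e6, 100e6)
print(f"${monthly:.2f}")  # $62.00
```

At these rates, even a fairly heavy monthly workload would cost tens of dollars rather than thousands, which is what makes the rumored pricing so disruptive if it proves accurate.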

Another notable point about R2 concerns its training infrastructure. Apparently, the model was trained on Ascend 910B chips made by Huawei. DeepSeek reportedly achieved 82% utilization of the processing cluster, reflecting the company's successful optimizations for domestic hardware. The cluster's processing power reaches 512 PetaFLOPS at FP16 precision. This success in maximizing the use of domestic resources and Chinese chips shows DeepSeek's effort to vertically integrate its supply chain and reduce dependence on foreign suppliers.
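These two rumored figures combine into a simple sustained-throughput estimate. Both numbers below come from the unconfirmed reports; the calculation itself is just arithmetic.

```python
# Implied sustained compute from the rumored cluster figures.
peak_pflops = 512      # cluster peak throughput at FP16 (rumored)
utilization = 0.82     # reported fraction of peak actually achieved (rumored)

effective_pflops = peak_pflops * utilization
print(f"{effective_pflops:.1f} PFLOPS sustained")  # 419.8 PFLOPS sustained
```

If accurate, roughly 420 PetaFLOPS of sustained FP16 compute on domestic silicon would be the substance behind the vertical-integration claim.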
It should be emphasized, however, that all of this information currently rests on rumors and unconfirmed reports. DeepSeek has not yet officially announced any details. Still, if these speculations are true and DeepSeek R2 is offered at such a price, we will undoubtedly see another great surprise in the field of artificial intelligence. Such a model could shift the balance of power in the market and pose a more serious challenge to Western competitors, especially US companies. We will have to wait and see whether DeepSeek surprises the technology world once again.


Source: wccftech



