Nvidia has unveiled its next generation of powerful artificial intelligence chips, headlined by the Blackwell Ultra GB300 and Vera Rubin. The technology giant, which now makes more than $2,300 in profit every second, has seen its data center business surge on the back of the AI boom; even its networking hardware now brings in more money than its gaming graphics processors.
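As a rough sanity check, that per-second figure follows directly from annual profit. The sketch below assumes roughly $73 billion of annual net income (approximately Nvidia's most recent fiscal year), a round number used here purely for illustration.

```python
# Back-of-the-envelope check of the "profit per second" figure.
# The ~$73B annual net income is an assumed round number for illustration.
annual_profit_usd = 73e9
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000

profit_per_second = annual_profit_usd / seconds_per_year
print(f"~${profit_per_second:,.0f} of profit per second")  # ~$2,315
```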
Nvidia intends to reinforce its dominance in this field by introducing a new generation of AI graphics processors: the Blackwell Ultra GB300 (shipping in the second half of this year), Vera Rubin (second half of next year), and Rubin Ultra (second half of 2027).
Blackwell Ultra, due to launch this year, is not a new architecture, despite what the name might suggest. Nvidia promised last year that it would start rolling out new AI chips at an unprecedented annual cadence. Even so, during his GTC keynote, CEO Jensen Huang sped right past Blackwell Ultra to focus on the next-generation Vera Rubin architecture. According to the company, a full Vera Rubin rack delivers 3.3 times the performance of a comparable Blackwell Ultra rack.
Nvidia is not making it easy to tell how much better Blackwell Ultra is than the original Blackwell. In a dedicated press briefing, the company said a single Blackwell Ultra chip offers the same 20 petaflops of AI performance as Blackwell, but now with 288GB of HBM3e memory instead of 192GB. Likewise, a Blackwell Ultra DGX GB300 Superpod cluster keeps the same 288 CPUs, 576 GPUs, and 11.5 exaflops of FP4 compute as the Blackwell version, but carries 300TB of memory versus 240TB.
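A minimal sketch of that comparison, using only the figures quoted above, shows that the uplift is entirely on the memory side; the dictionary labels are illustrative.

```python
# Relative memory uplift of Blackwell Ultra over Blackwell; compute stays flat.
chip_hbm_gb = {"Blackwell": 192, "Blackwell Ultra": 288}   # HBM3e per chip
superpod_mem_tb = {"Blackwell Superpod": 240, "GB300 Superpod": 300}

chip_gain = chip_hbm_gb["Blackwell Ultra"] / chip_hbm_gb["Blackwell"] - 1
pod_gain = superpod_mem_tb["GB300 Superpod"] / superpod_mem_tb["Blackwell Superpod"] - 1
print(f"Per-chip HBM: +{chip_gain:.0%}, Superpod memory: +{pod_gain:.0%}")
# Per-chip HBM: +50%, Superpod memory: +25%
```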
Instead, Nvidia compared the new Blackwell Ultra with the H100, its 2022 chip that played a key role in the company's remarkable success in artificial intelligence. In this comparison, Nvidia claims Blackwell Ultra offers 1.5 times the FP4 inference performance and can significantly accelerate AI reasoning. For example, an NVL72 cluster can run an interactive version of the DeepSeek-R1 671B model and deliver answers in just ten seconds, where an H100 takes about 1.5 minutes. Nvidia attributes this to the new chip processing 1,000 tokens per second, ten times the rate of its 2022 chips.
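To see how that throughput claim maps onto the quoted latencies, here is an illustrative back-of-the-envelope estimate; the ~10,000-token response size is an assumed round number chosen to match Nvidia's example, not a figure from the announcement.

```python
# Illustrative mapping from token throughput to response latency for an
# interactive DeepSeek-R1 671B query. The response length is an assumption.
response_tokens = 10_000
throughput_tok_per_s = {"GB300 NVL72": 1_000, "H100-class (2022)": 100}

for system, rate in throughput_tok_per_s.items():
    seconds = response_tokens / rate
    print(f"{system}: ~{seconds:.0f} s (~{seconds / 60:.1f} min)")
# GB300 NVL72: ~10 s (~0.2 min)
# H100-class (2022): ~100 s (~1.7 min)
```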

Some companies will also be able to buy individual Blackwell Ultra chips. Nvidia is introducing the DGX Station, a desktop computer with a single GB300 Blackwell Ultra chip on board, 784GB of unified system memory, built-in 800Gbps networking, and 20 petaflops of AI performance. Asus, Dell, HP, Boxx, Lambda, and Supermicro will also work with Nvidia to offer versions of this computer.
In addition, Nvidia will launch the GB300 NVL72, a single rack that delivers 1.1 exaflops of FP4 compute, 20TB of HBM memory, 40TB of fast memory, 130TB/s of NVLink bandwidth, and 14.4TB/s of networking.
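As a quick consistency check between the rack-level and chip-level numbers, the sketch below multiplies the 72 GPUs implied by the product name by the per-chip 288GB of HBM3e cited earlier; rounding the result down to 20TB is assumed to be Nvidia's.

```python
# Does the NVL72 rack's HBM total line up with the per-chip figure quoted above?
gpus_per_rack = 72            # implied by the "NVL72" name
hbm_per_gpu_gb = 288          # Blackwell Ultra HBM3e per chip, cited earlier

total_hbm_tb = gpus_per_rack * hbm_per_gpu_gb / 1000
print(f"{gpus_per_rack} x {hbm_per_gpu_gb}GB ≈ {total_hbm_tb:.1f}TB of HBM per rack")
# 72 x 288GB ≈ 20.7TB, consistent with the quoted 20TB
```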

The Vera Rubin and Rubin Ultra chips, due in the second half of 2026 and 2027 respectively, are expected to deliver a bigger jump in performance. The Rubin chip offers 50 petaflops of FP4 compute, a significant increase over Blackwell's 20 petaflops. Rubin Ultra, which effectively packages two connected Rubin GPUs, doubles that to 100 petaflops of FP4 and nearly quadruples the memory to 1TB.
A full NVL576 rack of Rubin Ultra is claimed to deliver 15 exaflops of FP4 inference and 5 exaflops of FP8 training. According to Nvidia, that is roughly 14 times the performance of the Blackwell Ultra rack due to ship this year. Nvidia has also disclosed that Blackwell has already generated $11 billion in revenue; notably, its four largest customers alone have purchased 1.8 million Blackwell chips so far.
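A quick sketch of the arithmetic, using only the rack and chip figures quoted above, shows how those headline multiples line up:

```python
# Cross-check of Nvidia's generation-over-generation claims (FP4 figures above).
rack_fp4_exaflops = {"GB300 NVL72 (Blackwell Ultra)": 1.1, "NVL576 (Rubin Ultra)": 15.0}
chip_fp4_petaflops = {"Blackwell": 20, "Rubin": 50, "Rubin Ultra": 100}

rack_ratio = rack_fp4_exaflops["NVL576 (Rubin Ultra)"] / rack_fp4_exaflops["GB300 NVL72 (Blackwell Ultra)"]
chip_ratio = chip_fp4_petaflops["Rubin"] / chip_fp4_petaflops["Blackwell"]
print(f"Rubin Ultra rack vs Blackwell Ultra rack: ~{rack_ratio:.0f}x")  # ~14x
print(f"Rubin chip vs Blackwell chip: {chip_ratio:.1f}x")               # 2.5x
```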

Nvidia is positioning these new chips as key building blocks for the future of AI computing. The company is pushing the idea that as artificial intelligence advances, companies' need for processing power will keep growing, even though DeepSeek's efficient model recently challenged investors' assumptions and triggered a drop in Nvidia's stock price. At the GTC conference, CEO Jensen Huang said the industry now needs 100 times more compute "than we thought we needed this time last year."
Huang also announced that Nvidia's next architecture after Vera Rubin will be called Feynman and will arrive in 2028, presumably named in honor of the famous physicist Richard Feynman. He noted as well that members of astronomer Vera Rubin's family were present in the audience.