Cerebras Systems has unveiled its Wafer Scale Engine 3 (WSE-3), which the company bills as the world's fastest artificial intelligence chip.
Powering the Cerebras CS-3 AI supercomputer, the WSE-3 reportedly delivers twice the performance of its predecessor, the WSE-2, at the same power consumption and price.
The chip is capable of training artificial intelligence models with 24 trillion parameters, a significant advance over previous generations.
The WSE-3 is built on TSMC's 5nm process and carries 44GB of on-chip SRAM, four trillion transistors, and 900,000 AI-optimized compute cores. Its peak AI performance of 125 petaflops is roughly equivalent to 62 Nvidia H100 GPUs.
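As a rough sanity check on that equivalence (assuming an H100 delivers on the order of 2 petaflops of FP16 compute, an approximate figure that is not stated in the article), the arithmetic works out as follows:

```python
# Back-of-envelope check of the "WSE-3 = ~62 H100s" comparison.
# Assumption: one Nvidia H100 provides roughly 2 PFLOPS of FP16
# compute (approximate figure, not from the article).
wse3_pflops = 125.0        # peak AI performance of the WSE-3
h100_pflops = 2.0          # assumed per-GPU throughput
equivalent_gpus = wse3_pflops / h100_pflops
print(round(equivalent_gpus))  # prints 62, matching the article's figure
```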
The CS-3 supercomputer built around the WSE-3 is designed to train next-generation artificial intelligence models 10 times larger than GPT-4 and Gemini. With a memory system of up to 1.2 petabytes, it can store 24-trillion-parameter models in a single logical memory space, simplifying the training process and increasing developer productivity.
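A quick calculation shows why 1.2 petabytes gives comfortable headroom for a 24-trillion-parameter model (the 16-20 bytes-per-parameter figure below is a common estimate for mixed-precision training state, not a number from the article):

```python
# Memory budget for a 24-trillion-parameter model in 1.2 PB.
params = 24e12             # 24 trillion parameters
memory_bytes = 1.2e15      # 1.2 petabytes
bytes_per_param = memory_bytes / params
print(bytes_per_param)     # prints 50.0 bytes available per parameter
# Mixed-precision Adam-style training typically needs roughly 16-20
# bytes per parameter for weights, gradients, and optimizer state
# (assumption, not stated in the article), so 50 bytes/parameter
# leaves room to spare.
```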
According to Cerebras, the CS-3 is optimized for both enterprise and large-scale needs, with particular strengths in energy efficiency and software simplicity: compared to GPUs, it requires 97% less code for large language models (LLMs).
Andrew Feldman, CEO and co-founder of Cerebras, said:
WSE-3 is the world's fastest artificial intelligence chip, built for the most advanced workloads, from mixture-of-experts models to 24-trillion-parameter models. We're excited to bring WSE-3 and CS-3 to market to help solve today's biggest AI challenges.
According to the company, it already has a large number of CS-3 orders across the enterprise, government, and international sectors. The CS-3 will also play an important role in the strategic partnership between Cerebras and G42, which have previously built the Condor Galaxy 1 and 2 AI supercomputers, together delivering 8 exaFLOPs of AI computing power. A third system, Condor Galaxy 3, is currently in development; built from 64 CS-3 systems, it will provide a further 8 exaFLOPs of AI compute.
RCO NEWS