Training a generative artificial intelligence (GAI) model on a mix of different GPUs within a single data center is already difficult; training one across multiple data centers in different locations is harder still. Yet China appears to have done exactly that, which would make it the first country to achieve such a feat in the world of artificial intelligence.
Patrick Moorhead, senior analyst at Moor Insights & Strategy, said on X that China appears to have done very well at building AI training clusters with far more nodes than those in the US.
China also appears to be the first country to train a GAI model across multiple separate data centers, Moorhead said, adding that he learned of the achievement during a conversation at an unrelated meeting.
China's training of a generative AI model across multiple data centers
Although training an AI model across multiple data centers is difficult, the technique is essential if China is to achieve its AI goals and work around US sanctions. Because Nvidia does not want to lose the Chinese market, it created the lower-powered H20 chips, which fit within the export limits set by the US government. However, there are rumors that even these chips may soon be banned.
Because of this uncertainty around importing Nvidia chips, Chinese researchers have been working on integrating GPUs from different brands into a single cluster. By doing so, tech firms can combine their limited stock of high-end chips such as Nvidia's A100 with less powerful but more affordable GPUs such as Huawei's Ascend 910B or Nvidia's H20. This technique could help them cope with the shortage of high-end GPUs in China, although such mixed clusters may suffer performance penalties.
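The article does not say how such mixed clusters balance work, but one well-known approach is to split each global training batch in proportion to each accelerator's measured throughput, so faster chips (an A100, say) handle more samples per step than slower ones (an Ascend 910B or H20). The sketch below is purely illustrative; the device names are taken from the article, but the throughput figures are invented for the example.

```python
# Hypothetical sketch: throughput-proportional batch splitting across a
# mixed-GPU cluster. The relative throughput numbers below are invented
# for illustration, not figures from the article.

def split_batch(global_batch, devices):
    """Assign each device a share of the global batch proportional to its
    measured throughput (samples/sec), giving leftover samples from the
    integer division to the fastest devices."""
    total = sum(tp for _, tp in devices)
    shares = [[name, global_batch * tp // total] for name, tp in devices]
    leftover = global_batch - sum(n for _, n in shares)
    # Rank device indices by throughput, fastest first.
    ranked = sorted(range(len(devices)), key=lambda i: devices[i][1], reverse=True)
    for i in ranked[:leftover]:
        shares[i][1] += 1
    return {name: n for name, n in shares}

# Assumed relative throughputs (samples/sec), for illustration only.
cluster = [("A100", 300), ("H20", 120), ("Ascend-910B", 180)]
print(split_batch(1024, cluster))  # every sample assigned, none lost
```

In practice the slowest device still gates each synchronous step, which is one reason mixed clusters lose efficiency, but proportional splitting keeps the faster chips from sitting idle.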
However, China seems to have found a way around this problem as well: training a generative AI model across multiple data centers. Although no details about this GAI model are available yet, the technique shows how far Chinese researchers are willing to go to pursue their AI ambitions.
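The article gives no detail on how training is coordinated between sites, but a widely known strategy for slow inter-data-center links is local SGD: each site runs several gradient steps on its own data shard, and the sites only periodically average their model parameters. The toy scalar model below is a hypothetical illustration of that pattern, not a description of China's actual method.

```python
import random

# Toy illustration of local-SGD-style training across "data centers":
# each site takes several local gradient steps on its own data, then all
# sites average their parameters -- the only cross-site communication.
# The model (fitting one scalar) and all constants are invented.

random.seed(0)
TARGET = 3.0      # ground-truth value the sites try to learn
LR = 0.1          # learning rate
SYNC_EVERY = 5    # local steps between cross-site synchronizations

def local_steps(w, n):
    """Run n SGD steps on noisy site-local samples of the target."""
    for _ in range(n):
        sample = TARGET + random.gauss(0, 0.1)  # site-local data point
        grad = 2 * (w - sample)                 # d/dw of (w - sample)^2
        w -= LR * grad
    return w

weights = [0.0, 0.0, 0.0]   # one parameter copy per data center
for _ in range(20):         # 20 rounds of (local training + sync)
    weights = [local_steps(w, SYNC_EVERY) for w in weights]
    avg = sum(weights) / len(weights)  # periodic parameter averaging
    weights = [avg] * len(weights)

print(weights[0])  # all copies agree, close to TARGET
```

Averaging only every few steps trades a little convergence quality for far less traffic over the slow links between sites, which is the core difficulty of multi-data-center training.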
RCO NEWS