It is difficult to train a generative artificial intelligence (GAI) model with different GPUs in a single data center; it is harder still to train one across multiple data centers in different locations. Yet China has apparently done exactly that, which would make it the first country to pull off such a feat in the world of artificial intelligence.
Patrick Moorhead, senior analyst at Moor Insights & Strategy, said on X that China appears to have done very well at creating and developing AI training clusters with many more nodes than the US.
China also appears to be the first country to train a GAI model across multiple separate data centers, Moorhead said, adding that he learned of the achievement during a conversation at an unrelated meeting.
China's training of a generative artificial intelligence model across multiple data centers

Although training an AI model across multiple data centers is difficult, the technique is essential if China is to achieve its AI goals and work around US sanctions. Because Nvidia does not want to lose the Chinese market, it created the lower-powered H20 chip, which fits within the parameters the US government allows for export. However, there are rumors that even these chips may soon be banned.
Because of this uncertainty around importing Nvidia chips, Chinese researchers have been working on integrating GPUs from different brands into a single cluster. Doing so lets tech firms combine their limited number of high-end chips, such as Nvidia's A100, with less powerful but more affordable GPUs, such as Huawei's Ascend 910B or Nvidia's H20. This technique could help them cope with the shortage of high-end GPUs in China, although mixed clusters may see some performance degradation; a simplified sketch of the idea follows.
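Public reporting does not describe the software involved, but a common way to keep a mixed-vendor cluster balanced is throughput-weighted batch sharding: each device receives a share of every training batch proportional to its speed, so fast and slow GPUs finish a step at roughly the same time. The Python sketch below is a minimal, hypothetical illustration of that idea; the device names, throughput figures, and the split_batch helper are assumptions, not details of any confirmed Chinese system.

```python
# Hypothetical sketch: throughput-weighted batch sharding for a
# heterogeneous (mixed-vendor) data-parallel training cluster.
# Device names and TFLOPS figures are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    tflops: float  # rough sustained training throughput

# A mixed pool: a few high-end accelerators plus cheaper ones.
DEVICES = [
    Device("A100", 312.0),
    Device("A100", 312.0),
    Device("Ascend-910B", 280.0),
    Device("H20", 148.0),
    Device("H20", 148.0),
]

def split_batch(global_batch: int, devices: list[Device]) -> list[int]:
    """Assign each device a micro-batch proportional to its throughput,
    so fast and slow GPUs finish each step at roughly the same time."""
    total = sum(d.tflops for d in devices)
    shares = [int(global_batch * d.tflops / total) for d in devices]
    # Hand any rounding remainder to the fastest device.
    fastest = max(range(len(devices)), key=lambda i: devices[i].tflops)
    shares[fastest] += global_batch - sum(shares)
    return shares

if __name__ == "__main__":
    for dev, n in zip(DEVICES, split_batch(4096, DEVICES)):
        print(f"{dev.name:>12}: micro-batch {n}")
```

In a real system, the gradients computed from these uneven micro-batches would then be combined with an all-reduce weighted by sample count, keeping the overall update equivalent to a run on uniform hardware.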
However, China seems to have found a way around this problem as well: training a generative AI model across multiple data centers. Nothing has been disclosed about this GAI model yet, but the technique shows how Chinese researchers are maneuvering to achieve their AI ambitions; one plausible approach is sketched below.
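None of China's methods are public, but cross-datacenter training is usually tackled with infrequent synchronization, in the local-SGD or federated-averaging family of techniques, because wide-area links are far too slow for per-step gradient exchange. The sketch below assumes that family of techniques; every name and number in it is illustrative, and the toy "model" is just a vector of weights.

```python
# Hypothetical sketch: local-SGD-style training across data centers.
# Each site trains independently for `local_steps`, then all sites
# average their weights over the slow WAN link. This is an assumed,
# generic technique, not a description of any confirmed Chinese system.

import numpy as np

rng = np.random.default_rng(0)

def make_model() -> np.ndarray:
    return np.zeros(8)  # toy "weights": 8 parameters

def local_train(weights: np.ndarray, steps: int) -> np.ndarray:
    """Simulate `steps` of SGD inside one data center on its own shard."""
    w = weights.copy()
    for _ in range(steps):
        # Toy gradient pulling the weights toward a noisy local optimum.
        grad = w - rng.normal(loc=1.0, scale=0.1, size=w.shape)
        w -= 0.1 * grad
    return w

def train_across_sites(num_sites: int, rounds: int, local_steps: int) -> np.ndarray:
    global_w = make_model()
    for r in range(rounds):
        # Each data center trains on its own data; no WAN traffic here.
        site_weights = [local_train(global_w, local_steps) for _ in range(num_sites)]
        # One cheap synchronization per round: average the site weights.
        global_w = np.mean(site_weights, axis=0)
        print(f"round {r}: mean weight {global_w.mean():.3f}")
    return global_w

if __name__ == "__main__":
    train_across_sites(num_sites=3, rounds=5, local_steps=50)
```

The trade-off is statistical rather than mechanical: fewer synchronizations mean cheaper communication but noisier convergence, so tuning the number of local steps against link bandwidth is the hard part of any multi-site run.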