Microsoft has announced the development of its largest 1-bit artificial intelligence model to date, named BitNet b1.58 2B4T. According to the Redmond-based company, the model is open source and optimized to run on conventional processors such as Apple's M2.
1-bit models, or so-called "bitnets", are compressed versions of artificial intelligence models designed to perform well on limited hardware. In these models, weights are represented with only three values, -1, 0 and 1, which dramatically reduces memory consumption and speeds up execution.
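The core idea can be illustrated with a short sketch in Python. This follows the absmean-style rounding described in the original BitNet b1.58 paper and is purely illustrative; it is not Microsoft's actual implementation, and the function name is hypothetical.

import numpy as np

def ternary_quantize(weights: np.ndarray):
    # Round a float weight matrix to {-1, 0, 1} plus a single scale factor.
    # Illustrative sketch of absmean-style quantization, not Microsoft's code.
    scale = np.abs(weights).mean() + 1e-8                 # one scalar per matrix
    ternary = np.clip(np.round(weights / scale), -1, 1)   # values in {-1, 0, 1}
    return ternary.astype(np.int8), scale                 # far smaller than float32

# Example: a tiny 2x3 weight matrix
w = np.array([[0.4, -1.2, 0.05], [0.9, -0.3, 1.1]])
q, s = ternary_quantize(w)
# q holds small integers; the original weights are approximated by q * s

Because every stored weight is one of only three values, multiplications in matrix products collapse into additions and subtractions, which is the main source of the memory and speed gains claimed for bitnets.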
According to the TechCrunch report, the new Microsoft model has 2 billion parameters and was trained on a dataset of 4 trillion tokens (equivalent to about 33 million books).
Microsoft's 1-bit model performance
According to the results, BitNet b1.58 2B4T outperformed comparable models such as Llama 3.2 1B (Meta), Gemma 3 1B (Google) and Qwen 2.5 1.5B (Alibaba) on tests such as GSM8K (grade-school mathematics) and PIQA (physical commonsense reasoning).
Microsoft has also announced that, in some cases, the model runs up to two times faster than similar models while consuming only a fraction of the memory they need.
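For a sense of scale (a rough back-of-the-envelope estimate, not a figure reported by Microsoft): encoding three possible values takes about log2(3) ≈ 1.58 bits per weight, which is where the "b1.58" in the model's name comes from. At that rate, 2 billion weights fit in roughly 0.4 GB, compared with about 4 GB for the same weights stored in 16-bit floating point.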
However, to achieve this performance, the model must be run with Microsoft's dedicated framework, bitnet.cpp, which is currently compatible with only a limited range of hardware and does not support GPUs (which play a central role in running artificial intelligence models).
This shows that although bitnets have high potential for use in low-power, resource-constrained devices, hardware compatibility remains one of the main obstacles to their adoption.