Microsoft has announced the development of the largest 1-bit artificial intelligence model to date, named BitNet b1.58 2B4T. According to the Redmond-based company, the model is open source and optimized to run on conventional processors such as Apple's M2.
1-bit models, also called BitNets, are compressed versions of artificial intelligence models designed to perform well with limited hardware resources. In these models, weights are represented with only three values: -1, 0, and 1, which dramatically reduces memory consumption and increases execution speed.
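To make the idea concrete, here is a minimal sketch of how a float weight matrix can be mapped to the three values -1, 0, and 1. It loosely follows the "absmean" scaling scheme described in the BitNet b1.58 paper; the function name and the exact rounding details are illustrative assumptions, not Microsoft's actual implementation.

```python
import numpy as np

def ternary_quantize(weights: np.ndarray):
    """Quantize a float weight matrix to the values {-1, 0, 1}.

    Illustrative sketch: scale by the mean absolute value of the
    weights, then round each entry to the nearest of -1, 0, or 1.
    The float scale is kept so activations can be rescaled later.
    """
    scale = np.mean(np.abs(weights)) + 1e-8  # avoid division by zero
    quantized = np.clip(np.round(weights / scale), -1, 1).astype(np.int8)
    return quantized, scale

# Example: quantize a small random weight matrix
w = np.random.randn(4, 4).astype(np.float32)
q, s = ternary_quantize(w)
print(q)  # every entry is -1, 0, or 1
```

Because each weight now needs fewer than two bits instead of 16 or 32, the quantized matrix is far smaller, and matrix multiplication reduces to additions and subtractions, which is what makes these models fast on ordinary CPUs.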
According to the TechCrunch report, Microsoft's new model has 2 billion parameters and was trained on a dataset containing 4 trillion tokens (equivalent to about 33 million books).
Performance of Microsoft's 1-bit model

According to the published results, BitNet b1.58 2B4T outperformed comparable models such as Llama 3.2 1B (Meta), Gemma 3 1B (Google), and Qwen 2.5 1.5B (Alibaba) in benchmarks such as GSM8K (elementary mathematics) and PIQA (physical commonsense reasoning).
Microsoft has also announced that, in some cases, the model runs up to two times faster than similar models while consuming only a fraction of the memory they require.
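A rough back-of-envelope calculation shows where the memory savings come from. The figures below are illustrative arithmetic only, not Microsoft's measured numbers; real runtimes add overhead for activations, embeddings, and packing metadata.

```python
# Illustrative memory comparison for a 2-billion-parameter model.
params = 2_000_000_000

fp16_bytes = params * 2            # 16 bits per weight in FP16
ternary_bytes = params * 1.58 / 8  # ~1.58 bits per weight when packed

print(f"FP16:    {fp16_bytes / 1e9:.1f} GB")    # 4.0 GB
print(f"Ternary: {ternary_bytes / 1e9:.2f} GB")  # ~0.40 GB
```

The roughly tenfold reduction in weight storage is what makes it plausible to run a 2-billion-parameter model on an ordinary laptop processor.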
However, to achieve this performance, the model must be run with Microsoft's dedicated framework, bitnet.cpp, which is currently compatible only with limited hardware and does not support GPUs (which play an important role in running artificial intelligence models).
This shows that although BitNets have high potential for use in low-power, resource-constrained devices, hardware compatibility remains one of the main obstacles to their development.