Figure AI recently unveiled its advanced Helix model. Helix, which falls into the Vision-Language-Action (VLA) category, allows humanoid robots to identify and move unfamiliar objects using simple language commands and to perform complex tasks such as multi-robot collaboration.
Helix performs upper-body movements at a rate of 1 Hz by combining scene understanding (System 2) with high-frequency motor control (System 1). It also supports zero-shot learning, meaning robots can acquire new skills through language commands alone, without manual programming.
If the demonstration video is accurate, this progress reflects Figure AI's independent capability to develop advanced language models without relying on its partnership with OpenAI, which was recently ended. By reducing the need for specialized programming, the technology lowers the cost of developing multipurpose robots and could accelerate their adoption in home and industrial applications.
The ability of robots to collaborate is one of the most remarkable aspects of this model and could drive significant advances in the robotics industry. In the future, such models may lead to the automation of everyday tasks such as picking up equipment, moving objects, and operating in complex, crowded environments.
RCO NEWS