Chinese tech company Tencent has unveiled a new artificial intelligence model called HunyuanWorld-Voyager that can turn a single photo into 3D-style video.
According to reports, the new model lets users set the camera's direction and move through virtual scenes generated from the photo. The model produces video and depth data simultaneously, enabling 3D reconstruction without the need for traditional modeling tools.
The results are not true 3D models, however, but two-dimensional videos that simulate camera movement through a 3D environment while maintaining spatial consistency. Each generation produces only 49 frames (about two seconds of video), but multiple clips can be chained together into sequences several minutes long.
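As a rough illustration of the chaining idea, the sketch below simply joins several 49-frame clips into one longer video file. It assumes the clips already exist as lists of frames and uses the imageio library (with its ffmpeg backend) for writing; how each clip is generated is handled by Tencent's released code and is not shown here.

```python
# Sketch: joining several ~2-second (49-frame) clips into one longer video.
# Assumes each clip is a list of H x W x 3 uint8 frames; generating those
# clips is outside this sketch. Requires: pip install imageio imageio-ffmpeg
import numpy as np
import imageio.v2 as imageio

def join_clips(clips, out_path="long_video.mp4", fps=24):
    """Concatenate frame lists end to end and write a single video file."""
    all_frames = [frame for clip in clips for frame in clip]
    imageio.mimwrite(out_path, all_frames, fps=fps)

# Example with placeholder frames (three black 49-frame clips):
clips = [[np.zeros((540, 960, 3), dtype=np.uint8)] * 49 for _ in range(3)]
join_clips(clips)
```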
The model's input is simply an image and a camera path. Movements such as forward, backward, rotation, and panning can be adjusted through its interface.
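A hedged sketch of what supplying an image and a camera path might look like follows. The camera-path structure below is runnable on its own, but the `voyager` package, the `generate_scene_video` function, and its parameters are placeholders invented for illustration, not the published interface.

```python
# Hypothetical sketch: building a camera path for an image-to-video call.
# The "voyager" package, function names, and parameters are assumptions;
# the real interface is defined by Tencent's released code.

# A camera path can be expressed as a list of per-frame instructions,
# e.g. push forward for 25 frames, then pan left for 24 frames.
camera_path = (
    [{"move": "forward", "step": 0.05} for _ in range(25)]
    + [{"move": "rotate_left", "degrees": 2.0} for _ in range(24)]
)

# Hypothetical inference call (commented out because the package name,
# function, and arguments are placeholders, not the published API):
# from voyager import generate_scene_video
# frames, depth = generate_scene_video(
#     image_path="input.jpg",
#     camera_path=camera_path,   # 49 steps -> 49 frames (~2 seconds)
#     resolution=(960, 540),
# )
```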
Tencent says the new model was trained on more than 100,000 video clips, including real-world footage and Unreal Engine renderings. The data were processed automatically by software that computes the camera movement and depth of each frame.
Limits of Tencent's AI Model
However, the constraints of the Transformer architecture mean the model can only reproduce patterns seen in its training data and makes errors in entirely new situations. For this reason, Voyager struggles to produce full 360-degree rotations.
In terms of performance, Voyager posted the highest overall score of 77.62 on a Stanford University benchmark. The model performed strongly in object control, lighting consistency, and output quality, but came second to WonderWorld in camera control.
Running the model also demands substantial hardware: at least 60 GB of GPU memory is required for 540p output. Tencent has already released the model weights on Hugging Face and has made the code available.
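For anyone who wants to fetch the weights, a minimal sketch using the huggingface_hub library is shown below; the repository ID is an assumption and should be checked against Tencent's official Hugging Face page.

```python
# Minimal sketch: downloading model weights from Hugging Face.
# Requires: pip install huggingface_hub. The repo_id below is an
# assumption; verify the exact name on Tencent's Hugging Face page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="tencent/HunyuanWorld-Voyager",  # assumed repository name
)
print(f"Model files downloaded to: {local_dir}")
```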