Meta has unveiled its new artificial intelligence model called Movie Gen, which is capable of producing high-definition videos with audio from text. This unveiling comes a few months after the introduction of Sora, OpenAI’s text-to-video conversion model, and shows the increasing competition in this field of artificial intelligence.
Movie Gen uses text prompts to generate new videos as well as edit existing images and videos. The model can produce videos in different aspect ratios and generate audio synchronized with the footage, including ambient noise, sound effects, and background music.
Movie Gen's capabilities are not limited to generating new videos. The model can create custom videos from images or modify elements within existing footage. For example, Meta demonstrated a video generated from a headshot of a woman, showing her sitting next to pumpkins while drinking. In addition, Movie Gen can change the style of existing videos or add new elements to them.
Meta stated that Movie Gen was trained on a combination of licensed and publicly available data, but did not specify the exact details of that data. Meta Chief Product Officer Chris Cox said on Threads that Movie Gen will not be released as a product anytime soon, citing the high cost of running the model and long video generation times.
Nevertheless, the introduction of Movie Gen represents a significant advance in AI video generation. The technology could eventually transform industries including filmmaking, advertising, and video games, and it has already raised concerns about the ownership of source imagery and the impact on the livelihoods of artists and filmmakers.
Source: Meta
RCO NEWS