Meta has unveiled its new artificial intelligence model, Movie Gen, which is capable of producing high-definition videos with audio from text. The unveiling comes a few months after the introduction of Sora, OpenAI's text-to-video model, and underscores the increasing competition in this area of artificial intelligence.
Movie Gen uses text prompts to generate new videos as well as edit existing images and videos. The model can produce videos in different aspect ratios and synthesize audio matched to the picture, including ambient noise, sound effects, and background music.
Movie Gen's capabilities are not limited to generating new videos. The model can create personalized videos from images or modify elements within existing footage. For example, Meta showed a video generated from a headshot of a woman, depicting her sitting next to some pumpkins while drinking. In addition, Movie Gen can change the style of existing videos or add new elements to them.

Meta stated that Movie Gen was trained on a combination of licensed and publicly available data, but did not specify the exact details of that data. Meta Chief Product Officer Chris Cox said on Threads that Movie Gen will not be released as a product anytime soon because of the high cost of running the model and its long video generation times.
Nevertheless, the introduction of Movie Gen represents a significant advance in AI video generation. The technology could eventually reshape industries including filmmaking, advertising, and video games, and it has already raised concerns about image ownership and its impact on the livelihoods of artists and filmmakers.




