The startup Runway has launched its latest video model, Gen-4. The model can maintain consistent scenes and characters across multiple views, a problem seen in many AI-generated videos. Runway claims that Gen-4 gives users more control to create more continuous visual stories.
Runway Gen-4: the new generation of artificial intelligence for coherent videos
According to The Verge, the Gen-4 model, now available to paid and enterprise users, allows users to keep characters and objects consistent across different views by providing a reference image. Users can then enter their desired description, and the model will produce outputs from different angles while maintaining cohesion.
For example, the company has released a video in which the image of a woman remains unchanged across different scenes and lighting conditions. This feature could be widely useful for filmmakers, content producers, and digital marketers.

How has Runway’s artificial intelligence improved the quality of generated videos?
This unveiling comes less than a year after the introduction of Gen-3 Alpha. The previous model enabled the production of longer videos, but it was controversial because it was reportedly trained on YouTube videos and copyrighted material. The new Gen-4 model focuses on resolving the problem of sudden changes in generated images, allowing users to keep scenes and characters fixed and unchanged.

Gen-4 uses advanced deep learning techniques to improve visual cohesion and give users more control over the video production process. The model is currently available to Runway’s paid and enterprise users.
