The startup Runway has launched its latest video model, Gen-4. The model can keep scenes and characters consistent across multiple shots, a persistent problem in AI-generated video. Runway claims Gen-4 gives users more control to create more coherent visual stories.
Runway Gen-4: a new generation of AI for coherent video
According to The Verge, the Gen-4 model, now available to paid and enterprise users, lets users keep characters and objects consistent across different shots by providing a reference image. Users can then enter a description, and the model produces outputs from different angles while maintaining that consistency.
For example, the company released a demo video in which a woman's appearance remains unchanged across different scenes and lighting conditions. This capability could be widely useful for filmmakers, content creators, and digital marketers.
How has Runway’s AI improved the quality of generated videos?
The unveiling comes less than a year after the introduction of Gen-3 Alpha. That earlier model enabled longer video generation, but it drew controversy over reports that it had been trained on YouTube videos and copyrighted films. Gen-4 focuses on eliminating abrupt changes in generated imagery, allowing users to keep scenes and characters fixed and unchanged.

Gen-4 uses advanced deep-learning techniques to improve visual coherence and give users more control over the video-production process. The model is currently available to Runway's paid and enterprise users.
RCO NEWS