Startup Luma Labs, which previously introduced the AI video generation model Dream Machine, has now unveiled a new model. Called Ray2, this artificial intelligence model is, according to Luma Labs, faster, better at understanding real-world physics, and capable of producing more realistic videos than similar models such as OpenAI's Sora.
According to published reports, Ray2 can detect interactions between different subjects, as well as the type of those interactions, which increases its realism. With Ray2, users can generate high-quality 10-second videos from text prompts or an image. The model is available through Amazon's AWS Bedrock service or Dream Machine.

Examples of videos made with Ray2 artificial intelligence
This model is currently available only to premium users, but some social media users have posted videos made with it. Amit Jain, founder and CEO of Luma AI, said on X that the model offers "fast, natural, consistent motion and physics" and is trained on 10 times more data than the original Ray1 model. Currently, Ray2 can only generate video from text.
He also posted the following video of a rotating opaque orb, made with Ray2.

The video below is another example from a Ray2 user, showing a cat running across furniture:

In recent times, we have seen the release of many artificial intelligence models for video generation. In addition to Sora, Runway and Kling are among the other models in this field. Pika 2.0 and Google's Veo 2 are other models that were unveiled in 2024. Even Luma recently updated its Dream Machine platform to add still-image generation, as well as releasing an iOS app.
