Luma AI launches Ray3 generative video model

Luma AI has launched Ray3, its latest generative video model, billed as the first video model built to think like a creative partner. Unlike earlier models that could only generate visuals with limited control, Ray3 is the first system, the company says, that can reason in visuals and concepts, evaluate its own outputs, and refine its results on the fly.

(Source: Luma AI)

Ray3 delivers cinematic-quality, HDR-native video up to 10 seconds long, with more natural motion, fewer hallucinations, and features designed for real creative use, including Draft Mode for faster iteration—enabling users to explore dozens of ideas up to 20× faster—and native 1080p generation. Built on a new multimodal reasoning system, Ray3 can better understand the user’s creative intent, plan coherent scenes, maintain character consistency, and produce motion that feels fresh and natural, according to Luma AI.

Ray3 runs not only on the company’s own Dream Machine platform but also in the Adobe Firefly app, making Adobe the first third-party launch partner. Ray3 is available in both Firefly’s video generation module and Firefly Boards.