Overview
- Luma says its new Luma Agents handle end-to-end creative work across text, images, video and audio, and can coordinate with third-party models such as Google’s Veo, ByteDance’s Seedream, ElevenLabs’ voices, and Luma’s Ray 3.14.
- The agents run on Luma’s Unified Intelligence architecture, with the Uni-1 multimodal reasoning model designed to couple planning and rendering within a single system.
- According to Luma, the agents maintain persistent context across assets and collaborators and refine outputs through iterative self-critique.
- Early deployments are underway with Publicis Groupe and Serviceplan, with projects reported for brands including Adidas, Mazda and Humain.
- Luma’s rollout follows a $900 million funding round in November, which the company said will help fund a planned “supercluster” data center in Saudi Arabia to scale its operations.