Meta Unveils Video AI Model “Movie Gen”
The model can generate video and audio clips from user prompts.
Meta is stepping up its competition with AI giants like OpenAI by unveiling a new artificial intelligence model named Movie Gen, designed to generate both video and audio clips from user prompts. This announcement signals Meta's ambition to expand its presence in the generative AI landscape, where companies like OpenAI and ElevenLabs have been leading the way.
Meta shared examples that highlight Movie Gen’s capabilities, including videos of animals swimming and surfing, and personalized clips in which users' photos were transformed to show them painting on a canvas. Beyond video, the model also generates background music and sound effects synchronized to the generated content, and it can edit pre-existing videos, giving creators a range of tools to work with.
In one demo, Movie Gen placed pom-poms in the hands of a man running alone through the desert. In another, it transformed a dry parking lot where a man was skateboarding into a scene with splashing puddles. According to Meta, Movie Gen can create videos up to 16 seconds long and audio clips up to 45 seconds. Data shared by the company suggests the model performs comparably to offerings from other companies in the space, including Runway, OpenAI, ElevenLabs, and Kling.
Meta's announcement comes at a time when the entertainment industry is grappling with how to leverage generative AI video technologies. Back in February, OpenAI introduced Sora, a tool capable of creating cinematic-quality videos from text prompts. Since then, the film and television sectors have shown a growing interest in these technologies, seeking ways to streamline the creative process. However, there are also concerns, particularly regarding whether these tools rely on copyrighted materials for training, often without proper authorization. Beyond Hollywood, lawmakers have raised alarms about the potential misuse of AI-generated content, especially deepfakes.
Meta, which has previously released models like Llama for open developer use, is taking a more cautious approach with Movie Gen. The company indicated that it is evaluating the risks tied to each model and, at least for now, has no plans to release Movie Gen widely to developers. Instead, Meta aims to collaborate closely with the entertainment industry and content creators, and intends to integrate the tool into its own products in the coming year. The company said Movie Gen was built using a combination of licensed and publicly available datasets.
Meta's latest step with Movie Gen is part of an ongoing conversation about how AI will shape the future of entertainment, with creators, companies, and lawmakers all keeping a close watch on these developments.