Adobe’s new text-to-video AI model avoids licensing pitfalls, upping marketers’ confidence
News of the company’s forthcoming Firefly Video Model arrives amid a surge in interest in AI-generated video from marketers and other creative industries.
Adobe says its Firefly Video Model “excels at generating videos of the natural world.” / Adobe
Adobe will roll out a new AI-generated video tool in beta later this year, the software giant announced on Wednesday. The tool builds upon Adobe’s existing suite of Firefly AI models, which first hit the market with a text-to-image model last spring.
With the input of creatives and video editors, Adobe is “developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage,” wrote Ashley Still, senior vice-president and general manager of creative product group at Adobe, in a blog post announcing the debut.
While text- and image-generating models have been spreading widely through creative industries for well over a year now, AI-generated video is still viewed by many as something of a new frontier, replete with huge potential but also serious risks. OpenAI unveiled its text-to-video model Sora in February but only released it to a select group of red teamers and creative professionals. Smaller AI companies, such as Runway, have also been making rapid strides in the field.
A handful of brands have embraced AI-generated video for marketing purposes, but some of those efforts have been either tongue-in-cheek gags or the unwilling subjects of mockery. In June, for example, Toys R Us released a minute-long brand film created using Sora; it was, in short order, widely criticized on social media.
Adobe’s new video model has been trained exclusively on the company’s own licensed content, a quality that could greatly boost its appeal for marketing teams, who “do not have to worry about infringing on other brands or intellectual property,” Zeke Koch, vice-president of product management at Adobe Firefly, told The Drum.
And some marketers are already expressing excitement. “While our teams have already been working with generative video tools from Runway [and other companies], we’ve only been able to consider them for internal and R&D purposes due to the [unlicensed] content their models have been trained on,” says James Young, head of creative innovation at ad agency BBDO. “Firefly’s video tool has the capacity to massively expand our sandbox.”
The deeply entrenched presence of Adobe’s tools in the day-to-day operations of many marketers could also boost the willingness of those professionals to experiment with Firefly Video – and with AI-generated video more broadly.
“Most marketers and agencies are still quite cautious with deploying raw generative AI [video] content into the wild, but there should be no doubt that these tools will become commonplace in agency workflows – especially as they get built into popular software suites like those from Adobe,” says Jeremy Lockhorn, senior vice-president of creative technologies and innovation at the 4A’s.
Wednesday’s announcement paints Firefly Video – which can generate five-second video clips from text prompts – as a comprehensive creative tool for videographers. “The ever-increasing demand for fresh, short-form video content means editors, filmmakers and content creators are being asked to do more and in less time,” Still wrote. “At Adobe, we’re leveraging the power of AI to help editors expand their creative toolset so they can work in these other disciplines, delivering high-quality results on the timelines their clients require.”
The model also “excels at generating videos of the natural world,” according to Still. Short AI-generated video clips included in the blog post depict an erupting volcano, a Sahara-like desert landscape with wisps of sand blowing off the crests of dunes and a snow-blanketed forest at sunset.
The tool allows video editors to implement a variety of ‘camera controls,’ like angle and zoom, and also to create ‘complementary shots’ from existing footage. One example shows real footage of a young girl looking at a dandelion through a magnifying glass, followed by an AI-generated clip of the close-up flower viewed from the girl’s perspective and created from the prompt: “Detailed extremely macro closeup view of a white dandelion viewed through a large red magnifying glass.”
Adobe also plans to release a feature called Generative Extend for Premiere Pro later this year, which will allow editors to lengthen AI-generated clips by two seconds to fill in gaps in footage.
While the sophistication and availability of text-to-video AI models have been advancing rapidly, the technology is still in its infancy, and plenty of technical bugs remain. These kinds of models are notoriously bad at depicting text and certain fine details of the human anatomy, such as hands, and may make unpredictable errors – not unlike the tendency for ChatGPT and other text-generating models to hallucinate information when responding to user prompts.
But companies like Adobe and OpenAI are continuing to invest in the technology, banking on the belief that it will have a transformative impact on commercial industries like marketing.
BBDO’s Young shares this vision. “While these tools still have their limitations when it comes to the quality of the creative output, they’re certainly now at a place where we can create and scale great content for a lot of our most high-volume media placements,” he says. “And these tools are only going to get more powerful, capable of creating even better content and further empowering us to realize our creative visions at a speed, cost and scale never before possible.”