
Toys “R” Us riles critics with “first-ever” AI-generated commercial using Sora

AI-generated commercials are here, and critics are displeased—but human work is still key.

Benj Edwards
A screen capture from the partially AI-generated Toys "R" Us brand film created using Sora. Credit: Toys R Us

On Monday, Toys "R" Us announced that it had partnered with an ad agency called Native Foreign to create what it calls "the first-ever brand film using OpenAI's new text-to-video tool, Sora." OpenAI debuted Sora in February, but the video synthesis tool has not yet become available to the public. The brand film tells the story of Toys "R" Us founder Charles Lazarus using AI-generated video clips.

"We are thrilled to partner with Native Foreign to push the boundaries of Sora, a groundbreaking new technology from OpenAI that's gaining global attention," wrote Toys "R" Us on its website. "Sora can create up to one-minute-long videos featuring realistic scenes and multiple characters, all generated from text instruction. Imagine the excitement of creating a young Charles Lazarus, the founder of Toys "R" Us, and envisioning his dreams for our iconic brand and beloved mascot Geoffrey the Giraffe in the early 1930s."

The company says that The Origin of Toys "R" Us commercial was co-produced by Toys "R" Us Studios President Kim Miller Olko as executive producer and Native Foreign's Nik Kleverov as director. "Charles Lazarus was a visionary ahead of his time, and we wanted to honor his legacy with a spot using the most cutting-edge technology available," Miller Olko said in a statement.

In the video, we see a child version of Lazarus, presumably generated using Sora, falling asleep and having a dream that he is flying through a land of toys. Along the way, he meets Geoffrey, the store's mascot, who hands the child a small red car.

Many of the scenes retain obvious hallmarks of AI-generated imagery, such as unnatural movement, strange visual artifacts, and irregularly shaped eyeglasses. In February, a few Super Bowl commercials intentionally made fun of similar AI-generated video defects, which became famous online after a fake AI-generated beer commercial and the "Pepperoni Hug Spot" clips, made using Runway's Gen-2 model, went viral in 2023.

A screen capture from the partially AI-generated Toys "R" Us brand film created using Sora.

AI-generated artwork receives frequent criticism online due to the use of human-created artwork to train AI models that create the works, the perception that AI synthesis tools will replace (or are currently replacing) human creative jobs, and the potential environmental impact of AI models, which are seen as energy-wasteful by some critics. Also, some people just think the output quality looks bad.

On the social network X, comedy writer Mike Drucker wrapped up several of these criticisms into one post, writing, "Love this commercial is like, 'Toys R Us started with the dream of a little boy who wanted to share his imagination with the world. And to show how, we fired our artists and dried Lake Superior using a server farm to generate what that would look like in Stephen King’s nightmares.'"

Other critical comments were more frank. Filmmaker Joe Russo posted: "TOYS ‘R US released an AI commercial and it fucking sucks."

More human work than you might think

Although the Toys "R" Us video uses key visual elements from Sora, it still required quite a bit of human post-production work to put together. Sora eliminated the need for actors and cameras, but producing usable generations and stitching them into a finished film still required human scriptwriters and VFX artists to fill in the AI model's shortcomings.

"The brand film was almost entirely created with Sora, with some corrective VFX and an original music score composed by Aaron Marsh of famed indie rock band Copeland," wrote Toys "R" Us in a press release.

In March, OpenAI showed off a selection of short videos created using Sora. Without context, many of the professionally produced clips—such as Air Head by shy kids—gave the impression that Sora handled all of the work and that it could produce remarkably consistent video natively. But in April, Mike Seymour of the visual effects outlet fxguide published an account of the creation of Air Head that revealed a large amount of human-powered editing and post-production work, including erasing unwanted portions of the generated video.

"While all the imagery was generated in SORA, the balloon still required a lot of post-work," wrote Seymour. "In addition to isolating the balloon so it could be re-colored, it would sometimes have a face on Sonny, as if his face was drawn on with a marker, and this would be removed in After Effects. Similar other artifacts were often removed."

"Air Head" behind-the-scenes video created by shy kids.

In a behind-the-scenes video for Air Head posted by shy kids on YouTube, the firm's animation director, Patrick Cederberg, said, "What ultimately you end up seeing took work, time, and human hands to get it looking semi-consistent. Be that through the curation, the script writing, the editing, the voiceover, the music, sound design, color correction—all the typical post-production stuff."

So while Sora apparently saved labor during the production of Air Head and the Toys "R" Us film, it's not yet a turnkey solution for instantly usable video clips with consistency across generations. But it could be a sign of what is coming in advertising, whether AI critics like it or not.

"Mock that Toys 'R' Us AI spot all you want — but it's just the beginning," wrote an advertisement copywriter named Dan Goldgeier on X. "Most consumers won't know the difference or care, and most marketers will be more than happy to make this kind of spot for less money."

