adventures in living candy —

Runway’s latest AI video generator brings giant cotton candy monsters to life

New Gen-3 Alpha AI video generator can create detailed humans and surreal situations.

A few limitations

While these demos look fun at first glance, it's worth mentioning a few drawbacks of an announcement like this. Since Gen-3 is not yet public, we have not had the chance to evaluate it ourselves. That means that even if you take Runway's stated claim ("All of the videos on this page were generated with Gen-3 Alpha with no modifications") at face value, the videos were very likely cherry-picked from a larger pool of outputs for showing especially good results.

Also, all image and video synthesis models require large datasets of existing images or video for training, usually either scraped from online sources without permission or licensed from rights holders. Runway has not said where it obtained the training data for Gen-3, though it says the model was trained on both videos and still images.

That said, taking them at face value, the demo videos appear impressive and state-of-the-art (an ever-moving target) for video synthesis. If the tech keeps improving over the next few years, it's likely that video synthesis clips will eventually find their way into professional video projects in some form.

Gen-3 Alpha prompt: "A man made of rocks walking in the forest, full-body shot."

While media has never accurately captured reality, photorealistic video was, for a long time, largely anchored to real objects and situations (barring expensive special effects and CGI departments). If a fine enough level of generative control is achieved, AI video tech stands poised to bring that big-budget capability to low-budget video productions, which may dramatically lower the cost of filmmaking in the future. But with some entertainment industry jobs potentially at stake—including those of visual effects teams, actors, and set designers—we expect to see struggle and backlash along the way.

As mentioned, Gen-3 Alpha is not yet available to the public, but the company offers an inquiry sign-up for commercial entities that might want to fine-tune the model for future commercial use. Runway says that Gen-3's release, whenever it comes, will be accompanied by content safeguards, such as an in-house visual moderation system and support for C2PA provenance standards.

A recap of AI video synthesis on Ars Technica

Since 2022, we've covered a number of AI video synthesis models. We've also missed a few notable projects, such as Phenaki (mentioned briefly in one piece), Runway's Gen-1, Pika (mentioned in a roundup syndicated from FT), Luma Dream Machine, and Kling (both mentioned above). To provide a brief rundown of where the technology has been so far, here's a list of related Ars Technica articles. This is as much for our benefit as it is for yours because it's sometimes difficult to keep all of these AI video models straight.

Even a cursory look at the progress from the earliest models above shows that AI video synthesis technology is steadily on the move, and its increasing capability is likely limited only by available compute and a sufficient supply of high-quality training data. We'll keep you posted.
