For art's sake

Tool preventing AI mimicry cracked; artists wonder what’s next

Artists must wait weeks for Glaze defense against AI scraping amid TOS updates.

Ashley Belanger
Credit: Aurich Lawson | Getty Images

For many artists, it's a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.

Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists' styles. But they don't provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists' brands and replace them in the market.

In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI-generated images that imitated the famous photographer's style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it's not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove that AI models are referencing their works. In this largely lawless world, every upload carries risk, potentially watering down demand for an artist's own work each time they promote new pieces online.

Unsurprisingly, artists have increasingly sought protections to diminish or dodge these AI risks. Each time tech companies update their products' terms, as when Meta suddenly announced last December that it was training AI on a billion Facebook and Instagram user photos, artists frantically survey the landscape for new defenses. That's why The Glaze Project, one of the few sources of AI protections available today, recently reported a dramatic surge in requests for its free tools.

Designed to help prevent style mimicry and even poison AI models to discourage data scraping without an artist's consent or compensation, The Glaze Project's tools are now in higher demand than ever. University of Chicago professor Ben Zhao, who created the tools, told Ars that the backlog for approving a "skyrocketing" number of requests for access is "bad." And as he recently posted on X (formerly Twitter), an "explosion in demand" in June is only likely to be sustained as AI threats continue to evolve. For the foreseeable future, that means artists searching for protections against AI will have to wait.

Even if Zhao's team did nothing but approve requests for WebGlaze, its invite-only web-based version of Glaze, "we probably still won't keep up," Zhao said. He has warned artists on X to expect delays.

Compounding artists' struggles, just as demand for Glaze is spiking, the tool has come under attack from security researchers who claimed it was not only possible but easy to bypass its protections. For security researchers and some artists, this attack calls into question whether Glaze can truly protect artists in these embattled times. But for thousands of artists joining the Glaze queue, the long-term future looks so bleak that any promise of protections against mimicry seems worth the wait.

Attack cracking Glaze sparks debate

Millions have downloaded Glaze already, and many artists are waiting weeks or even months for access to WebGlaze, mostly submitting requests for invites on social media. The Glaze Project vets every request to verify that each user is human and ensure bad actors don't abuse the tools, so the process can take a while.

The team is currently struggling to approve hundreds of requests submitted daily through direct messages on Instagram and Twitter in the order they are received, and artists requesting access must be patient through prolonged delays. Because these platforms' inboxes aren’t designed to sort messages easily, any artist who follows up on a request gets bumped to the back of the line—as their message bounces to the top of the inbox and Zhao's team, largely volunteers, continues approving requests from the bottom up.

"This is obviously a problem," Zhao wrote on X while discouraging artists from sending any follow-ups unless they've already gotten an invite. "We might have to change the way we do invites and rethink the future of WebGlaze to keep it sustainable enough to support a large and growing user base."

Glaze interest is likely also spiking due to word of mouth. Reid Southen, a freelance concept artist for major movies, is advocating for all artists to use Glaze. Southen told Ars that WebGlaze is especially "nice" because it's "available for free for people who don't have the GPU power to run the program on their home machine."

"I would highly recommend artists use Glaze to protect their images," Southen told Ars. "There aren't many viable ways right now for artists to protect themselves from unauthorized scraping and training off their images and still keep their work online. Glaze is a great option, especially because it works on the pixel level, and the image can be uploaded anywhere."

But just as Glaze's userbase is spiking, a bigger priority for The Glaze Project has emerged: protecting users from attacks that disable Glaze's protections, including attack methods exposed in June by security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze's protections could be "easily bypassed, leaving artists vulnerable to style mimicry."

Very quickly after the attack methods were exposed, Zhao's team responded by releasing an update that Zhao told Ars "doesn't completely address" the attack but makes it "much harder."

Tension then escalated after the Zurich team claimed that The Glaze Project's solution "missed the mark" and gave Glaze users a "false sense of security."

Another researcher on the Zurich team, Robert Hönig, told Ars that Glaze's protections are "not that bad," but an "art thief has all the time in the world to wait for some attack that will eventually break it," which he said puts any artist posting work online at a "disadvantage." In a blog post, Carlini wrote that the Glaze Project has a "noble goal," but "the damage has already been done for everyone who published their images with the first version of the defense," because once an artwork has been posted online, that version will inevitably remain available in an archive somewhere.

On the Glaze about page, Zhao's team makes clear that "Glaze is not a permanent solution against AI mimicry," as it relies on techniques that can always "be overcome by a future algorithm" and "possibly" render "previously protected art vulnerable."

Zhao told Ars that the impact of Hönig's team's attack appeared limited because it mostly targeted a prior version of Glaze, although Hönig told Ars that the newest version doesn't protect against every known robust mimicry attack detailed in his team's paper.

Zhao's team has since confirmed that updates will be posted to their website as they conduct further tests reimplementing Hönig's team's attack, with the expectation that any findings will be used to further strengthen Glaze protections against mimicry.

While both sides agree that Glaze's most recent update (v2.1) offers some protection for artists, they fundamentally disagree over how best to protect artists from looming threats of AI style mimicry. The debate has spilled onto social media, with one side arguing that artists urgently need tools like Glaze until more legal protections exist and the other insisting that these uncertain times call for artists to stop posting any work online if they don't want it copied by tomorrow's best image generator.

How Glaze protects artists

For artists who have no choice but to continue promoting work online, tools like Glaze can feel indispensable.

Recent Statista data showed that online art sales in 2023 reached nearly $12 billion, roughly double what artists made selling art online in 2019. Nearly a fifth of all art sales globally happened online last year, Statista reported, and competition online will likely only increase as big-budget tech companies heavily promote any eye-popping leaps in the quality of AI image generators' outputs.

Tools that help prevent style mimicry—like Glaze, Mist, and Anti-DreamBooth—give artists a way to wall off their works from AI without losing visibility online.

Glaze works by making small changes to images that distort what the AI sees—essentially tricking the AI models into seeing something "quite different" from what the artist created, Glaze's about page says, which helps prevent tools from copying artists' unique styles.

"At a high level, Glaze works by understanding the AI models that are training on human art and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes but appears to AI models like a dramatically different art style," The Glaze Project webpage explains.

The Glaze Project also created Nightshade, which "can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives" by transforming images into "poison" samples that scramble AI models. Instead of training on images without an artist's consent, AI models "learn unpredictable behaviors that deviate from expected norms." An example The Glaze Project gives is a tool fielding "a prompt that asks for an image of a cow flying in space" that might instead generate "an image of a handbag floating in space."
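
As a rough illustration of that poisoning idea (Nightshade's actual optimization is more sophisticated and is not reproduced here), the sketch below assembles hypothetical poison training pairs: each image still looks like a cow to humans and keeps its truthful caption, but its model-visible features have been shifted toward an anchor concept like a handbag. Here, `shift_features_toward` is a hypothetical stand-in for an optimization like the cloaking sketch above:

```python
# Hypothetical sketch of Nightshade-style poison pairs, not the project's
# actual method. The caption stays truthful; only the pixels are poisoned.
from typing import Callable, List, Tuple
import torch

def build_poison_set(
    cow_images: List[torch.Tensor],
    handbag_anchor: torch.Tensor,
    shift_features_toward: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],
) -> List[Tuple[torch.Tensor, str]]:
    # A model trained on enough of these pairs may learn to associate the
    # prompt "cow" with handbag-like features, as in the project's example.
    return [(shift_features_toward(img, handbag_anchor), "a photo of a cow")
            for img in cow_images]
```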

Zhao told Ars that demand for the tools is spiking globally, with users "in just about every country from South Africa to Indonesia to Norway" and more.

These surges often occur after artists have "a negative reaction to some of the new policy announcements by tech companies," Zhao said. After Meta updated Instagram's policy, for example, there was an exodus of artists to Cara, a social media and portfolio platform where artists can apply Glaze when uploading works online.

In addition to using Cara, Zhao's team recommends that artists use Glaze and Nightshade together. In the future, he hopes to integrate the tools so that they can be applied through a single process. But integrating the tools has proven more challenging than anticipated, Zhao told Ars, since the tools somewhat step on each other's toes; both want to use the same pixels to "accomplish their slightly different goals."

With demand for the tools spiking and other competing research priorities to attend to—including studying how easy or hard it is to identify whether images are AI-generated today—Zhao's lab is currently overextended, and integrating the tools is a low priority. While his team works through Glaze invite requests and runs more tests on the most recent attack, they're also trying to figure out how to extend protections to videos.

For Zhao, the priority remains protecting as many artists as possible right now. In addition to responding to the attack, the most recent Glaze update includes a version for Intel Macs, expanding protections to more systems after Zhao said that "a bunch" of "unhappy" Mac users complained that they didn't have access to Glaze 2.0.

"There's always some sort of intermixing of priorities," Zhao told Ars. "There are certain things on the tool side—for example, like this attack—that we have to always manage because it's sort of the promise we have made [to Glaze users]. But in addition to that, we also want to add new tools and add new protective measures to try to change the landscape of how we are dealing with AI and unauthorized training."

Southen told Ars that he has been impressed by the Glaze team's improvements to its tools.

"The very nature of machine learning and adversarial development means that no solution is likely to hold forever, which is why it's great that the Glaze team is on top of current developments and always testing and tuning things to better protect artists' work as we push for things like legislation, regulation, and, of course, litigation," Southen said.

How does the Glaze attack work?

Before Hönig's team published their attack, they alerted Zhao's team to their findings, which provided an opportunity to study the attack and update Glaze. On his blog, Carlini explained that because images glazed with earlier versions of the tool could not be retroactively patched, his team decided not to wait for the Glaze update before posting details on how to execute the attack, reasoning that it was "strictly better to publish the attack" on Glaze "as early as possible" to warn artists of the potential vulnerability.

Hönig told Ars that breaking Glaze was "simple." His team found that "low-effort and 'off-the-shelf' techniques"—such as image upscaling, "using a different finetuning script" when training AI on new data, or "adding Gaussian noise to the images before training"—"are sufficient to create robust mimicry methods that significantly degrade existing protections."
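
For illustration, here is a minimal sketch of the noise-and-resampling idea described above, which tends to smooth away the high-frequency perturbations that tools like Glaze rely on. The parameter values are illustrative guesses, not the researchers' settings:

```python
# Hypothetical sketch of a "low-effort" purification step, not the Zurich
# team's exact code: Gaussian noise plus resampling removes much of the
# protective perturbation before an AI model is fine-tuned on the image.
import torch
import torchvision.transforms.functional as TF

def purify(x: torch.Tensor, sigma: float = 0.05, scale: int = 2) -> torch.Tensor:
    """x: image tensor in [0, 1] with shape (C, H, W)."""
    noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)  # add Gaussian noise
    h, w = x.shape[-2:]
    up = TF.resize(noisy, [h * scale, w * scale], antialias=True)  # upscale
    return TF.resize(up, [h, w], antialias=True)  # resample back down
```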

Sometimes, these attack techniques must be combined, but Hönig's team warned that a motivated, well-resourced art forger might try a variety of methods to break protections like Glaze. Hönig said that thieves could also just download glazed art and wait for new techniques to come along, then quietly break protections while leaving no way for the artist to intervene, even if an attack is widely known. This is why his team discourages uploading any art you want protected online.

Ultimately, Hönig's team's attack works by simply removing the adversarial noise that Glaze adds to images, making it once again possible to train an AI model on the art. They described four methods of attack that they claim worked to remove mimicry protections provided by popular tools, including Glaze, Mist, and Anti-DreamBooth. Three were considered "more accessible" because they don't require technical expertise. The fourth was more complex, leveraging algorithms to detect protections and purify the image so that AI can train on it.

This wasn't the first time Glaze was attacked, but it struck some artists as the most concerning, as Hönig's team apparently succeeded with tactics previously proven ineffective at disabling mimicry defenses.

The Glaze team responded by reimplementing the attack with their own code, distinct from Hönig's team's, then updating Glaze to be more resistant to the attack as they understood it from that implementation. But Hönig told Ars that his team still gets different results using their code, finding Glaze to be only moderately resistant to attacks targeting different styles. Carlini wrote that after testing Glaze 2.1 against "our own denoiser implementation," his team found that "most of the claims made in the Glaze update don’t hold at all," with the attack remaining "effective" against some styles, such as cartoon art.

Perhaps more troubling to Carlini, however, was that the Glaze Project only tested the strongest attack method documented in his team's paper, seemingly not addressing other techniques that could leave artists vulnerable.

"In fact, we show that Glaze can be bypassed to various extent by a multitude of methods, including by doing nothing at all," Carlini wrote.

According to Carlini, his team's key finding is that "simply using a different fine-tuning script than the Glaze authors already weakens Glaze’s protections significantly." After pushback, the Glaze team decided to run more tests reimplementing the attack using Carlini's team's code. Zhao confirmed that Glaze's website will be updated to reflect the results of those tests.

In the meantime, Carlini concluded, "Glaze likely provides some form of protection, in the sense that by using it, artists are probably not worse-off than by not using it."

"But such 'better than nothing' security is a very low bar," Carlini wrote. "This could easily mislead artists into a false sense of security and deter them from seeking alternative forms of protection, e.g., the use of other (also imperfect) tools such as watermarks, or private releases of new art styles to trusted customers."

Debating how to best protect artists from AI

The Glaze Project has talked to a wide range of artists, including those "whose styles are intentionally copied," who not only "see loss in commissions and basic income" but suffer when "low quality synthetic copies scattered online dilute their brand and reputation," their website said.

Zhao told Ars that tools like Glaze and Nightshade provide a way for artists to fight the power imbalance between them and well-funded AI companies accused of stealing and copying their works. His team considers Glaze to be the "strongest tool for artists to protect against style mimicry," Glaze's website said, and to keep it that way, he promises to "work to improve its robustness, updating it as necessary to protect it against new attacks."

Part of preserving artist protections, The Glaze Project site explained, is protecting Glaze's code; the team decided not to open-source it to "raise the bar for adaptive attacks." Zhao apparently declined to share the code with Carlini's team, explaining that "right now, there are quite literally many thousands of human artists globally who are dealing with ramifications of generative AI’s disruption to the industry, their livelihood, and their mental well-being… IMO, literally everything else takes a back seat compared to the protection of these artists.”

However, Carlini's team contends that The Glaze Project declining to share the code with security researchers makes artists more vulnerable because artists can then be blindsided by or even oblivious to evolving attacks that the Glaze team might not even be aware of.

"We don’t disagree in the slightest that we should be trying to help artists," Carlini and a co-author wrote in a blog post following the Glaze team's response. "But let’s be clear: the best way to help artists is not to pitch them a tool while refusing security analysis of that tool. If there are flaws in the approach, then we should discover them early so they can be fixed. And that’s easiest to do by openly studying the tool that’s being used."

With battle lines drawn, the tense debate seemingly got personal when Zhao, in a Discord chat screenshotted by Carlini's team, claimed that "Carlini doesn't give a shit" about potential harms to artists from publishing his team's attack. Carlini's team responded by calling Glaze's response to the attack "misleading."

The security researchers have demanded that Glaze update its post detailing vulnerabilities to artists, and The Glaze Project has promised that updates will follow testing being conducted while the team juggles requests for invites and ongoing research priorities.

Artists still motivated to support Glaze

Yet for some artists waiting for access to Glaze, the question isn't whether the tool is worth the wait; it's whether The Glaze Project can sustain its work on limited funding. Zhao told Ars that as requests for invites spike, his team has "been getting a lot of unsolicited emails about wanting to donate to Glaze."

The Glaze Project is funded by research grants and donations from various organizations, including the National Science Foundation, DARPA, Amazon AWS, and C3.ai. The team's goal is not to profit off the tools but to "make a strong impact" defending artists who "generally barely make a living" against looming generative AI threats potentially capable of "destroying the human artist community."

"We are not interested in profit," the project's website says. "There is no business model, no subscription, no hidden fees, no startup. We made Glaze free for anyone to use."

While a gift link will soon be created, Zhao insisted that artists should not direct limited funds to researchers who can always write grants or seek funding from better-resourced donors. Zhao said that he has been asked by so many artists where they can donate to support the project that he has come up with a standard reply.

"If you're an artist, you should keep your money," Zhao said.

Southen, who recently gave a talk at the Conference on Computer Vision and Pattern Recognition "about how machine learning researchers and developers can better interface with artists and respect our work and needs," hopes to see more tools like Glaze introduced, as well as "more ethical" AI tools that "artists would actually be happy to use that respect people's property and process."

"I think there are a lot of useful applications for AI in art that don't need to be generative in nature and don't have to violate people's rights or displace them, and it would be great to see developers lean in to helping and protecting artists rather than displacing and devaluing us," Southen told Ars.
