Artificial intelligence will neither solve all our problems nor likely destroy the world, but it could help make our lives better if it’s both transparent enough for everyone to understand and available for everyone to use in ways that augment us and advance our goals — not for corporations or government to extract something from us and exert power over us. Imagine a future, for example, in which AI is a readily available tool for helping people communicate across language barriers, or for helping vision- or hearing-impaired people connect better with the world.
(You can also find this episode on the Internet Archive and on YouTube.)
This is the future that Kit Walsh, EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects, and EFF Senior Staff Technologist Jacob Hoffman-Andrews, are working to bring about. They join EFF’s Cindy Cohn and Jason Kelley to discuss how AI shouldn’t be a tool to cash in, or to classify people for favor or disfavor, but instead to engage with technology and information in ways that advance us all.
In this episode you’ll learn about:
- The dangers in using AI to determine who law enforcement investigates, who gets housing or mortgages, who gets jobs, and other decisions that affect people’s lives and freedoms.
- How "moral crumple zones” in technological systems can divert responsibility and accountability from those deploying the tech.
- Why transparency and openness of AI systems — including training AI on consensually obtained, publicly visible data — is so important to ensure systems are developed without bias and to everyone’s benefit.
- Why “watermarking” probably isn’t a solution to AI-generated disinformation.
Kit Walsh is a senior staff attorney at EFF, serving as Director of Artificial Intelligence & Access to Knowledge Legal Projects. She has worked for years on issues of free speech, net neutrality, copyright, coders' rights, and other issues that relate to freedom of expression and access to knowledge, supporting the rights of political protesters, journalists, remix artists, and technologists to agitate for social change and to express themselves through their stories and ideas. Before joining EFF, Kit led the civil liberties and patent practice areas at the Cyberlaw Clinic, part of Harvard University's Berkman Klein Center for Internet and Society; earlier, she worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country. Kit holds a J.D. from Harvard Law School and a B.S. in neuroscience from MIT, where she studied brain-computer interfaces and designed cyborgs and artificial bacteria.
Jacob Hoffman-Andrews is a senior staff technologist at EFF, where he is lead developer on Let's Encrypt, the free and automated Certificate Authority; he also works on EFF's Encrypt the Web initiative and helps maintain the HTTPS Everywhere browser extension. Before working at EFF, Jacob was on Twitter's anti-spam and security teams. On the security team, he implemented HTTPS-by-default with forward secrecy, key pinning, HSTS, and CSP; on the anti-spam team, he deployed new machine-learned models to detect and block spam in real-time. Earlier, he worked on Google’s maps, transit, and shopping teams.
Resources:
- Human Rights Data Analysis Group: “Predictive Policing Reinforces Police Bias” by Kristian Lum (Oct. 10, 2016)
- Engaging Science, Technology, and Society: “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction” by Madeleine Clare Elish (Mar. 23, 2019)
- EFF: “How We Think About Copyright and AI Art” by Kit Walsh (April 3, 2023)
- Stable Diffusion Litigation
- EFF: “A Broad Federal Publicity Right Is a Risky Answer to Generative AI Problems” by Corynne McSherry (July 18, 2023)
- EFF: “AI Watermarking Won't Curb Disinformation” by Jacob Hoffman-Andrews (Jan. 5, 2024)
- EFF: “The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies” by Matthew Guariglia (Mar. 20, 2024)
What do you think of “How to Fix the Internet?” Share your feedback here.
Transcript
KIT WALSH
Contrary to some marketing claims, AI is not the solution to all of our problems. So I'm just going to talk about how AI exists in Kitopia. And in particular, the technology is available for everyone to understand. It is available for everyone to use in ways that advance their own values, rather than hard-coded to advance the values of the people who are providing it to you and trying to extract something from you, as opposed to embodying the values of a powerful organization, public or private, that wants to exert more power over you by virtue of automating its decisions.
So it can make more decisions classifying people, figuring out whom to favor, whom to disfavor. I'm defining Kitopia a little bit in terms of what it's not, but to get back to the positive vision, you have this intellectual commons of research, development, and data. We haven't really touched on privacy yet, but that data is sourced in a consensual way. And essentially, one of the things that I would love to have is a little AI muse that actually does embody my values and amplifies my ability to engage with technology and information on the Internet in a way that doesn't feel icky or oppressive, and I don't have that in the world yet.
CINDY COHN
That’s Kit Walsh, describing an ideal world she calls “Kitopia”. Kit is a senior staff attorney at the Electronic Frontier Foundation. She works on free speech, net neutrality, copyright, and many other issues related to freedom of expression and access to knowledge. In fact, her full title is EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects. So, where is Kitopia, you might ask? Well, we can’t get there from here - yet. Because it doesn’t exist. Yet. But here at EFF we like to imagine what a better online world would look like, and how we will get there. And today we’re joined by Kit and by EFF’s Senior Staff Technologist Jacob Hoffman-Andrews. In addition to working on AI with us, Jacob is a lead developer on Let's Encrypt, and his work on that project has been instrumental in helping us encrypt the entire web. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.
JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.
JACOB HOFFMAN-ANDREWS
I think in my ideal world people are more able to communicate with each other across language barriers, you know, automatic translation, transcription of the world for people who are blind, or for deaf people to be able to communicate more clearly with hearing people. I think there's a lot of ways in which AI can augment our weak human bodies in ways that are beneficial for people, and not simply increase the control that their governments and their employers have over their lives and their bodies.
JASON KELLEY
We’re talking to Kit and Jacob both, because this is such a big topic that we really need to come at it from multiple angles to make sense of it and to figure out the answer to the really important question, which is: how can AI actually make the world we live in a better place?
CINDY COHN
So while many other people have been trying to figure out how to cash in on AI, Kit and Jacob have been looking at AI from a public interest and civil liberties perspective on behalf of EFF. And they’ve also been giving a lot of thought to what an ideal AI world looks like.
JASON KELLEY
AI can be more than just another tool that’s controlled by big tech. It really does have the potential to improve lives in a tangible way. And that’s what this discussion is all about. So we’ll start by trying to wade through the hype, and really nail down what AI actually is and how it can and is affecting our daily lives.
KIT WALSH
The confusion is understandable, because AI is being used as a marketing term quite a bit, rather than as an abstract or scientific concept.
And the way that I think about AI, particularly in the decision-making context, which is one of our top priorities in terms of where we think that AI is impacting people's rights, is that first I think about what kind of technology we're really talking about, because sometimes you have a tool that actually no one is calling AI, but it is nonetheless an example of algorithmic decision-making.
That also sounds very fancy. This can be a fancy computer program to make decisions, or it can be a buggy Excel spreadsheet that litigators discover is actually just omitting important factors when it's used to decide whether people get health care or not in a state health care system.
CINDY COHN
You're not making those up, Kit. These are real examples.
KIT WALSH
That’s not a hypothetical. Unfortunately, it’s not a hypothetical, and the people who litigated that case lost some clients, because when you're talking about not getting health care, that can be life or death. And machine learning can either be a system where humans code a reinforcement mechanism, so you have sort of random changes happening to an algorithm, and it gets rewarded when it succeeds according to your measure of success, and rejected otherwise.
Or it can be training on vast amounts of data, and that's really what we've seen a huge surge in over the past few years. And that training can either be what's called unsupervised, where you just ask your system that you've created to identify what the patterns are in a bunch of raw data, maybe raw images, or it can be supervised, in the sense that humans, usually low-paid humans, are coding their views on what's reflected in the data.
So I think that this is a picture of a cow, or I think that this picture is adult and racy. So some of these are more objective than others, and then you train your computer system to reproduce those kinds of classifications when it makes new things that people ask for with those keywords, or when it's asked to classify a new thing that it hasn't seen before in its training data.
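[Editor's note: To make the supervised-training idea Kit describes concrete, here is a minimal, purely illustrative Python sketch. The features, labels, and training rule are invented toy stand-ins, not real image data or any production system; the point is only that the model learns to reproduce the classifications humans gave it.]

```python
# Toy labeled data: each example is ([feature1, feature2], human-assigned label).
# The labels encode a human judgment, e.g. 1 = "this is a cow", 0 = "not a cow".
labeled_examples = [
    ([0.9, 0.1], 1),
    ([0.8, 0.2], 1),
    ([0.2, 0.9], 0),
    ([0.1, 0.8], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Training: nudge the weights whenever the model disagrees with the human label,
# so over time it reproduces the labelers' classifications.
for _ in range(100):
    for features, label in labeled_examples:
        error = label - predict(features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

# The trained model now classifies a new, unseen example the way the labelers would.
print(predict([0.85, 0.15]))  # prints 1 ("cow"), matching the human-labeled pattern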
So that's really a very high level oversimplification of the technological distinctions. And then because we're talking about decision-making, it's really important who is using this tool.
Is this the government, which has all of the power of the state behind it and which administers a whole lot of necessary public benefits, that is using these tools to decide who is worthy and who is not to obtain those benefits? Or who should be investigated? What neighborhoods should be investigated?
We'll talk a little bit more about the use in law enforcement later on, but it's also being used quite a bit in the private sector to determine who's allowed to get housing, whether to employ someone, whether to give people mortgages, and that's something that impacts people's freedoms as well.
CINDY COHN
So Jacob, two questions I use to distill down AI decision-making are: who is the decision-making supposed to be serving, and who bears the consequences if it gets it wrong? And if we think of those two framing questions, I think we get at a lot of the issues from a civil liberties perspective. That sound right to you?
JACOB HOFFMAN-ANDREWS
Yeah, and, you know, talking about who bears the consequences when an AI or technological system gets it wrong, sometimes it's the person that system is acting upon, the person about whom it's being decided whether they get healthcare or not, and sometimes it can be the operator.
You know, it's popular to have kind of a human in the loop: like, oh, we have this AI decision-making system that's maybe not fully baked, so there's a human who makes the final call. The AI just advises the human. And there's a great paper by Madeleine Clare Elish describing this as a kind of "moral crumple zone." So, you may be familiar: modern cars are designed so that in a collision, certain parts of the car will collapse to absorb the force of the impact.
So the car is destroyed but the human is preserved. And in some human-in-the-loop decision-making systems, often involving AI, it's kind of the reverse. The human becomes the crumple zone for when the machine screws up. You know, you were supposed to catch the machine screwup. It didn't screw up in over a thousand iterations, and then the one time it did, well, that was your job to catch it.
And, you know, a crumple zone in a car is great. A moral crumple zone in a technological system is a really bad idea. And it takes away responsibility from the deployers of that system, who ultimately need to bear the responsibility when their system harms people.
CINDY COHN
So I wanna ask you, what would it look like if we got it right? I mean, I think we do want to have some of these technologies available to help people make decisions.
They can find patterns in giant data sets, probably better than humans can most of the time. And we'd like to be able to do that. So since we're fixing the internet now, I want to stop you for a second and ask you: how would we fix the moral crumple zone problem, or what are the things we'd think about to do that?
JACOB HOFFMAN-ANDREWS
You know, I think for the specific problem of, you know, holding say a safety driver or like a human decision-maker responsible for when the AI system they're supervising screws up, I think ultimately what we want is that the responsibility can be applied all the way up the chain to the folks who decided that that system should be in use. They need to be responsible for making sure it's actually a safe, fair system that is reliable and suited for purpose.
And you know, when a system is shown to bring harm, for instance, you know, a self-driving car that crashes into pedestrians and kills them, you know, that needs to be pulled out of operation and either fixed or discontinued.
CINDY COHN
Yeah, it made me think a little bit about, you know, kind of a change that was made, I think, by Toyota years ago, where they let the people on the front line stop the line, right? Um, I think one thing that comes out of that is you need to let the people who are in the loop have the power to stop the system, and I think all too often we don't.
We devolve the responsibility down to that person, who's kind of the last fair chance to catch something, but we don't give them any power to raise concerns when they see problems, much less the people impacted by the decisions.
KIT WALSH
And that’s also not an accident; it's part of the appeal of these AI systems. It's true that you can't hold a machine accountable, really, but that doesn't deter all of the potential markets for the AI. In fact, it's appealing for some regulators, some private entities, to be able to point to the supposed wisdom and impartiality of an algorithm, when, if you understand where it comes from, the fact that it's just repeating the patterns or biases that are reflected in how you trained it, you see it's actually just sort of automated discrimination in many cases. And that can work in several ways.
In one instance, it's intentionally adopted in order to avoid the possibility of being held liable. We've heard from a lot of labor rights lawyers that when discriminatory decisions are made, they're having a lot more trouble proving it now because people can point to an algorithm as the source of the decision.
And if you were able to get insight into how that algorithm was developed, then maybe you could make your case. But it's a black box. A lot of these things that are being used are not publicly vetted or understood.
And it's especially pernicious in the context of the government making decisions about you, because we have centuries of law protecting your due process rights to understand and challenge the ways that the government makes determinations about policy and about your specific instance.
And when those decisions and those decision-making processes are hidden inside an algorithm, the old tools aren't always effective at protecting your due process rights and the public's participation in how rules are made.
JASON KELLEY
It sounds like in your better future, Kit, there's a lot more transparency into these algorithms, into this black box that's sort of hiding them from us. Is that part of what you see as something we need to improve to get things right?
KIT WALSH
Absolutely. Transparency and openness of AI systems is really important to make sure that as it develops, it develops to the benefit of everyone. It's developed in plain sight. It's developed in collaboration with communities and a wider range of people who are interested in and affected by the outcomes, particularly in the government context, though I'll speak to the private context as well. When the government passes a new law, that's not done in secret. When a regulator adopts a new rule, that's also not done in secret. Sure, there are exceptions.
CINDY COHN
Right, but that’s illegal.
JASON KELLEY
Yeah, that's the idea. Right. You want to get away from that also.
KIT WALSH
Yeah, if we can live in Kitopia for a moment, where these things are done more justly: within the framework of government rulemaking, if that's occurring in a way that affects people, then there is participation. There's meaningful participation. There's meaningful accountability. And in order to meaningfully have public participation, you have to have transparency.
People have to understand what the new rule is that's going to come into force. And because of a lot of the hype and mystification around these technologies, they're being adopted under what's called a procurement process, which is the process you use to buy a printer.
It's the process you use to buy an appliance, not the process you use to make policy. But these things embody policy. They are the rule. Sometimes when the legislature changes the law, the tool doesn't get updated and it just keeps implementing the old version. And that means that the legislature's will is being overridden by the designers of the tool.
JASON KELLEY
You mentioned predictive policing, I think, earlier, and I wonder if we could talk about that for just a second because it's one way where I think we at EFF have been thinking a lot about how this kind of algorithmic decision-making can just obviously go wrong, and maybe even should never be used in the first place.
What we've seen is that it, you know, very clearly reproduces the problems with policing, right? But how does AI or this sort of predictive nature of the algorithmic decision-making for policing exacerbate these problems? Why is it so dangerous, I guess, is the real question.
KIT WALSH
So one of the fundamental features of AI is that it looks at what you tell it to look at. It looks at what data you offer it, and then it tries to reproduce the patterns that are in it. Um, in the case of policing, as well as related issues around decisions for pretrial release and parole determinations, you are feeding it data about how the police have treated people, because that's what you have data about.
And the police treat people in harmful, racist, biased, discriminatory, and deadly ways that it's really important for us to change, not to reify into a machine that is going to seem impartial and seem like it creates a veneer of justification for those same practices to continue. And sometimes this happens because the machine is making an ultimate decision, but that's not usually what's happening.
Usually the machine is making a recommendation. And one of the reasons we don't think that having a human in the loop is really a cure for the discriminatory harms is that humans are more likely to follow the AI if it gives them cover for a biased decision that they're going to make. And relatedly, some humans, a lot of people, develop trust in the machine and wind up following it quite a bit.
So in these contexts, if you really wanted to make predictions about where a crime was going to occur, well, it would send you to Wall Street. And that's not the result that law enforcement wants.
But, first of all, you would actually need data about where crimes occur, and generally people who don't get caught by the police are not filling out surveys to say, here are the crimes I got away with so that you can program a tool that's going to do better at sort of reflecting some kind of reality that you're trying to capture. You only know how the system has treated people so far and all that you can do with AI technology is reinforce that. So it's really not an appropriate problem to try to solve with this technology.
CINDY COHN
Yeah, our friends at Human Rights Data Analysis Group who did some of this work said, you know, we call it predictive policing, but it's really predicting the police because we're using what the police already do to train up a model, and of course it's not going to fix the problems with how police have been acting in the past. Sorry to interrupt. Go on.
KIT WALSH
No, to build on that, by definition, it thinks that the past behavior is ideal, and that's what it should aim for. So, it's not a solution to any kind of problem where you're trying to change a broken system.
CINDY COHN
And in fact, what they found in the research was that the AI system will not only replicate what the police do, it will double down on the bias because it's seeing a small trend and it will increase the trend. And I don't remember the numbers, but it's pretty significant. So it's not just that the AI system will replicate what the police do. What they found in looking at these systems is that the AI systems increase the bias in the underlying data.
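[Editor's note: As a purely illustrative aside, the feedback loop Cindy and Kit describe can be sketched with a toy simulation in Python. All numbers here are invented, not drawn from the HRDAG research: two neighborhoods have identical true crime rates, but one starts with more recorded incidents because it was policed more heavily, and the model keeps sending the extra patrol wherever the records are thickest.]

```python
# Toy model of a "predict the police" feedback loop. Both neighborhoods have the
# same underlying crime rate, but A begins over-represented in the records.
TRUE_CRIME_RATE = 1.0                     # identical in A and B
recorded = {"A": 6.0, "B": 4.0}           # historical, biased incident records

for day in range(1, 8):
    hotspot = max(recorded, key=recorded.get)
    patrols = {"A": 1, "B": 1}
    patrols[hotspot] += 1                 # the model directs the extra patrol
    for hood in recorded:
        # incidents only enter the records where officers are present to record them
        recorded[hood] += TRUE_CRIME_RATE * patrols[hood]
    share_a = recorded["A"] / sum(recorded.values())
    print(f"day {day}: {share_a:.0%} of recorded incidents are in neighborhood A")
```

Running it shows neighborhood A's share of recorded incidents creeping upward day after day, even though nothing about the underlying crime ever changed; the tool is only amplifying the skew in its own training data.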
It's really important that we continue to emphasize the ways in which AI and machine learning are already being used, often in ways that people may not see but that dramatically impact them. But right now, what's front of mind for a lot of people is generative AI. And I think many, many more people have started playing around with that. And so I want to start with how we think about generative AI and the issues it brings. And Jacob, I know you have some thoughts about that.
JACOB HOFFMAN-ANDREWS
Yeah. To call back to, at the beginning you asked about, how do we define AI? I think one of the really interesting things in the field is that it's changed so much over time. And, you know, when computers first became broadly available, you know, people have been thinking for a very long time, what would it mean for a computer to be intelligent? And for a while we thought, wow, you know, if a computer could play chess and beat a human, we would say that's an intelligent computer.
Um, if a computer could recognize, uh, what's in an image, is this an image of a cat or a cow - that would be intelligence. And of course now they can, and we don't consider it intelligence anymore. And you know, now we might say if a computer could write a term paper, that's intelligence and I don't think we're there yet, but the development of chatbots does make a lot of people feel like we're closer to intelligence because you can have a back and forth and you can ask questions and receive answers.
And some of those answers will be confabulations, but some percentage of the time they'll be right. And it starts to feel like something you're interacting with. And I think, rightly so, people are worried that this will destroy jobs for writers and for artists. And to an earlier question about, you know, what does it look like if we get it right, I think, you know, the future we want is one where people can write beautiful things and create beautiful things and, you know, still make a great living at it and be fulfilled and safe in their daily needs and be recognized for that. And I think that's one of the big challenges we're facing with generative AI.
JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. How to Fix the Internet is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our discussion with Kit and Jacob about AI: the good, the bad, and what could be better.
CINDY COHN
There’s been a lot of focus on the dark side of generative AI, and the idea of using copyright to address those problems has emerged. We have worries about that as a way to sort out the good uses of AI from the bad ones, right, Kit?
KIT WALSH
Absolutely. We have had a lot of experience with copyright being used as a tool of censorship, not only against individual journalists and artists and researchers, but also against entire mediums for expression, against libraries, against the existence of online platforms where people are able to connect. And copyright not only lasts essentially forever, it comes with draconian penalties that are essentially a financial death sentence for the typical person in the United States. So in the context of generative AI, there is a real issue with the potential to displace creative labor. And it's a lot like the issues of other forms of automation that displace other forms of labor.
And it's not always the case that an equal number of new jobs are created, or that those new jobs are available to the people who have been displaced. And that's a pretty big social problem that we have. In Kitopia, we have AI and it's used so that there's less necessary labor to achieve a higher standard of living for people, and we should be able to be excited about automation of labor tasks that aren't intrinsically rewarding.
One of the reasons that we're not is because the fruits of that increased production flow to the people who own the AI, not to the people who were doing that labor, who now have to find another way to trade their labor for money or else become homeless and starve and die, and that's cruel.
It is the world that we're living in so it's really understandable to me that an artist is going to want to reach for copyright, which has the potential of big financial damages against someone who infringes, and is the way that we've thought about monetization of artistic works. I think that way of thinking about it is detrimental, but I also think it's really understandable.
One of the reasons why the particular legal theories in the lawsuits against generative AI technologies are concerning is because they wind up stretching existing doctrines of copyright law. So in particular, the very first case against Stable Diffusion argued that you were creating an infringing derivative work when you trained your model to recognize the patterns in five billion images.
It's a derivative work of each and every one of them. And that can only succeed as a legal theory if you throw out the existing understanding of what a derivative work is: that it has to be substantially similar to the thing it's infringing. And that limitation is incredibly important for human creativity.
The elements of my work that you might recognize from my artistic influences in the ordinary course of artistic borrowing and inspiration are protected. I'm able to make my art without people coming after me because I like to draw eyes the same way as my inspiration or so on, because ultimately the work is not substantially similar.
And if we got rid of that protection, it would be really bad for everybody.
But at the same time, you can see how someone might say, why should I pay a commission to an artist if I can get something in the same style? To which I would say, try it. It's not going to be what you want because art is not about replicating patterns that are found in a bunch of training data.
It can be a substitute for stock photography or other forms of art that are on the lower end of how much creativity is going into the expression, but for the higher end, I think that part of the market is safe. So I think all artists are potentially impacted by this. I'm not saying only bad artists have to care, but there is this real impact.
Their financial situation is precarious already, and they deserve to make a living, and this is a band-aid, because we don't have a better solution in place to support people and let them create in a way that is in accord with their values and their goals. We really don't have that either in the situation where people are primarily making their income doing art that a corporation wants them to make to maximize its profits.
No artist wants to create assets for content. Artists want to express and create new beauty and new meaning and the system that we have doesn't achieve that. We can certainly envision better ones but in the meantime, the best tool that artists have is banding together to negotiate with collective power, and it's really not a good enough tool at this point.
But I also think there's a lot of room to ethically use generative AI. If you're working with an artist and you're trying to communicate your vision for something visual, maybe you're going to use an AI tool in order to make something that has some of the elements you're looking for, and then say: this is what I want to pay you to draw. I want this kind of pose, right? But more unicorns.
JASON KELLEY
And I think while we're talking about these sorts of seemingly good but ultimately dangerous solutions for the different problems that we're thinking about now more than ever because of generative AI, I wanted to talk with Jacob a little bit about watermarking. And this is meant to solve the problem of knowing what is and is not generated by AI.
And people are very excited about this idea that through some sort of, well, actually, you just explain it, Jacob, 'cause you are the technologist. What is watermarking? Is this a good idea? Will this work to help us understand and distinguish between AI-generated things and things that are just made by people?
JACOB HOFFMAN-ANDREWS
Sure. So a very real and closely related risk of generative AI is that it is - it will, and already is - flooding the internet with bullshit. Uh, you know, many of the articles you might read on any given topic, these days the ones that are most findable are often generated by AI.
And so an obvious next step is, well, what if we could recognize the stuff that's written by AI or the images that are generated by AI, because then we could just skip that. You know, I wouldn't read this article cause I know it's written by AI or you can go even a step further, you could say, well, maybe search engines should downrank things that were written by AI or social networks should label it or allow you to opt out of it.
You know, there's a lot of question about, if we could immediately recognize all the AI stuff, what would we do about it? There's a lot of options, but the first question is, can we even recognize it? So right off the bat, you know, when ChatGPT became available to the public, there were people offering ChatGPT detectors. You know, you could look at this content and, you know, you can kind of say, oh, it tends to look like this.
And you can try to write something that detects its output, and the short answer is it doesn't work, and it's actually pretty harmful. A number of students have been harmed because their instructors have run their work through a ChatGPT detector, an AI detector that has incorrectly labeled their work as AI-generated.
There's not a reliable pattern in the output that you can always see. Well, what if the makers of the AI put that pattern there? And, you know, for a minute, let's switch from text based to image based stuff. Jason, have you ever gone to a stock photo site to download a picture of something?
JASON KELLEY
I sadly have.
JACOB HOFFMAN-ANDREWS
Yeah. So you might recognize the images they have there; they want to make sure you pay for the image before you use it. So there's some text written across it in a kind of ghostly white diagonal. It says, this is from, say, shutterstock.com. So that's a form of watermark. If you just went and downloaded that image rather than paying for the cleaned-up version, there's a watermark on it.
So the concept of watermarking for AI provenance is that it would be invisible. It would be kind of mixed into the pixels at such a subtle level that you as a human can't detect it, but a computer program designed to detect that watermark could. So you could imagine the AI might generate a picture and then, in the top left pixel, increase its shade by the smallest amount, and then in the next one, decrease it by the smallest amount, and so on throughout the whole image.
And you can encode a decent amount of data that way, like what system produced it, when, all that information. And actually the EFF has published some interesting research in the past on a similar system in laser printers, where little yellow dots are embedded by most laser printers that you can get, as an anti-counterfeiting measure.
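[Editor's note: As a rough illustration of the invisible-watermark idea Jacob describes, here is a minimal Python sketch of least-significant-bit embedding. This is not the scheme any actual AI vendor uses, and the pixel values and message bits are made up; real watermarks are more elaborate, but the basic principle, and its fragility, are the same.]

```python
def embed(pixels, message_bits):
    """Hide one message bit in the lowest bit of each pixel value (0-255)."""
    return [(p & ~1) | bit for p, bit in zip(pixels, message_bits)]

def detect(pixels, length):
    """Read the hidden bits back out of the lowest bits."""
    return [p & 1 for p in pixels[:length]]

def strip(pixels):
    """Destroy the watermark by zeroing the lowest bits; the image looks the same."""
    return [p & ~1 for p in pixels]

image = [200, 201, 198, 197, 203, 202, 199, 200]  # toy 8-pixel "image"
mark = [1, 0, 1, 1, 0, 0, 1, 0]                   # e.g. a hypothetical "made by model X" tag

watermarked = embed(image, mark)
print(detect(watermarked, len(mark)))         # [1, 0, 1, 1, 0, 0, 1, 0]: watermark found
print(detect(strip(watermarked), len(mark)))  # all zeros: watermark trivially removed
```

Each pixel changes by at most one shade, which is why a human can't see the mark, and also why anyone who can edit the pixels can erase it.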
JASON KELLEY
This is one of our most popular discoveries that comes back every few years, if I remember right, because people are just gobsmacked that they can't see them, but they're there, and that they have this information. It's a really good example of how this works.
CINDY COHN
Yeah, and it's used to make sure that they can trace back to the printer that printed anything on the off chance that what you're printing is fake money.
JACOB HOFFMAN-ANDREWS
Indeed, yeah.
The other thing people really worry about is that AI will make it a lot easier to generate disinformation and then spread it. And of course, if you're generating disinformation, it's useful to strip out the watermark. You would maybe prefer that people don't know it's AI. And so you're not limited to resizing or cropping an image. You can actually, you know, run it through a program. You can see what the shades of all the different pixels are. And you, in theory, probably know what watermarking system is in use. And given that degree of flexibility, it seems very, very likely - and I think past technology has proven this out - that it's not going to be hard to strip out the watermark. And in fact, it's not even going to be hard to develop a program to automatically strip out the watermark.
CINDY COHN
Yep. And you end up in a cat-and-mouse game where the people who you most want to catch, who are doing sophisticated disinformation, say to try to upset elections, are going to be able to either strip out the watermark or fake it. And so you end up where the things that you most want to identify are probably going to trick people. Is that the way you're thinking about it?
JACOB HOFFMAN-ANDREWS
Yeah, that's pretty much what I'm getting at. I wanted to say one more thing on watermarking. I'd like to talk about chainsaw dogs. There's this popular genre of image on Facebook right now of a man and his chainsaw-carved wooden dog, often accompanied by a caption like, look how great my dad is, he carved this beautiful thing.
And these are mostly AI generated and they receive, you know, thousands of likes and clicks and go wildly viral. And you can imagine a weaker form of the disinformation claim of say, ‘Well, okay, maybe state actors will strip out watermarks so they can conduct their disinformation campaigns, but at least adding watermarks to AI images will prevent this proliferation of garbage on the internet.’
People will be able to see, oh, that's a fake. I'm not going to click on it. And I think the problem with that is even people who are just surfing for likes on social media actually love to strip out credits from artists already. You know, cartoonists get their signatures stripped out and in the examples of these chainsaw dogs, you know, there is actually an original.
There's somebody who made a real carving of a dog. It was very skillfully executed. And these are generated using kind of image-to-image AI, where you take an image and you generate an image that has a lot of the same concepts: a guy, a dog, made of wood. And so they're already trying to strip attribution in one way.
And I think likely they would also find a way to strip any watermarking on the images they're generating.
CINDY COHN
So Jacob, we heard earlier about Kit's ideal world. I'd love to hear about the future world that Jacob wants us to live in.
JACOB HOFFMAN-ANDREWS
Yeah. I think the key thing is, you know, that people are safer in their daily lives than they are today. They're not worried about their livelihoods going away. I think this is a recurring theme when most new technology is invented: if it replaces somebody's job, that person's life doesn't get easier. They don't get to keep collecting a paycheck. They just lose their job.
So I think in the ideal future, people have a means to live and to be fulfilled in their lives, to do meaningful work still. And also, in general, human agency is expanded rather than restricted. That's the promise of a lot of technology: that, you know, you can do more in the world, you can achieve the conditions you want in your life.
CINDY COHN
Oh that sounds great. I want to come back to you Kit. We've talked a little about Kitopia, including at the top of the show. Let's talk a little bit more. What else are we missing?
KIT WALSH
So in Kitopia, people are able to use AI if it's a useful part of their artistic expression. They're able to use AI if they need to communicate something visual: when I'm hiring a concept artist, when I am getting a corrective surgery and I want to communicate to the surgeon what I want things to look like.
There are a lot of ways in which words don't communicate as well as images. And not everyone has the skill or the time or interest to go and learn a bunch of photoshop to communicate with their surgeon. I think it would be great if more people were interested and had the leisure and freedom to do visual art.
But in Kitopia, that's something that you have because your basic needs are met. And in part, automation is something that should help us do that more. The ability to automate aspects of labor should wind up benefiting everybody. That's the vision of AI in Kitopia.
CINDY COHN
Nice. Well that's a wonderful place to end. We're all gonna pack our bags and move to Kitopia. And hopefully by the time we get there, it’ll be waiting for us.
You know, Jason, that was such a rich conversation. I'm not sure we need to do a little recap like we usually do. Let's just close it out.
JASON KELLEY
Yeah, you know, that sounds good. I'll take it from here. Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we would love to hear from you. You can visit EFF.org slash podcasts to click on listener feedback and let us know what you think of this or any other episode.
You can also get a transcript or information about this episode and the guests. And while you're there of course, you can become an EFF member, pick up some merch, or just see what's happening in digital rights this or any other week. This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.
In this episode, you heard Kalte Ohren by Alex featuring starfrosch & Jerry Spoon; lost Track by Airtone; Come Inside by Zep Hume; Xena's Kiss/Medea's Kiss by MWIC; Homesick by Siobhan D; and Drops of H2O (The Filtered Water Treatment) by J.Lang. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We’ll see you next time. I’m Jason Kelley.
CINDY COHN
And I’m Cindy Cohn.