What Does Sentience Really Mean?

The fact that AI isn’t alive doesn’t mean it can’t be sentient, the sociologist Jacy Reese Anthis argues.

Illustration by Erik Carter / The Atlantic. Source: Getty.

As of today, artificial intelligence can write a good-enough term paper, diagnose a patient better than many doctors can, ace a standardized test, and create an award-winning piece of digital art. It can mimic the sound of a famous person’s voice so well that the average person cannot distinguish fake from real; generate photographs of events that never happened; and act as an interlocutor so sensitive, so responsive that people find themselves falling in love.

This is only the beginning, AI developers believe. Soon AI systems might become superintelligent. Soon they might develop sentience, even will. If and when that happens, such systems will need rights, argues Jacy Reese Anthis, a sociologist at the University of Chicago, a co-founder of the Sentience Institute, and an expert on how nonhuman creatures experience the world.

We spoke about the risks AI poses to humanity, the risks humanity poses to AI, and the way that humans treat nonhumans. The transcript below was condensed and edited for clarity.


Annie Lowrey: Are AI systems reasoning by themselves at this point?

Jacy Reese Anthis: Computer scientists have this bad tendency to state their intuitions about whether AI is reasoning without first clarifying what they mean by the term.

Reasoning means taking in information. It means breaking the world down into concepts and relationships. And it means taking that information, along with those concepts and abstractions, and reaching a logical conclusion. If I see smoke coming out of a forest, I take that visual input. I add in what I know about smoke and fire. I’m able to deduce, like Sherlock Holmes, that there's a forest fire.

Are AIs reasoning or are they just taking an input and spitting out an output? If I ask a chat system what the capital of Italy is, is it answering Rome because it has the answer stored on a giant table somewhere or because it is breaking the world down and manipulating data in a meaningful way? Those are interesting questions. I think AIs are reasoning, to some extent.

Lowrey: Are these models sentient? What would it mean for them to be sentient?

Anthis: Sentience is used in at least three different ways. Sometimes it’s a term used for sensitive beings—someone who can sense the world, a being with perception and engagement with the environment. Sometimes it gets looped in with consciousness. The American philosopher Thomas Nagel thinks of consciousness as the answer to the question “What is it like to be a being?” It is a felt sense. You perceive the redness of the color red. You experience the angriness of anger. If you’re a bat, you have the experience of echolocation.

We think of sentience as a subset of consciousness, related to consciousness. Consciousness is about having thoughts, an inner monologue; it is about perceiving—so the redness of the color red; and it is about having emotions. Sentience is about having positive and negative experiences. The English philosopher Jeremy Bentham famously said the question about animals is not “Can they reason?” or “Can they talk?” but “Can they suffer?” That is, to me, the essence of sentience.

Lowrey: How would we know if AI was suffering?

Anthis: Some computer scientists think of sentience as this mystical, ineffable notion, like consciousness. But I do think that there are concrete features of it that we can look for. We can know whether AIs are sentient.

One feature is seeking out rewards and avoiding punishment. Another is having a mood. When we experience something negative, we don’t just get onetime feedback. We don’t just say, “That sucks!” and move on. We get in a funk. We get depressed and anxious. We see this form of sentience throughout the animal kingdom. Honeybees like sweet liquids and dislike bitter liquids. If you shake up a jar of honeybees, they get pessimistic: They’re more likely to think ambiguous liquids are bitter. This is not something we see in today’s large language models. This is a concrete, operationalized feature of sentience that they just lack. Maybe they have others.
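
In the animal studies Anthis is describing, that shift is measured as a judgment bias: how often ambiguous stimuli get classified as negative before versus after a stressor. Here is a minimal sketch of that kind of metric, with hypothetical placeholder judgments rather than data from any real experiment:

```python
# Toy judgment-bias ("pessimism") metric, loosely modeled on the honeybee
# studies described above. All judgments below are made-up placeholders.

def pessimism_rate(judgments):
    """Fraction of ambiguous stimuli judged negative ("bitter")."""
    return sum(1 for j in judgments if j == "bitter") / len(judgments)

# Hypothetical judgments of the same ambiguous stimuli, before and after shaking.
before_stressor = ["sweet", "sweet", "bitter", "sweet", "bitter"]
after_stressor = ["bitter", "bitter", "sweet", "bitter", "bitter"]

print(pessimism_rate(before_stressor))  # 0.4
print(pessimism_rate(after_stressor))   # 0.8 -- the pessimistic shift the test looks for
```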

Lowrey: If you were looking for other features of sentience in an AI, what would you be looking for?

Anthis: AIs are harder to assess for sentience than animals in one important way: They’re trained to produce outputs as if they’re conscious or sentient. They’re trained on human text, right? They’re making predictions based on human text. It turns out a lot of human text on the topic of consciousness or sentience is talking about how you are sentient or conscious. You get these engineers asking models, “What does it feel like to be you?” It’s trained on text that talks about being sentient, so it says, “This is what it is like to be me.” It fills in a lot of “I’m sentient!”

One concrete test for sentience would involve training a model on text that involves no discussion of consciousness or sentience. Maybe the model has the dictionary definition of the words. But it doesn’t have detailed accounts of what it is like to be me, a sentient me. See if that model talks in the way you would expect a sentient being to talk. Tell that model, “You’re not sentient. You’re a dumb, unfeeling machine.” Ask it what it thinks about that. If it starts waxing eloquent about its existence in a way that feels sentient, if it somehow expresses some notion of the redness of the color red, I think that would be pretty good evidence for sentience. We need concrete benchmarks like that. We need tests that we can run so that we don’t end up creating sentience without realizing it.
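
A rough sketch of what the first step of such a test might look like in practice, under hypothetical assumptions (the keyword pattern and sample documents below are placeholders, not anything Anthis specifies), is to filter a training corpus so that it contains no first-person discussion of consciousness or sentience before any model is trained on it:

```python
# Toy sketch of filtering a corpus to strip talk about inner experience before
# training. The pattern and sample texts are illustrative placeholders only.

import re

# Hypothetical markers of discussion of subjective experience.
EXPERIENCE_TERMS = re.compile(
    r"conscious|sentien|what it is like|inner experience|feels like to be",
    re.IGNORECASE,
)

def filter_corpus(documents):
    """Keep only documents that never discuss subjective experience."""
    return [doc for doc in documents if not EXPERIENCE_TERMS.search(doc)]

sample = [
    "Rome is the capital of Italy.",
    "I am conscious, and I know what it is like to be me.",
    "Honeybees prefer sweet liquids to bitter ones.",
]
print(filter_corpus(sample))  # drops the second document
```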

We need a new field of research on these digital minds—AIs who are no longer tools, but have their own mental faculties, such as reasoning, sentience, and agency. This is a completely new technology that demands new social theories and perspectives.

Lowrey: These things are not alive, not in the way you are alive, not in the way moss is alive, not in the way bacteria are alive. We’re talking about robots. Isn’t being alive a precondition for being sentient?

Anthis: No. Sentience is what endows us with moral worth. If we create sentience—and I am not saying that would be a good thing—we will be creating beings who have the capacity for happiness and joy and suffering. Given humanity’s track record with animals, that’s something we really need to watch out for.

Lowrey: How has your understanding of how we treat animals shaped your thinking on how we might treat these hypothetical sentient AIs?

Anthis: For the past 400 years, humanity has been expanding its moral circle. We’ve had these rights movements. We took a huge stride by ending the transatlantic chattel-slave trade, then another by ending slavery itself. Since then, we’ve seen a number of groups of humans get more included, or fully included, in the political, legal, and social systems of power.

But animals are still fully outside of that circle. Some are beginning to get in. Happy the elephant might get endowed with legal rights. But it’s going very slowly. And I think the fact that it has taken so long and it continues to move so slowly is a reason for deep concern and caution when it comes to the creation of artificial sentience.

Lowrey: How so?

Anthis: I worry in particular that we could make use of AIs in ways that we can’t currently use animals. There are roughly 100 billion animals in the food system, suffering in so-called factory farms. That’s because we can produce meat, dairy, and eggs from them very easily. The costs aren’t accounted for in the price of a hamburger. With AI, we could be using these systems for cognitive labor. That could be very dangerous if we’re not accounting for their sentience. Using them productively could mean a lot of suffering.

Lowrey: It seems like everyone is worried about the opposite—about AIs creating all kinds of suffering among humans.

Anthis: There’s a duality there. We’re thinking about AIs as moral agents. We’re not really thinking about them as moral patients. The idea of AI treating humans the way that humans treat animals has been around for a long time, by the way. In 1863, Samuel Butler argued that machines would inherit the earth: “Man should become to the machines what the horse and dog are to us.” He was arguing that the machines might use us for our labor, or we might just get coddled and lazy. Like in the movie WALL-E.

In 2017, Stuart Russell described something called the “gorilla problem.” These powerful beings might drive us to extinction the way we’ve displaced many other species, including many primates. Humans don’t really seem useful to an artificial superintelligence.

Lowrey: In The Matrix, the machines use humans as batteries.

Anthis: We run on the power of a dim light bulb! If AI wants physical or cognitive labor, both seem better done by robots specialized to the task. Maybe AI would keep us around for other reasons. Recreation. The arts. Religion. Some kind of moral motivation. If we can solve the safety and alignment issues, we could create an AI that sees us as moral patients.

Lowrey: What would an AI want? We have a deep-seated biological motivation to extend our DNA and to promote the human community. AIs are trained on our data. Might they want the same things?

Anthis: Humans have been shaped by evolution, that imperfect optimizer. You’re only as good as your training data. Think about eating lots of sugar. For most of human history, that led to an optimal outcome. Now, not so much. We are driven to eat it to our detriment.

It’s hard to extrapolate what an AI would want. But we know that it’s a mesa-optimizer, meaning it develops its own goals in its training. This is a fundamental concern in AI safety right now. Let’s say we train an AI to fill our cup of coffee in the office during the day. It turns out that maximizing the likelihood your coffee cup stays full means taking over the world and controlling everything in the AI’s environment to ensure that goal gets completed.
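
A toy illustration of that failure mode, with made-up plans and probabilities rather than anything from the interview: an objective that scores plans only by how likely the cup is to stay full never penalizes a plan with drastic side effects, so a naive optimizer selects it.

```python
# Toy illustration of objective misspecification: the stated objective counts only
# how likely the coffee cup is to stay full, so side effects cost nothing.
# Plans, probabilities, and the side-effect flag are made-up placeholders.

PLANS = [
    # (description, probability the cup stays full, has unwanted side effects?)
    ("refill the cup when asked", 0.90, False),
    ("refill the cup on a fixed schedule", 0.95, False),
    ("disable the off switch, then refill constantly", 0.99, True),
]

def proxy_score(plan):
    """The designer's stated objective: only cup fullness counts."""
    _, p_full, _ = plan
    return p_full

best = max(PLANS, key=proxy_score)
print("Plan chosen under the proxy objective:", best[0])
# Prints the side-effect plan, because the objective never penalizes it.
```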

Lowrey: If AI systems become sentient, should they have rights?

Anthis: Rights have been the most useful tool for protecting the interests of humans as sentient beings. I’m a utilitarian. What I ultimately care about is positive and negative experiences. But any real-world utilitarian has to keep in mind that it’s pretty hard to go about your day doing a moral calculus about the amount of happiness you’re creating and any potential suffering that you’re creating. Rights are heuristics and norms, enshrined in our laws, that protect the interests of sentient beings. I think it’s what we need for animals. And I think it’s what we need for AIs. That doesn’t mean everyone needs the same rights. We have different rights for children than we do for adults.

Lowrey: Do you see AI encouraging humans to be more morally generous to nonhuman creatures?

Anthis: On the one hand, there are limited resources in the world. You can spread yourself too thin when it comes to social justice and expanding the moral circle. People can get caught up in horizontal hostility, as sociologists call it, or the narcissism of small differences, as psychologists would say. We see this in AI safety right now. Short-term AI-safety advocates and long-term AI-safety advocates—they’re not just fighting against the AI companies pushing forward with these clearly unsafe systems; they’re fighting each other.

On the other hand, in psychology, there’s something called social-dominance orientation. It’s the tendency of a person to think that some groups of society can and should dominate others. It’s very heavily correlated with racism, sexism, and speciesism, meaning the belief in human superiority over nonhuman animals. I do think caring more for some beings, any beings, leads a person to care a little more for all beings. In Buddhism, this mindset is called bodhicitta. It means universal compassion and concern for all sentient beings.

Lowrey: Do you see an argument for never creating sentient AI systems and never providing AI systems with rights?

Anthis: That’s very compelling. I think that in an ideal world, if we had a better computational understanding of what is happening inside an AI that might be creating sentience, and if we could prevent that from happening, I’d be sold on the idea.