The Nuances of Threats on Facebook

John Elwood, an attorney for Anthony Elonis, speaks to reporters outside the Supreme Court on Monday.Photograph by Susan Walsh/AP

On Monday morning, Supreme Court Chief Justice John Roberts got some attention for quoting Eminem during oral arguments in a case, Elonis v. United States, about the limits of free speech. The issue Roberts wanted to understand—when does communication cross the line into being an illegal threat?—is central to the case, which involves an aspiring rapper who used violent language on Facebook in reference to his wife, an elementary school, and an F.B.I. agent, among others. First, Roberts cited some lyrics from the Eminem song “’97 Bonnie and Clyde,” in which the narrator asks his young daughter to help him tie a rope around a rock and her mother’s foot, then help push her into a lake. (He calls the daughter Hai-Hai; Eminem’s daughter with his ex-wife, Kim, is named Hailie.) Then Roberts asked a lawyer for the government whether, in his view, Eminem could be prosecuted for the lines.

In asking about the lyrics, Roberts seemed to be wrestling with some of the questions that have attracted a variety of groups to the case, including free-speech activists, demonstrators who use inflammatory protest materials, advocates for domestic-violence victims, and Internet companies. Under what circumstances might a Facebook post be considered a threat? What kind of communication constitutes a threat, and does it make a difference whether the communication was made online or offline?

The Eminem lyrics that Roberts quoted weren’t far off from words Anthony Elonis began writing on Facebook four years ago. In 2010, as Halloween approached, he posted a photograph taken at a haunted-house-themed event held by Dorney Park and Wildwater Kingdom, an amusement park in Allentown, Pennsylvania, where he worked. In the picture, Elonis is dressed in costume and holding a toy knife against a female co-worker’s neck. “I wish,” Elonis captioned the photo. His supervisor read the post as a threat and immediately fired him.

Elonis, who was twenty-seven years old at the time, was having a bad year. Months earlier, his wife had left him, taking their children with her. In the several weeks after putting up the Halloween photo, Elonis, who sometimes posted original rap lyrics to Facebook under the pseudonym Tone Dougie, continued to write sinister-sounding missives on the site, including several about his wife (“There’s one way to love ya, but a thousand ways to kill ya”) and one about an elementary-school classroom (“Enough elementary schools in a ten mile radius to initiate the most heinous school shooting ever imagined”). After the F.B.I. learned about the messages and sent an agent to check out Elonis, he posted about her, too (“Little Agent Lady stood so close/ Took all the strength I had not to turn the bitch ghost/ Pull my knife, flick my wrist, and slit her throat”). By December, Elonis had been arrested and charged with five counts of violating a federal law barring interstate communications—like messages posted online—that contain “any threat to injure the person of another.” A jury convicted Elonis in 2011, and a district court sentenced him to forty-four months in prison.

Elonis appealed the decision all the way to the Supreme Court, arguing that the jurors making the decision about his fate hadn’t used an appropriate definition of what constitutes a threat. In his case, a jury had been instructed to consider whether a reasonable person would foresee that the posts would be read by the audience as a “serious expression of intention” to hurt or kill someone. Elonis argued that his messages should count as threats only if he actually intended them as such. To make this argument, his lawyers drew on a 2003 Supreme Court case, Virginia v. Black, having to do with burning crosses. In that case, the Court deemed a Virginia law unconstitutional because it suggested that the act of cross burning itself represented “evidence of an intent to intimidate”; there could be other reasons, after all, that a person might choose to burn a cross. Some have interpreted the Court’s decision in that case to mean that the intent behind a message should be considered in deciding whether it is really a threat.

Elonis’s situation has attracted the attention of groups as diverse as the Reporters Committee for Freedom of the Press, People for the Ethical Treatment of Animals, and the Anti-Defamation League, all of which believe that the Court’s decision could have repercussions for other free-speech cases. (Advance Publications, which, through Condé Nast, owns The New Yorker, joined eight other media organizations in backing a Reporters Committee amicus brief in support of Elonis.) Indeed, it’s easy to imagine the possible ramifications of the Court’s decision. What about a newspaper columnist who writes with satirical intent that he wishes someone would kill the President, but is taken seriously? What about an animal-rights demonstrator who uses language that is perceived as threatening, even though she meant only to use hyperbole to make a point? Could those people be prosecuted for making threats under the framework that Elonis’s jury applied?

Free-speech activists were concerned, too, about the implications for people who post on Facebook and other forms of social media. In petitioning the Supreme Court to take the case, Elonis’s lawyers argued that online communication makes it more difficult than ever to interpret the meaning of a statement, which makes it especially important for a jury to evaluate Elonis’s intent in writing his posts rather than just consider how a hypothetical reasonable person might expect an audience to react. “The issue is growing in importance as communication online by email and social media has become commonplace, even as the norms and expectations for such communication remain unsettled,” they wrote. “The inherently impersonal nature of online communication makes such messages inherently susceptible to misinterpretation.”

The Supreme Court often gets flak for seeming to misunderstand how technology works—Justice Sonia Sotomayor once called Netflix “Netflick,” among other gaffes the Court has made. So one might have expected that the Justices would be especially sensitive to the unique challenges that online communications present to those who are asked to interpret them. But, interestingly, in Monday’s oral arguments, the tone of the questioning suggested that they weren’t especially drawn to this aspect of the case. When Elonis’s lawyer brought up a case in Texas involving a teen-ager who said online, sarcastically, that he was going to shoot up a kindergarten and eat a child’s heart—an example of a situation in which even a reasonable person might interpret a dangerous-sounding but ultimately frivolous comment as a true threat—Roberts interrupted him: “It’s a reasonable person familiar with the context of the statement. Right? So you don’t take what is on the Internet in the abstract and say, ‘This person wants to do something horrible.’ You are familiar with the context. You are familiar with the fact that this was a couple of teenagers in a chat room playing a game, right?”

Roberts seemed to be considering whether juries are already able to account for the nuances of online communication. Maybe an understanding of a written message’s context is enough to avoid situations where a well-meaning person might be convicted of making a threat. A reasonable person would interpret the language of kids playing an online game differently from, say, the ranting of a disturbed person waving a gun outside an elementary school. And a reasonable person would interpret an Eminem song performed onstage for the purpose of entertainment differently from an aspiring rapper’s Facebook rant. The same would presumably be true of an op-ed writer’s columns or an activist’s slogans.

The Court’s decision on Elonis is pending, and it remains to be seen how the Justices will resolve the questions at hand. On Monday, after the oral arguments, I spoke with James Grimmelmann, a professor at the University of Maryland who specializes in Internet law. He said that, based on the oral arguments, he expects the Court to take a narrow position that won’t significantly change the current interpretation of how jurors should evaluate threats, whether they are made online or elsewhere. Media organizations, protest groups, and other concerned parties who had dreaded a more encompassing ruling will likely be satisfied with the outcome, he said.

Some had wondered, too, if Facebook and similar sites would have to change their rules for users based on the Court’s decision. Grimmelmann doubts that this would happen. “I did not see any enthusiasm among the judges to make new rules for social media, and that’s a good thing, too,” he said. In any case, Facebook already has policies in place to protect people from threats. The company, in its Community Standards, notes, “We remove content and may escalate to law enforcement when we perceive a genuine risk of physical harm, or a direct threat to public safety. You may not credibly threaten others, or organize acts of real-world violence.” Grimmelmann pointed out that Facebook isn’t concerned only with following the law—it also wants to make users happy, which means keeping them from feeling unsafe.