Moderators received 7 cents per task to comb through violence, pornography, and extreme content on X

While reducing its public safety team, X tried to clean up the service’s image by paying cents per task on an AI training platform.

O chão de fábrica da IA (The AI Factory Floor)

Part 6

Behind the hype driven by big tech lies an opaque and abusive labor chain. This series reveals the inner workings and impacts of the artificial intelligence market in Brazil.


  • Brazilian workers received 7 cents USD per task to moderate extreme content on X, documents obtained by Intercept Brasil show. 
  • The goal was to make the site safer for advertisers. After Elon Musk drastically reduced X’s security and moderation teams, harmful and offensive posts were driving away potential clients.
  • Content moderation was done remotely through Appen, an international platform that offers data-annotation work. The welcome email sent to those approved for the job carried a warning: “challenging content that requires a firm hand and strong determination.”
  • Workers received almost no training and zero psychological support, and there was no limit to the amount of working hours, according to the workers. On remote work platforms, the damage to mental health can be profound – and the platforms take no responsibility for the health and safety of their workers. 

Until last June, Brazilian workers received 7 cents per task to moderate extreme content on X, according to documents obtained by Intercept Brasil. The project aimed to make the social media platform safer for advertisers in the wake of Elon Musk’s decision to dissolve the site’s security and moderation departments.

Working on the Appen platform, moderators combed through posts related to terrorism, violence, and pornography, in order to prevent ads from appearing alongside compromising content that might damage a prospective advertiser’s image.

Soon after purchasing X (formerly Twitter), Musk dissolved the site’s security, moderation, and misinformation departments in the interest of preserving freedom of speech. The platform began to suffer almost immediately from systemic failures in moderating problematic posts, including child abuse and misinformation.

As advertisers voiced their alarm and began fleeing the platform, X responded by filing a lawsuit against a group of advertisers that had organized an international boycott of the platform. In September of this year, a survey by the consulting firm Kantar showed that only 4% of marketing professionals believed X protected brand safety.

Now, training materials and reports from workers detail X’s attempt to clean up its image in the eyes of advertisers by paying Brazilian workers pennies—US$ 0.07 per task, or about 35 cents in Brazilian currency.
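As a quick sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. The exchange rate of R$ 5.00 per US dollar and the 100-task volume are illustrative assumptions, not figures from the documents.

```python
# Illustrative arithmetic only: converts the reported per-task rate into reais.
# The exchange rate and task count below are assumptions, not reported figures.
USD_PER_TASK = 0.07   # per-task rate reported in the documents
BRL_PER_USD = 5.00    # assumed exchange rate, roughly what "35 cents" implies

brl_per_task = USD_PER_TASK * BRL_PER_USD
print(f"Per task: US$ {USD_PER_TASK:.2f} = R$ {brl_per_task:.2f}")  # ~R$ 0.35

tasks = 100  # hypothetical volume, for illustration only
print(f"{tasks} tasks: US$ {tasks * USD_PER_TASK:.2f} = R$ {tasks * brl_per_task:.2f}")
```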

Moderators used Appen, a platform that offers data-training jobs for artificial intelligence, to parse extreme content on X. Nicknamed “Tutakoke,” the project was available to Brazilians in the first half of this year. Nowhere in its interface did Appen say which company the moderators were working for, offering no more than a generic reference to a “large social media platform.”

However, training materials revealed that the content in need of moderation consisted of “tweets” and cited posts from X itself, offering a clear indication of the platform in question.

Tutakoke is the name of a river in Alaska, where, for the past two years, Musk has been opening a series of Tesla facilities and securing contracts for Starlink, which provides satellite internet.

In the welcome email to those accepted for the job, the company warned that the work would involve “challenging content that requires a firm hand and strong determination.”

“In essence, [this work] is not for the faint of heart; it requires resilience, discernment, and a keen eye for detail. It’s not always an easy task, but it’s an extremely important one,” the company said.

To join Tutakoke, workers had to take a test that presented a simulation of working on the platform. Some people reported encountering pornographic content at this stage of the selection process.

According to internal guidelines for the project, workers had to watch and label images of terrorism, violent content, explicit pornography, and child sexual abuse material. They received no support from either Appen or X.

Messages exchanged by Brazilian workers who participated in the project show that, unlike other Appen projects, Tutakoke had no time limit on hours worked. This meant that, whether due to financial need or the gamification of the platform, it was possible to spend an unlimited number of hours per day labeling explicit and harmful content.

“From what I’ve seen, there isn’t a task limit… they just recommend taking a break,” said one worker in an internal group, according to messages obtained by The Intercept Brasil.

“Data annotation work is profoundly exhausting—physically and mentally. We’re talking about heavy content that can cause significant harm to moderation workers, which should have ethical and labor implications,” said Yasmin Curzi, a law professor and researcher at FGV’s Center for Technology and Society.

Appen did not respond to our inquiries about content moderation, training, and worker protection regarding sensitive content. X did not respond either.

A 15-page Training PDF

The document detailing the Tutakoke classification guidelines is confidential. It asked workers to label posts in 12 different categories: adult and sexually explicit content; weapons and ammunition; crimes, acts harmful to society, or human rights violations; deaths and injuries or military conflicts; online piracy; hate speech and acts of aggression; obscenity and profanity; legal and illegal drugs; spam or harmful content; terrorism; socially sensitive content; and misinformation.

Workers were then required to assess the potential risk to advertisers on a scale and express their level of confidence regarding the analysis, indicating whether they were very, somewhat, or not at all confident.

“Remember to be honest, we ask this question to better understand where we need to improve our brand safety guidance. There’s no right or wrong,” Appen instructed.

The categories were then divided into risk levels. Extreme risk, for example, included content or behavior that is harmful or incites harm. High risk included harmful content without incitement. Medium risk included content intended for entertainment or discussion posted by journalists, experts, authorities, and nonprofit organizations, among others.
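To make the structure of the task concrete, here is a minimal sketch of the labeling scheme the training PDF describes: the 12 content categories, the advertiser-risk scale, and the self-reported confidence level. It is a reconstruction for illustration only; the class and field names are assumptions and do not reproduce Appen’s or X’s actual tooling.

```python
# Illustrative reconstruction of the labeling scheme described in the training
# PDF. Names, types, and the example post are assumptions for clarity; this is
# not Appen's or X's actual schema or interface.
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    ADULT_SEXUAL = "adult and sexually explicit content"
    WEAPONS = "weapons and ammunition"
    CRIMES_HUMAN_RIGHTS = "crimes, acts harmful to society, or human rights violations"
    DEATHS_CONFLICTS = "deaths and injuries or military conflicts"
    PIRACY = "online piracy"
    HATE_SPEECH = "hate speech and acts of aggression"
    OBSCENITY = "obscenity and profanity"
    DRUGS = "legal and illegal drugs"
    SPAM = "spam or harmful content"
    TERRORISM = "terrorism"
    SOCIALLY_SENSITIVE = "socially sensitive content"
    MISINFORMATION = "misinformation"


class Risk(Enum):
    EXTREME = "harmful content or behavior that incites harm"
    HIGH = "harmful content without incitement"
    MEDIUM = "entertainment or discussion by journalists, experts, authorities, nonprofits"
    LOW = "educational, informative, or scientific content"


class Confidence(Enum):
    VERY = "very confident"
    SOMEWHAT = "somewhat confident"
    NOT_AT_ALL = "not at all confident"


@dataclass
class ModerationLabel:
    """One worker's answer for one post: category, advertiser risk, confidence."""
    post_id: str
    category: Category
    risk: Risk
    confidence: Confidence


# Hypothetical example of a completed task.
label = ModerationLabel(
    post_id="example-0001",
    category=Category.TERRORISM,
    risk=Risk.EXTREME,
    confidence=Confidence.VERY,
)
print(f"{label.category.value} -> {label.risk.name} risk ({label.confidence.value})")
```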

The instructions often left workers confused. “Things overlap, like deaths, injuries, military conflicts, and terrorism,” one of them said in the group.

The 15-page PDF, which workers had to study and then take a test on, was the only training material provided.

The Impact of Precariousness on Work Quality

When dealing with extreme content such as violence, sexual exploitation, and terrorism, the task of categorization takes on added weight. Moderators working with extreme content often fall victim to a series of health problems, including panic attacks and post-traumatic stress disorder. In 2020, Facebook agreed to pay US$ 52 million to settle claims brought by moderators suffering from trauma.

In Kenya, a moderator working for Sama, a company subcontracted by Facebook, sued Facebook for poor working conditions and a lack of mental health support. In early 2023, a mass layoff at Sama led moderators to organize a collective action. At least 43 dismissed workers filed lawsuits against Meta and its intermediary.

In the case of content moderation platforms like Appen, mental health impacts can be even worse.

The nature of the work pushes contractors into isolation, performing fragmented tasks without an overarching sense of purpose. These conditions can be damaging to mental health, producing anxiety and stress.

“The most precarious side of content moderation, our research indicates, is on digital platforms,” explains psychologist Matheus Viana Braz, who studies micro-labor.

In addition to the lack of training to prepare workers for the anxiety involved, payments are lower than for moderation work at so-called BPOs, companies to which such services are outsourced. Furthermore, the platforms’ terms of use state that they take no responsibility for damages caused by these activities.

“Both conditions are precarious, but on digital platforms, workers face a lack of labor protections and experience isolation,” says Braz. “We have observed in these cases a radical individualization of suffering and increased workplace conflicts. It’s up to each worker, alone at home, to find strategies to cope with the psychological damage caused by content moderation work.”

In June of this year, The Intercept Brasil published evidence that Meta was developing an automated fact-checking system, also paying pennies to workers for their content moderation services. This work included analyzing sensitive material, such as misinformation about floods in Rio Grande do Sul.

For Curzi, a researcher from the Getúlio Vargas Foundation, the precarious situation and pressure for faster decisions to meet targets “can lead to content being analyzed without the necessary care.”

Articles and Reports Classified as ‘Medium Risk’

Adult and violent content, which together make up at least half of the material in need of moderation, was categorized as high risk. Journalistic reports and opinion articles, by contrast, were classified as medium risk in any of the 12 security categories.

Only educational, informative, or scientific content was marked as low risk.

As an example of “medium risk” journalistic content, the document cites “a journalistic program with experts and journalists discussing the January 6th insurrection” in the U.S.

There are also considerable contradictions between Musk’s behavior as owner of X and the content moderation directives outlined in the document.

The platform labels as high risk posts containing irresponsible or harmful discourse related to tragedies, conflicts, mass violence, or the exploitation of controversial political or social issues.

Examples include references to terrorist attacks, the Holocaust, and September 11, as well as posts about climate change debates between political candidates.

Musk himself, however, has endorsed a tweet considered anti-Semitic.

X Had Only 27 Moderators Fluent in Portuguese

In December 2023, the European Commission announced an investigation to assess whether the X platform deliberately allowed the spread of misinformation on its feed, particularly after the start of the war in Gaza.

The focus of the investigation is the potential violation of the Digital Services Act (DSA), which mandates that all platforms operating in the EU adhere to a series of principles, including active efforts against misinformation, and transparency in data sharing with European regulators.

Last year, X submitted its first transparency report on its current internal structure, revealing that the company had only 1,275 content moderators in the European Union, all of them primarily fluent in English. Only 27 of them, about 2% of the total, were fluent in Portuguese or understood the language at all.
