
Russia’s Invasion Of Ukraine Highlights Big Tech’s Struggle To Moderate Content At Scale


All of the large social platforms have content moderation policies.

No belly fat ads, no ads that discriminate based on race, color or sexual orientation, no ads that include claims debunked by third-party fact-checkers – no ads that exploit crises or controversial political issues.

No graphic content or glorification of violence, no doxxing, no threats, no child sexual exploitation, nothing that promotes terrorism or violent extremism. And on and on.

The policies sound good on paper. But policies are tested in practice.

The ongoing Russian invasion of Ukraine is yet another example that content moderation will never be perfect.

Then again, that’s not a reason to let perfect get in the way of good.

For now, the platforms are mainly being reactive – and, one could argue, moving slower and with more caution than the evolving situation on the ground calls for.

For example, Meta and Twitter (on Friday) and YouTube (on Saturday) made moves to prohibit Russian state media outlets, like RT and Sputnik, from running ads or monetizing accounts. But it took the better part of a week for Meta and TikTok to block online access to their channels in Europe, and only after pressure from European officials. Those blocks don’t apply globally.

As The New York Times put it: “Platforms have turned into major battlefields for a parallel information war” at the same time “their data and services have become vital links in the conflict.”

When it comes to content moderation, the crisis in Ukraine is a decisive flashpoint, but the challenge isn’t new.

We asked media buyers, academics and ad industry executives: Is it possible for the big ad platforms to have all-encompassing content and ad policies that handle the bulk of situations, or are they destined to be roiled by every major news event?


  • Joshua Lowcock, chief digital & global brand safety officer, UM
  • Ruben Schreurs, global chief product officer, Ebiquity
  • Kieley Taylor, global head of partnerships & managing partner, GroupM
  • Chris Vargo, CEO & founder, Socialcontext

Joshua Lowcock, chief digital & global brand safety officer, UM

The major platforms are frequently caught flat-footed because they appear to spend insufficient time planning for worst-case outcomes and are ill-equipped to act rapidly when the moment arrives. Whether this is a leadership failure, groupthink or a lack of diversity in leadership is up for debate.

At the heart of the challenge is that most platforms misappropriate the concept of “free speech.”

Leaders at the major platforms should read Austrian-British philosopher Karl Popper and his work, “The Open Society and Its Enemies,” to understand the paradox of tolerance. We must be intolerant of intolerance. The Russian invasion of Ukraine is a case in point.

Russian leadership has frequently shown it won’t tolerate a free press, open elections or protests – yet platforms still give Russian state-owned propaganda free rein. If platforms took the time to understand Popper, took off their rose-colored glasses and did scenario planning, maybe they’d be better prepared for future challenges.

Ruben Schreurs, global chief product officer, Ebiquity

In moments like these, it’s painfully clear just how much power and impact the big platforms have in this world. While I appreciate the need for nuance, I can’t understand why disinformation-fueled propaganda networks like RT and Sputnik are still allowed to distribute their content through large US platforms.

Sure, “demonetizing” the content by blocking ads is a good step (and one wonders why this happens only now), but such blatantly dishonest and harmful content should be blocked altogether – globally, not just in the EU.

We will continue supporting and collaborating with organizations like the Global Disinformation Index, the Check My Ads Institute and others to make sure that we, together with our clients and partners, can help deliver structural change – not just to support Ukraine during the current invasion by Russia, but to ensure ad-funded media and platforms are structurally unavailable to reprehensible regimes and organizations.

Kieley Taylor, global head of partnerships & managing partner, GroupM

Given the access these platforms provide for user-generated and user-uploaded content, there will always be a need to actively monitor and moderate content with “all-hands-on-deck” in moments of acute crisis. That said, progress has been made by the platforms both individually and in aggregate.

Individually, platforms have taken action to remove coordinated inauthentic activity as well as forums, groups and users that don’t meet their community standards.

In aggregate, the Global Internet Forum to Counter Terrorism is one example of an entity that shares intelligence and hashes terror-related content to expedite removal. The Global Alliance for Responsible Media (GARM), created by the World Federation of Advertisers, is another example.

GARM has helped the industry create and adhere to consistent definitions – and a methodology to measure harm – across respective platforms. You can’t manage what you do not measure. With deeper focus through ongoing community standard enforcement reports, playbooks have been developed to lessen the spread of egregious content, including removing it from proactive recommendations and searches, bolstering native language interpretations and relying on external fact-checkers.

There will be more lessons to learn from each crisis, but the infrastructure to take swifter and more decisive action is in place and being refined; how much work remains depends on the scale of each platform and the community of users it hosts.

Chris Vargo, CEO & founder, Socialcontext

Content moderation, whether it’s social media posts, news or ads, has always been a whack-a-mole problem. However, the difference between social media platforms and ad platforms is in codifying, operationalizing and contextualizing definitions for what is allowed on their platforms.

Twitter, for instance, has bolstered its health and safety teams, and, as a result, we have an expanded and clearer set of behaviors with definitions of what is not allowed on the platform. Twitter and Facebook both regularly report on infractions they find, which further builds an understanding of what those platforms do not tolerate. Today, it was Facebook saying it would not enable astroturfing and misinformation in Ukraine by Russia and its allies.

But ad tech vendors themselves haven’t been pushed enough to come up with their own definitions, so they fall back on GARM, a set of broad content categories with little to no definitions. GARM does not act as a watchdog. It does not report on newsworthy infractions. Ad tech vendors feel no obligation to highlight the GARM-related infractions they find.

It’s possible to build an ad tech ecosystem with universal content policies, but it would require ad tech platforms to communicate with the public, to define concretely what content is allowed on their platforms – and to report real examples of the infractions they find.

Answers have been lightly edited and condensed.
