You're on X!


At the conference last week a woman was trying to figure out how she knew my name when she blurted out accusingly “You’re on X!” This confused me as I wasn’t dancing with glow sticks or otherwise feeling universal love, until I realized she was talking about the platform owned by our future interplanetary Howard Hughes.

If the very name of the platform can be brand unsafe, that suggests some things about the content thereon, and that is the subject of this week’s drama. Equally ironically, virtually all of this drama was discussed on the platform in question.

 

Prelude

By the very nature of the open internet it is impossible to make everything fully safe for advertisers. There is no standards committee to pre-review every Facebook post, news article, or YouTube video. Maybe there should be, but the cost and loss of efficiency inherent in pre-reviewing content have made that impossible.

The assurance of “safeness” among advertising environments is held together by a mix of self-regulatory guidelines, industry certifications, targeting vendors, verification vendors, and the occasional whistleblower.

It isn’t surprising, then, that there is a near-constant stream of outrages, news reports, and alleged violations of advertiser safety hitting our inboxes. With that in mind, the chaotic re-imagining of Twitter as X over the past 18 months has been the perfect storm of brand safety conflict, and this week was…let’s just say entertaining.

TAG, you’re not it

Our favorite ad tech agitator, Nandini Jammi of Check My Ads Institute, filed a complaint last August with the self-regulatory body TAG, asking them to remove X’s brand safety seal of certification. On March 13th she Tweeted (X’ed?) that the certification had been revoked.

Many brands and agencies use TAG approval as an important signal in their evaluation of different media options, so this change will presumably hurt the company’s efforts to claw back some of the media dollars lost since Twitter decided to transition to X.

Friday night PR massacre

The TAG news didn’t make too many waves as far as I know. Then, last Friday at 7PM Eastern, some funny X’s started coming from the official @XBusiness account, reinforced by @lindayaX. A snippet is shown below. The hostage note…sorry, corporate statement…clearly points the finger at DoubleVerify, who takes “full responsibility.” I would hope so, given they are a measurement company.

Source: @XBusiness

DoubleVerify was obviously thrilled to be posting a correction to their data. In their statement they made it clear that the error in question was not one of measurement, but rather was a mislabeled graphic that showed the Brand Suitability Rate where it should have shown Brand Safety Rate. DV has a blog post explaining the difference. Here’s a screenshot of their statement:

Source: @XBusiness post

On its own, this isn’t that much of a story. There’s an underlying friction evident in the language X is using (accusatory, angry) vs DoubleVerify’s (technical, specific), and it is noteworthy that the announcement was set for a time when it would be noticed by the fewest people.

But what raised everyone’s eyebrows was the doubling down on the assertion that X’s inventory was 99.99% brand safe, especially given <points in every general direction>.

Shot…chaser…Nazis.

After a relaxing weekend feeling confident that the ads were safe and that there were absolutely, positively, no Nazis involved, we woke up on Tuesday to find NBC News disagreeing. Some quotes from their story below:

“NBC News found that at least 150 paid ‘Premium’ subscriber X accounts and thousands of unpaid accounts have posted or amplified pro-Nazi content on X in recent months, often in apparent violation of X’s rules”

“NBC News found ads running on 74 of the 150 premium accounts, either on their profile pages or in the replies below their posts”

It didn’t take long for folks to find live examples of ads adjacent to antisemitic content, like this example of a Hyundai ad. And then there’s the general feeling from pretty much everyone in the advertising community that the 99.99% safe claim is a form of gaslighting.

What is verification, anyway?

For both DV and Integral Ad Science, when working in non-web media (aka “walled gardens”), the methodology is, broadly speaking, to receive a log file of the exposed ads within the relevant context, and then to score safety and suitability.

This methodology has some potential pitfalls:

  • The ads are only evaluated based on the content directly above and below the exposure, not on the overall timeline experience;

  • The logs must accurately convey the context for every user, which may be subject to error;

  • The “safety” of the adjacent content may be difficult to evaluate given nuances in language, sarcasm and context.
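
To make the log-based approach above concrete, here is a minimal sketch in Python of how adjacency scoring might work. The log format, field names, and keyword blocklist are my own illustrative assumptions, not DV’s or IAS’s actual pipeline; the point is simply that the platform supplies the exposure log and only the posts directly around each ad get scored.

# Minimal sketch of log-based adjacency scoring. Hypothetical log format and
# classifier; not any vendor's actual implementation.
import csv
from dataclasses import dataclass

@dataclass
class Exposure:
    ad_id: str
    post_above: str  # text of the post directly above the ad slot
    post_below: str  # text of the post directly below the ad slot

def is_unsafe(text: str) -> bool:
    # Toy classifier: flag posts containing blocklisted terms. Real systems use
    # models that have to handle nuance, sarcasm, and context.
    blocklist = {"example-hate-term", "example-slur"}  # placeholder terms
    return any(term in text.lower() for term in blocklist)

def safety_rate(log_path: str) -> float:
    # Share of exposures whose directly adjacent posts are all deemed safe.
    # Only one post above and one below are scored, never the whole timeline,
    # and the log itself is whatever the platform hands over.
    total = unsafe = 0
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            exp = Exposure(row["ad_id"], row["post_above"], row["post_below"])
            total += 1
            if is_unsafe(exp.post_above) or is_unsafe(exp.post_below):
                unsafe += 1
    return 1.0 if total == 0 else 1 - unsafe / total

# Example: print(f"{safety_rate('exposures.csv'):.4%}")

All three pitfalls live in that sketch: the classifier has to read nuance, the log is only as accurate as what the platform provides, and nothing beyond the two adjacent posts is ever scored.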

Given this methodology I’m trying to reconcile multiple facts that seem to be contradicting one another:

  • Anecdotally, there appears to be a lot of bad stuff on X;

  • NBC News has shown us evidence that it is not safe;

  • Other people have pretty quickly been able to find examples of unsafeness;

  • And…a trusted verification company run by people I know says it is 99.99% safe.

Let’s go through the possibilities.

Option 1: Flaws in DV’s methodology. Maybe DV is not flagging bad posts because of the difficulties with nuance and sarcasm.

Option 2: We are misinterpreting what DV is saying. In the blog post linked above, DV made the distinction between safety and suitability. Maybe there’s a very low bar for safety and suitability is the right metric to consider? Not likely, but I’m grasping at straws here.

Option 3: We are just finding the exceptions. 99.99% implies that only one out of every 10,000 ads is unsafe. Maybe that’s the real rate?

Option 4: There’s lots of bad stuff, but it is not always directly adjacent to ads. As described above, verification only covers direct adjacency, not overall context. For example, the NBC report found 150 bad actors but only 74 with ads, and some of those with ads were non-adjacent (on profile pages).

Option 5: X is lying to DV. DV is reliant on data provided by X and cannot really vouch for its accuracy. This would be quite a scandal, but it is possible.

Option 6: DV is lying to us. Extremely unlikely, but in the list just for completeness.

Option 7: DV isn’t lying to us, but their hand is on the scale. This is sort of an accelerator to some of the other options, where for commercial reasons DV is making judgments in the ways most favorable to the platform.

My bet is that the reality is some combination of Options 1 and 4, with maybe a little bit of 7. It is hard to tell if certain content on X is really bad, or just sort of bad and sarcastic, and also the really bad stuff often shows up at the bottom of comments where there are fewer ads.

But really the answer, as it was with the Forbes MFA scandal from just a couple of weeks ago, is to be smart and not rely on these verification companies as the only authority on where your brand should advertise. Verification is a useful tool but not the absolute truth. As a customer of verification companies you should hold them to account when things don’t add up.
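
“Holding them to account” can be as simple as running your own spot checks. Here is a rough, hypothetical sketch (the field names, sample size, and tolerance are illustrative assumptions, not any vendor’s workflow): pull a random sample of your own delivered impressions, have a human review the adjacent content, and compare what you observe against the rate the vendor reports.

# Hypothetical spot check: compare a vendor's reported safety rate against your
# own human-reviewed sample. Field names, sample size, and tolerance are
# illustrative assumptions.
import random

def spot_check(impressions: list[dict], vendor_reported_rate: float,
               sample_size: int = 200, tolerance: float = 0.01) -> None:
    # 'reviewed_unsafe' stands in for a human judgment on the content adjacent
    # to each sampled impression.
    sample = random.sample(impressions, min(sample_size, len(impressions)))
    if not sample:
        return
    unsafe = sum(1 for imp in sample if imp["reviewed_unsafe"])
    observed_safe_rate = 1 - unsafe / len(sample)
    print(f"Vendor reports {vendor_reported_rate:.2%} safe; "
          f"your sample shows {observed_safe_rate:.2%}.")
    if observed_safe_rate < vendor_reported_rate - tolerance:
        print("Large gap: ask the vendor to reconcile their logs and labels.")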

This article was published in the Marketecture Newsletter. Subscribe for free.

Jonathan Harrop

Strategic Marketing Leader | Expert in Mobile Advertising, Privacy, and Growth | Transformative Strategies | Global Brand Development

Twitter has 99.99% brand safety the same way Vladimir Putin won 80% of the popular vote in Russia’s last election.

"Show me the incentives, I will show you the outcome...." I'd vote for an initial combination of Options 5, 6 and 7 - both X and DV agree to mis-represent "safety" or conflate it with even-more nebulous "suitability" to preserve a co-dependent relationship - X needing ad dollars to flow, and DV needing X's ad spend to "verify". Option 1 is a nonsense (sarcastic Nazis?), Option 3 fails the basic eye test, as does Option 4 (just 0.01% of the bad stuff is adjacent?). We all criticise Big Tech for grading their own homework. Is it any better when you pay the exam proctor to slip you the correct answers?

Tom Triscari

Building the go-to M&A advisory practice in the AdTech & Media space.

Aren’t verification business models a lot like insurance models? You’re insuring against [fill in the blank].

Ian Chadwick

Semi-retired writer & editor, former municipal politician, local curmudgeon, ardent socialist. Join me on Mastodon.social

Xitter safe? For whom? For the rightwing extremists, perhaps. For the trolls, the bots, and the dog-whistling fascists.

I'd like to take this one, Ari Paparo. Let's start with the word "safety". Is any one of us truly safe? We're all just one dropped piano away from a flat foot and a bad back, so let's take that one off the table. Then the next word: "brand", which could mean a lot of things. Has DV ever come out explicitly stating they're measuring the safety of "brand" in the corporate sense of the word? Perhaps DV is measuring the safety of "brand" like with cattle. In which case, X's 99.99% rating that they've almost never directly been involved with any accidents involving cattle branding is probably entirely accurate. Food for thought.
