
Zefr’s New AI Chief Wants To Prove The Value Of ‘Truthiness’


Zefr is the latest company to name a chief AI officer. Jon Morra was promoted to the role in February after about seven years with the company.

Zefr is among the brand safety and suitability vendors with a specialty in rating contextual placements across the walled gardens, including YouTube, TikTok and Meta. Over time, Morra has seen content moderation shift from “mass human labeling” (where humans reviewed and manually marked content for violations) to painstakingly training machine learning models on the same rules. Now, he said, large language models have begun to do “pseudo labeling” of content, with a human stepping in at the end just to vet the software’s decisions.
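In practical terms, the pseudo-labeling workflow Morra describes (a model drafts the policy label and a human only vets the result at the end) might look roughly like the minimal sketch below. The policy text, model name and review step are hypothetical stand-ins; it assumes an OpenAI-style chat-completions client and is not a description of Zefr's production system.

```python
# Minimal sketch of LLM "pseudo labeling" with a human vetting step at the end.
# The policy, model name, and review step are hypothetical; this illustrates the
# pattern, not Zefr's actual tooling. Assumes the openai client is configured.
from openai import OpenAI

client = OpenAI()

POLICY = "Flag content that depicts crime, weapons, or alcohol consumption."

def pseudo_label(text: str) -> str:
    """Ask the model to apply the policy and return a draft label."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": f"Apply this policy: {POLICY} Answer only VIOLATION or SAFE."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip()

def draft_labels(items: list[str]) -> list[tuple[str, str]]:
    """The model drafts every label; a human reviewer confirms or overturns them later."""
    return [(item, pseudo_label(item)) for item in items]
```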

“My job is to figure out how to use AI responsibly, to keep on top of research trends and know how to cut through the noise,” said Morra, who joked he falls asleep most nights reading the machine learning subreddit.

AdExchanger caught up with Morra about the new role and how digital media will adapt (or acquiesce) to AI technology.

AdExchanger: What are your top priorities in your new role?

JON MORRA: First is identification of misinformation. The amount of generative content online that’s not clearly marked [as AI generated] is growing.

Second, scaling our policy effectively in as many languages and modalities as we can.

Third, we have a new initiative around responsible AI. A lot of our customers are creating generative experiences, so there’s a burgeoning market for making sure these experiences are safe and suitable.

Why is it difficult to detect misinformation?

When you’re looking at the GARM [Global Alliance for Responsible Media] categories, a well-trained person can assert whether a piece of content matches a policy. Is somebody committing a crime? Is there a weapon? Is somebody consuming alcohol? People can be trained to do that. Misinformation, not so much.

Asserting the truthiness of something is hard.

Truthiness is an interesting way to put it. Is there not always an absolute truth?

You have two separate problems. There’s not always an absolute truth. But you also have negatives that are hard to prove.

There was a post saying that Joe Biden’s mental faculties aren’t what they used to be. Is that true? Is that false? He’s in his 80s. He’s never been diagnosed with Alzheimer’s. Does he have some other condition? Probably not, but it gets hard to prove a negative.

What would you do to prove a negative?

We’ll find articles from trusted news sources that talk about why that’s probably not true. Ultimately, our policy team makes the call about whether or not they want to add a fact to our database. It’s case by case.

How do you stay ahead of misinformation trends?

Our goal is to stay on top of these facts and react as quickly as we can.

In 2022, we acquired AdVerif.ai, which focuses on misinformation. Zefr also integrates with verified fact-checkers [International Fact-Checking Network members] and public data sources, which we use as our ground truth to train our models to assert, when some new piece of content comes in, whether it’s true or false.

In addition, our policy team hunts for social media trends. Once they find a trend, they try to find a verified fact to say this is proven or disproven, according to some third-party source. We then put that fact into our database and retrain and redeploy our models.
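The interview doesn't detail how those verified facts feed the models, but the lookup step, matching incoming content against the fact database and inheriting the verdict of the closest verified entry, can be illustrated with a toy sketch. The entries, verdicts and similarity threshold below are invented purely for illustration.

```python
# Toy illustration of checking new content against a database of verified facts.
# The fact entries, verdicts, and threshold are hypothetical; this shows the
# retrieve-then-decide pattern, not Zefr's actual models or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

FACT_DB = [
    ("Product X causes cancer", "disproven"),
    ("Candidate Y was diagnosed with condition Z", "unverified"),
]

claims = [claim for claim, _ in FACT_DB]
vectorizer = TfidfVectorizer().fit(claims)
fact_matrix = vectorizer.transform(claims)

def check_claim(text: str, threshold: float = 0.5) -> str:
    """Return the verdict of the closest verified fact, or 'no match'."""
    sims = cosine_similarity(vectorizer.transform([text]), fact_matrix)[0]
    best = sims.argmax()
    return FACT_DB[best][1] if sims[best] >= threshold else "no match"
```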

We want to make sure we get both a global definition of misinformation and a customer-focused definition.

What’s the difference between global and customer-focused misinformation?

Global misinformation would be [misinformation about] anything you would read on CNN or in a major newspaper.

Brand-specific misinformation could be where a brand creates a product and somebody claims that product causes cancer.

Where do you see generative AI going?

Generative models are going in two separate directions. One is bigger. GPT-5 is going to be this monstrous model that’s going to consume a ton of compute power.

The other thing you see is smaller, more targeted models. This is where Zefr is investing: using the big models to understand the world at large, fine-tuning them and creating these smaller models to do one thing really well – in our case, brand safety and suitability.

Where the generative models excel is helping us come up with training data.
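The fine-tuning itself isn't described in the interview, but the general pattern (a large model supplies the training labels, and a small model is then trained to do one narrow task well) can be sketched with stand-in components. In the toy example below, the teacher labels are assumed to have come from a large model, as in the earlier pseudo-labeling sketch, and the student is a deliberately tiny text classifier; the examples and label set are hypothetical.

```python
# Toy sketch of distilling large-model labels into a small, targeted classifier.
# teacher_labeled is hypothetical; in practice the labels would come from a big
# generative model plus human vetting, not be hard-coded like this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

teacher_labeled = [
    ("Video reviews a new kitchen knife set", "safe"),
    ("Clip glamorizes street racing and crashes", "unsuitable"),
    ("Tutorial on mixing cocktails at home", "needs_review"),
    ("Gameplay footage with no violence", "safe"),
]

texts, labels = zip(*teacher_labeled)

# The "student": a small model that does one thing - brand-suitability triage.
student = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
student.fit(texts, labels)

print(student.predict(["Unboxing video for a board game"]))
```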

What are the implications of generative AI for brand safety and suitability?

The future of brand safety is this ability to run fast. When we have a policy change, no longer do we need to train our crowdsourced reviewers on what that policy change means, get a million pieces of content labeled about that policy change, retrain the model and redeploy.

Now, the cycle of deployment and keeping up with new policies – new content in the wild – has gotten a lot faster.

This interview has been edited and condensed.
