Crafting A Conscience For Generative AI In Marketing

When a generative AI tool spouts misinformation, breaks copyright law or perpetuates hateful stereotypes, it’s the people using the technology who take the fall.

After all, a large language model (LLM) generating text or an image “doesn’t use a brain of its own” or understand the implications of what it’s generating, said Paul Pallath, VP of applied AI at Searce, a cloud consulting company founded in 2004 that provides AI services such as assessing AI “maturity,” or readiness, and identifying use cases.

“We are far away from machines doing everything for us,” said Pallath, who held executive data science and analytics roles at SAP, Intuit, Vodafone and Levi Strauss & Company before joining Searce last year. (He’s also got a PhD in machine learning.)

Humans can’t outsource their ethical conundrums to algorithms and programs. Instead, we must “ground ourselves in empathy,” Pallath said, and develop responsible machine learning practices and generative AI applications.

Searce, for example, works with clients to move beyond the abstract. It guides companies through generative AI implementations and helps them establish frameworks for ethical, responsible AI use.

Pallath spoke with AdExchanger about a few hypothetical – but very possible – ethical scenarios a marketer might face.

If a generative AI tool produces factually inaccurate or misleading information, what should a marketer do?

PAUL PALLATH: Understand, verify and fill in the gaps of everything that’s coming out. There will be a lot of content that LLMs create that feels like truth but isn’t. Don’t assume anything. Fact-checking is very important.

What if I’m unsure if an LLM has trained on copyrighted material?

Avoid using it unless you have the rights and explicit permission from the copyright holder, because it creates significant exposure for your company.

The LLM should also spit out the references from which that content has been generated. It’s necessary to check every reference. Go back and read the original content. I’ve seen LLMs create a reference, and the reference doesn’t exist. It just cooked up information.
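
What that verification can look like in practice: a minimal Python sketch (not Searce tooling, and assuming citations appear as plain URLs in the generated text) that flags references that don't even resolve, before a human reads the ones that do.

```python
import re
import requests  # third-party: pip install requests

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def check_references(generated_text: str, timeout: float = 5.0) -> dict:
    """Return each cited URL mapped to True if it resolves, False otherwise.

    A resolving URL is necessary but not sufficient: a human still has to
    read the source and confirm it actually supports the claim.
    """
    results = {}
    for url in set(URL_PATTERN.findall(generated_text)):
        try:
            # HEAD avoids downloading the page; some servers reject it,
            # which would also surface the URL for manual review.
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False  # unreachable or fabricated reference
    return results

# Example: a fabricated citation fails the check and gets flagged for review.
draft = "Per a 2021 study (https://example.com/made-up-study.pdf), CTR rose 40%."
for url, exists in check_references(draft).items():
    print(("OK   " if exists else "FLAG ") + url)
```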

Say a marketer’s looking for ad imagery, and an LLM keeps returning images of lighter-skinned people. How can they steer it away from harmfully reinforcing and amplifying biases?

It’s about how you design your prompts. You need governance around prompt engineering – typically, a review of the different types of prompts you should be using – so the content coming out isn’t biased.
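
One way to make that governance concrete, as a minimal sketch: route all generation through prompt templates that have passed a bias review, and reject anything else. The registry and template below are hypothetical.

```python
# Hypothetical prompt registry: only templates that have passed a bias
# review may be used for generation.
APPROVED_TEMPLATES = {
    "ad_imagery": (
        "Photo of {product} being used by people of diverse skin tones, "
        "ages and body types, in a {setting} setting."
    ),
}

def build_prompt(template_id: str, **fields: str) -> str:
    """Render an approved template; reject anything not in the registry."""
    if template_id not in APPROVED_TEMPLATES:
        raise ValueError(f"Prompt template '{template_id}' has not passed review")
    return APPROVED_TEMPLATES[template_id].format(**fields)

print(build_prompt("ad_imagery", product="running shoes", setting="urban park"))
```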

If you have a repository of approved images, the LLM could create different surroundings, change the colors, the clothes or the brightness, or upscale the image to high resolution.
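
For simple variations like brightness, color and resolution, a deterministic image library is enough – no generative model required, which sidesteps the bias problem entirely. A sketch using Pillow, with a hypothetical asset path:

```python
from PIL import Image, ImageEnhance  # third-party: pip install Pillow

# Load a brand-approved image from the repository (hypothetical path).
img = Image.open("approved_assets/ambassador_01.png")

# Brightness and color variations: a factor of 1.0 is the original image.
brighter = ImageEnhance.Brightness(img).enhance(1.2)  # 20% brighter
warmer = ImageEnhance.Color(img).enhance(1.3)         # more saturated

# Upscale to a higher-resolution version with a high-quality filter.
hi_res = img.resize((img.width * 2, img.height * 2), Image.Resampling.LANCZOS)

for name, variant in [("bright", brighter), ("warm", warmer), ("2x", hi_res)]:
    variant.save(f"approved_assets/ambassador_01_{name}.png")
```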

For retail companies, if they have permission to use a person’s image, they can fit different apparel on top [of existing images] so it can be part of their marketing messages. They can have brand-approved ambassadors who don’t have to come in for several hours of photo and video shoots.

Should companies pay these brand-approved ambassadors for AI-generated variations of their images?

Yes. You’d compensate for every digital artifact you create with different models. Companies will start to work on different compensation mechanics.
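
Mechanically, per-artifact compensation could be as simple as counting generated variants per ambassador each billing period. A sketch with a made-up flat rate, not an industry figure:

```python
from collections import Counter

PER_ARTIFACT_RATE = 50.00  # hypothetical flat fee per generated variant, in USD

def compute_payouts(generated_artifacts: list[str]) -> dict[str, float]:
    """Map each ambassador ID to what they're owed for this billing period."""
    counts = Counter(generated_artifacts)
    return {ambassador: n * PER_ARTIFACT_RATE for ambassador, n in counts.items()}

# Each entry is one AI-generated image that used that ambassador's likeness.
print(compute_payouts(["amb_01", "amb_01", "amb_02"]))
# {'amb_01': 100.0, 'amb_02': 50.0}
```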

LLMs train on what’s online, so they often favor “standard” forms of dominant languages, such as English. How can marketers mitigate language bias?

LLMs are maturing from a translation standpoint, but there are variations even within the same language. Which region the content is coming from, who has vetted the content, whether it’s true from a cultural standpoint, whether it aligns with the belief system of that country – that’s not knowledge the LLMs have.

You need a human in the loop doing a rigorous review of the content that’s getting generated before it’s published. Have cultural ambassadors within your company who will understand the nuances of a culture and how it will resonate.
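
A publishing gate that enforces that human-in-the-loop step might look like the following sketch; the reviewer roles and locale code are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedContent:
    text: str
    locale: str  # e.g. "pt-BR"
    approvals: set = field(default_factory=set)

def approve(content: GeneratedContent, reviewer_role: str) -> None:
    content.approvals.add(reviewer_role)

def can_publish(content: GeneratedContent) -> bool:
    """Require sign-off from a fact-checker and from a cultural ambassador
    for the target locale before anything goes live."""
    required = {"fact_checker", f"cultural_ambassador:{content.locale}"}
    return required <= content.approvals

post = GeneratedContent(text="Localized campaign copy...", locale="pt-BR")
approve(post, "fact_checker")
print(can_publish(post))  # False: cultural review still missing
approve(post, "cultural_ambassador:pt-BR")
print(can_publish(post))  # True
```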

Is generative AI morally dubious from a sustainability perspective, given the power consumption involved in running LLMs?

A significant amount of computing power goes into training those models.

The carbon-neutral targets that large companies are chasing for the next five to 10 years are fundamental to which vendors they choose – they don’t want vendors adding to their carbon emissions. They have to look at the energy vendors’ data centers use when they make those choices.
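
The underlying comparison is simple arithmetic: a vendor’s footprint is roughly energy consumed times the carbon intensity of the grid supplying its data centers. A sketch with placeholder figures, not real vendor data:

```python
def annual_emissions_tonnes(energy_mwh: float, kg_co2_per_kwh: float) -> float:
    """Emissions = energy consumed x grid carbon intensity.

    energy_mwh: data center energy use per year, in MWh.
    kg_co2_per_kwh: carbon intensity of the supplying grid.
    """
    return energy_mwh * 1_000 * kg_co2_per_kwh / 1_000  # kg -> tonnes

# Hypothetical vendors: same workload, different grids.
vendor_a = annual_emissions_tonnes(energy_mwh=10_000, kg_co2_per_kwh=0.40)
vendor_b = annual_emissions_tonnes(energy_mwh=10_000, kg_co2_per_kwh=0.05)
print(f"Vendor A: {vendor_a:,.0f} t CO2 | Vendor B: {vendor_b:,.0f} t CO2")
# Vendor A: 4,000 t CO2 | Vendor B: 500 t CO2
```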

How can we prevent exploitation, such as using prisoners or very poorly paid workers to train LLMs, and other bad behaviors by LLM makers?

You have to have data governance and data lineage – in terms of who created the data, who touched the data, even before the data actually lands in the algorithms – and [a log of] the decisions that have been made [along the way]. Data lineage gives you transparency and allows you to audit the algorithms.

Today, that auditability is not there.
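
What such a lineage log could record, as a minimal sketch assuming an append-only event list per dataset (field names are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageEvent:
    dataset_id: str
    actor: str       # who created or touched the data
    action: str      # e.g. "created", "cleaned", "labeled", "used_in_training"
    detail: str      # the decision made at this step
    timestamp: datetime

lineage_log: list[LineageEvent] = []  # append-only in a real system

def record(dataset_id: str, actor: str, action: str, detail: str) -> None:
    lineage_log.append(LineageEvent(dataset_id, actor, action, detail,
                                    datetime.now(timezone.utc)))

def audit(dataset_id: str) -> list[LineageEvent]:
    """Everything that happened to a dataset before it reached the algorithm."""
    return [e for e in lineage_log if e.dataset_id == dataset_id]

record("corpus_v1", "data_vendor_x", "created", "scraped public forums")
record("corpus_v1", "labeling_team", "labeled", "toxicity labels, paid annotators")
for event in audit("corpus_v1"):
    print(event.timestamp.isoformat(), event.actor, event.action, "-", event.detail)
```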

Transparency is necessary for us to weed out the unethical elements. But we are dependent upon the large companies that have created these models to come out with transparency metrics.

This interview has been edited and condensed.
