
Here’s What Can Go Wrong If Your AI Risk Management Isn’t Up To The Task

Neil Cohen, Strategy Director, Traction

It’s no surprise that marketing is one of the first disciplines to embrace generative AI. In a recent State of Generative AI Survey by risk management platform Portal26, 68% of respondents in the marketing, media and sales categories believed generative AI would give their organization a significant competitive advantage.

Yet in that same survey, 75% of those organizations reported security and/or misuse incidents involving generative AI. Generative AI is powerful, but it is also fraught with risk.

As Uncle Ben famously told Peter Parker, “With great power comes great responsibility.” 

There are a number of landmines in the world of generative AI. Here’s what to watch out for and how to avoid them:

Deepfakes and authenticity. Bad actors have a lot to gain by associating themselves with existing brands and personas. 

The gaming app Skyward Aviator Quest recently released a promo video featuring world-famous cricketer Sachin Tendulkar. The only problem: He never endorsed the product or had any deal with the company.

Protecting your brand assets, including photos and videos, from misuse and ensuring they are authentic will be new challenges. New companies and apps, like Nodle’s Click app, are now emerging, designed to automatically authenticate media assets. Look for more of this, and soon.  

Bias. Bias comes in two forms: in training your LLM (large language model) and in how you prompt your AI tools to provide outputs. 

As outlined in an article on Martech.org, generative AI is just a machine. The outputs are only as good as the inputs, and those inputs come from humans with subjective perspectives shaped by their own experiences and backgrounds.

Just because a generative AI product is commercially available doesn’t mean it’s trustworthy. Some of the most popular tools out there still have significant bias. 

This report by Bloomberg describes how Stable Diffusion’s AI creates images that perpetuate and amplify harmful gender and racial disparities. It ticks all the usual boxes for how AI manifests stereotypes: The world is run by white male CEOs; women are rarely doctors, lawyers or judges; dark-skinned men commit crimes, while dark-skinned women flip burgers. 


Filtering out bias isn’t easy. But deliberately introducing different perspectives and viewpoints in model training, a process known as data augmentation, should be mandated in governance and policy.
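To make the rebalancing idea concrete, here is a minimal, purely illustrative sketch of one common augmentation step: oversampling underrepresented groups or viewpoints in a training set so each is equally represented. The function name, data fields and corpus are hypothetical; real pipelines use far more sophisticated techniques.

```python
import random
from collections import Counter

def balance_by_group(examples, key, seed=0):
    """Oversample underrepresented groups so every group appears
    as often as the largest one. `examples` is a list of dicts;
    `key` names the field holding the group/viewpoint label."""
    rng = random.Random(seed)
    counts = Counter(ex[key] for ex in examples)
    target = max(counts.values())
    balanced = list(examples)
    for group, n in counts.items():
        pool = [ex for ex in examples if ex[key] == group]
        # Draw (with replacement) enough extra copies to reach the target.
        balanced.extend(rng.choices(pool, k=target - n))
    return balanced

# Hypothetical skewed corpus: 8 examples of one viewpoint, 2 of another.
corpus = (
    [{"group": "viewpoint_a", "text": "..."}] * 8
    + [{"group": "viewpoint_b", "text": "..."}] * 2
)
balanced = balance_by_group(corpus, "group")
```

Oversampling is only one lever; a governance policy would also cover sourcing genuinely diverse data in the first place.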

Privacy. Customer data privacy is a critical risk factor for marketers. People are actively entering private data – phone numbers, email addresses, even Social Security numbers and intellectual property – into prompts for public LLMs, with predictably damaging results. The International Association of Privacy Professionals is a great resource for digging into guiding principles and legal compliance around generative AI privacy.
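One common guardrail is to redact obvious PII before a prompt leaves the organization. The sketch below is a toy illustration of that idea; the patterns are hypothetical and deliberately narrow, and a real deployment would rely on a vetted PII-detection tool covering many more formats.

```python
import re

# Hypothetical patterns for illustration only -- production systems
# need a dedicated PII-detection library, not three regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with placeholder tags before the prompt
    is sent to a public LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

A gateway like this can sit between employees and any external LLM endpoint, which also creates an audit log of what almost leaked.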

In South Korea, Samsung recently suffered a major incident in which private customer data made it into ChatGPT and became accessible to millions of people. It happened just 20 days after Samsung lifted a ChatGPT ban that had been put in place to protect customer data.

A thorough employee training regime could have made a difference. The aforementioned State of Generative AI Survey found that 57% of marketing companies provided five hours or less of training to their employees. We need to do better.

Data usage. Reuters highlights numerous corporate data risks around the adoption of generative AI. The nature of generative AI, which often requires pulling and storing data from a cloud-based repository, creates opportunities for bad actors to hijack data. Rules around what data is or isn’t allowed need to be clearly communicated and managed to minimize risk for the organization.     

IP protection and copyright infringement. A recent article in HBR highlights the challenges creators and marketers face when diving into generative AI. Are you protecting your or your client’s intellectual property by not feeding it into public LLMs? Conversely, how do you prevent using others’ IP and copyrighted material when querying LLMs? And, finally, are you training your LLMs with copyrighted information? 

The copyright issue is in the courts right now as The New York Times sues OpenAI, the maker of ChatGPT, for infringement. A well-considered governance program should provide guidance and protective oversight when it comes to the use of copyrighted material. While it may take time for the court case to sort itself out, companies will need a way forward for training their LLMs. Some may even decide that a business relationship that compensates copyright owners is necessary if that information is important enough to model.

These are just a handful of areas of risk that marketers will have to wrestle with as they rapidly adopt generative AI. Even the most thoughtful training and governance program isn’t foolproof.

While it might seem counterintuitive, high-quality human oversight is essential when it comes to AI. Generative AI is no substitute for thorough review by people who are well versed in the products and audiences they are meant to serve.

Yes, we are at the dawn of a new age that will enhance productivity and revenue. But marketers should still proceed with caution and ensure the right human safeguards are in place. 

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Follow Traction and AdExchanger on LinkedIn.
