AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks such as online surveillance, scams, hyper-targeted ads and discriminatory business practices, others cautioned that regulations might end up protecting tech giants while burdening smaller businesses.

According to U.S. Senator Maria Cantwell (D-Wash.), AI could accelerate existing risks to consumers related to social media and digital advertising. Just as the growth of online ads was powered by data, Cantwell worries tech companies will train AI models on sensitive data and use that information against consumers. She said a restaurant in her home state was reportedly granting reservations based on data about a potential guest’s income.

“If they don’t really have enough money to buy a bottle of wine, they are giving the reservation to someone else,” said Cantwell. “Without a strong privacy law, when the public data runs out, nothing is stopping them from using our private data … I’m very concerned that the ability to collect vast amounts of personal data about individuals, and create inferences about them quickly at very low cost, can be used in harmful ways, like charging consumers different prices for the same product.”

Cantwell and other lawmakers also hope to pass new federal transparency standards to protect intellectual property and guard against various risks of AI-generated content. On Thursday, Cantwell and Sens. Marsha Blackburn (R-Tenn.) and Martin Heinrich (D-N.M.) introduced the COPIED Act, short for the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, a bipartisan bill to protect publishers, actors and other artists while mitigating the risks of AI-generated misinformation.

The COPIED Act would direct the National Institute of Standards and Technology (NIST) to develop transparency standards for AI models, create standards for content provenance (including detecting and watermarking synthetic content) and craft new cybersecurity standards that ban tampering with content provenance data. The bill also would bar AI companies from using protected content to train models or generate content without permission, allow individuals and companies to sue violators, and empower the Federal Trade Commission and state attorneys general to enforce the regulations.
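To make the provenance idea concrete, here is a minimal, hypothetical sketch in Python of what tamper-evident provenance data could look like. It is not drawn from the bill, which leaves the actual standards to NIST; every field and function name below is invented for illustration. The sketch binds a manifest (origin, whether the content is AI-generated) to a hash of the content and signs the result, so that any later edit to the content or the manifest is detectable.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of tamper-evident content provenance. The COPIED
# Act leaves real standards to NIST; this only shows the core idea:
# bind a manifest to the content's hash and sign it, so any later edit
# to the content or the manifest can be detected.

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing key


def make_manifest(content: bytes, origin: str, ai_generated: bool) -> dict:
    """Build a provenance manifest for `content` and sign it."""
    manifest = {
        "origin": origin,
        "ai_generated": ai_generated,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(content: bytes, manifest: dict) -> bool:
    """Return True only if neither the content nor the manifest was altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    # Both the signature and the content hash must still match.
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())


article = b"An AI-generated summary of the hearing."
manifest = make_manifest(article, origin="example.com", ai_generated=True)
print(verify(article, manifest))               # True: untouched
print(verify(article + b" edited", manifest))  # False: tampering detected
```

Real-world provenance efforts such as the C2PA standard use public-key certificates and much richer metadata rather than a shared secret, but the tamper-detection principle is the same.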

According to Blackburn, privacy regulations and legislation like the COPIED Act are more important than ever to help people protect themselves. She said proposals like the No Fakes Act are also needed to protect people from becoming victims of AI deepfakes. “Who owns the virtual you?” she asked.

Major organizations have already endorsed the COPIED Act, including the News/Media Alliance, the National Newspaper Association, the National Association of Broadcasters, SAG-AFTRA, the Nashville Songwriters Association International and the Recording Academy. According to the bill’s text, the COPIED Act would apply to platforms such as social media companies, search engines, content platforms and other tech companies that generate more than $50 million in annual revenue and have had at least 25 million users for more than three months.

One of the expert witnesses who testified at Thursday’s hearing was Ryan Calo, a law professor at the University of Washington and co-founder of the UW Tech Policy Lab. He asserted that companies have already explored using customer data to charge different prices, citing examples like Amazon charging returning customers more and Uber quoting higher prices to users whose phone batteries were low. “This is the world of using AI to extract consumer surplus, and it’s not a good world. And it’s one that data minimization could address,” he said.

Calo and other witnesses said new laws around data minimization could help protect consumers from having their data collected, shared and misused. Udbhav Tiwari, director of global product policy at Mozilla, said designing privacy features into AI models early on could help. Another witness, Amba Kak, co-executive director of the AI Now Institute, cautioned that something as subtle as the tone of someone’s voice might be used to predict different outcomes.

“You do not need to be a clairvoyant to see that all roads may lead us to the same advertising technologies that got us here,” Kak said. “This is the moment for action.”

Without federal data privacy laws, it’s impossible for people to know who has their data and how it’s being used, said Sen. Jacky Rosen (D-Nev.). Without uniform regulations, she said, “the supply chain of data is full of loopholes.”

Some lawmakers warned that AI regulations could unintentionally harm small businesses. Another expert witness, Morgan Reed, president of ACT | The App Association, which represents thousands of developers and connected device makers, said a federal U.S. privacy law would make it easier for small businesses to comply without having to navigate the growing number of state privacy laws. Reed said it’s not just small tech companies that are affected by AI and privacy laws, but also small businesses that use tech.

“The reality is small business has been the faster adopter [of AI],” Reed said. “More than 90% of my members use generative AI tools today with an average 80% increase in productivity. And our members who develop those solutions are more nimble than larger rivals. … Their experiences should play a major role in informing policymakers on how any new laws should apply to AI development and use.”

U.S. Sen. Ted Cruz (R-Texas) was one of the committee members who cautioned against sweeping AI regulations. During his opening remarks at the hearing, Cruz acknowledged the need for federal AI and privacy laws but said regulations should be narrowly focused on specific issues. One example is the Take It Down Act, a bipartisan bill he is co-sponsoring with U.S. Sen. Amy Klobuchar (D-Minn.). The legislation, introduced last month, would target bad actors who create and publish AI-generated explicit deepfakes of real people.

“Our goal shouldn’t be to pass any uniform data privacy standard but the right standard that protects privacy without preventing U.S. technological innovation,” Cruz said.

Prompts and Products: AI news and announcements

  • AWS and Writer debuted new tools for their separate platforms aimed at making enterprise-grade generative AI applications easier to build and potentially more accurate.
  • eBay debuted new advertising tools, including AI-generated campaign recommendations based on marketplace trends.
  • The House Judiciary Committee accused the Global Alliance for Responsible Media (GARM) of antitrust violations, alleging GARM used its market influence to direct advertisers away from right-wing platforms.
  • The U.S. Justice Department announced the findings of an investigation alleging Russian actors used AI-generated images and text to spread misinformation on social media platforms including X (Twitter).
  • Microsoft has given up its observer seat on OpenAI’s board, while Apple has abandoned plans to take a similar observer role.
  • Omnicom debuted a new AI content platform called ArtBotAI that uses large language models to help marketers optimize creative assets for campaigns.
  • Anthropic debuted a new way for developers to experiment with prompts when building generative AI applications with the startup’s Claude models.
