AI conundrum

GPT-4 poses too many risks and releases should be halted, AI group tells FTC

OpenAI released GPT-4 despite "full knowledge" of risks, nonprofit tells agency.

Jon Brodkin
The ChatGPT website is displayed on a smartphone screen next to two blocks displaying the letters "A.I."
Credit: Getty Images | VCG

A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and halt releases of GPT-4.

OpenAI "has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment," said a complaint to the FTC submitted today by the Center for Artificial Intelligence and Digital Policy (CAIDP).

Calling for "independent oversight and evaluation of commercial AI products offered in the United States," CAIDP asked the FTC to "open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace."

Noting that the FTC "has declared that the use of AI should be 'transparent, explainable, fair, and empirically sound while fostering accountability,'" the nonprofit group argued that "OpenAI's product GPT-4 satisfies none of these requirements."

GPT-4 was unveiled by OpenAI on March 14 and is available to subscribers of ChatGPT Plus. Microsoft's Bing is already using GPT-4. OpenAI called GPT-4 a major advance, saying it "passes a simulated bar exam with a score around the top 10 percent of test takers," compared to the bottom 10 percent of test takers for GPT-3.5.

Though OpenAI said it had external experts assess potential risks posed by GPT-4, CAIDP isn't the first group to raise concerns about the AI field moving too fast. As we reported yesterday, the Future of Life Institute published an open letter urging AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." The letter's long list of signers included many professors alongside some notable tech-industry names like Elon Musk and Steve Wozniak.

Group claims GPT-4 violates the FTC Act

CAIDP said the FTC should probe OpenAI using its authority under Section 5 of the Federal Trade Commission Act to investigate, prosecute, and prohibit "unfair or deceptive acts or practices in or affecting commerce." The group claims that "the commercial release of GPT-4 violates Section 5 of the FTC Act, the FTC's well-established guidance to businesses on the use and advertising of AI products, as well as the emerging norms for the governance of AI that the United States government has formally endorsed and the Universal Guidelines for AI that leading experts and scientific societies have recommended."

The FTC should "halt further commercial deployment of GPT by OpenAI," require independent assessment of GPT products prior to deployment and "throughout the GPT AI lifecycle," "require compliance with FTC AI Guidance" before future deployments, and "establish a publicly accessible incident reporting mechanism for GPT-4 similar to the FTC's mechanisms to report consumer fraud," the group said.

More broadly, CAIDP urged the FTC to issue rules requiring "baseline standards for products in the Generative AI market sector."

We contacted OpenAI and will update this article if we get a response.

“OpenAI has not disclosed details”

CAIDP's president and founder is Marc Rotenberg, who previously co-founded and led the Electronic Privacy Information Center. Rotenberg is an adjunct professor at Georgetown Law and served on the Expert Group on AI run by the Organisation for Economic Co-operation and Development (OECD). Rotenberg also signed the Future of Life Institute's open letter, which is cited in the CAIDP complaint.

CAIDP's chair and research director is Merve Hickok, who is also a data ethics lecturer at the University of Michigan. She testified in a congressional hearing about AI on March 8. CAIDP's list of team members includes many other people involved in technology, academia, privacy, law, and research fields.

The FTC last month warned companies to analyze "the reasonably foreseeable risks and impact of your AI product before putting it on the market." The agency also raised various concerns about "AI harms such as inaccuracy, bias, discrimination, and commercial surveillance creep" in a report to Congress last year.

GPT-4 poses many types of risks, and its underlying technology hasn't been adequately explained, CAIDP told the FTC. "OpenAI has not disclosed details about the architecture, model size, hardware, computing resources, training techniques, dataset construction, or training methods," the CAIDP complaint said. "The practice of the research community has been to document training data and training techniques for Large Language Models, but OpenAI chose not to do this for GPT-4."

"Generative AI models are unusual consumer products because they exhibit behaviors that may not have been previously identified by the company that released them for sale," the group also said.

OpenAI released GPT-4 with “full knowledge” of risks

CAIDP's complaint pointed to some of OpenAI's own statements about GPT-4's risks. "OpenAI has specifically acknowledged the risk of bias, and more precisely, 'harmful stereotypical and demeaning associations for certain marginalized groups,'" the complaint said.

For example, OpenAI said in the GPT-4 System Card that "the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups." CAIDP also quoted an OpenAI company blog post that said ChatGPT "will sometimes respond to harmful instructions or exhibit biased behavior."

"OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks," the complaint to the FTC said. Raising concerns about kids using GPT-4, the complaint said that the "GPT-4 System Card provides no detail of safety checks conducted by OpenAI during its testing period, nor does it detail any measures put in place by OpenAI to protect children."

CAIDP pointed to concerns raised by the European consumer group BEUC. "If ChatGPT gets used for consumer credit or insurance scoring, is there anything to prevent it from generating unfair and biased results, preventing access to credit or increasing the price of health or life insurance for certain types of consumers?" BEUC asked in a tweet quoted by the CAIDP complaint.

Security and privacy worries

Turning to cybersecurity, CAIDP noted a Europol warning that ChatGPT could be used to "draft highly realistic text" for phishing purposes, to produce text for propaganda and disinformation, or to produce malicious code, given ChatGPT's proficiency in different programming languages.

On privacy, CAIDP cited an incident reported this month in which ChatGPT displayed users' private chat histories to other users, a problem that "required the company to suspend the display of Histories, an essential feature for users of the system to be able to navigate among sessions and to distinguish specific sessions."

In another case, an AI researcher "described how it was possible to 'take over someone's account, view their chat history, and access their billing information without them ever realizing it,'" the complaint said. The researcher said last week that OpenAI fixed the vulnerability after he reported it.

GPT-4's ability to provide text responses from photo inputs "has staggering implications for personal privacy and personal autonomy," letting users "link an image of a person to detailed personal data," CAIDP said. The capability could also allow "GPT-4 to make recommendations and assessments, in a conversational manner, regarding the person."

"OpenAI had reportedly suspended the release of the image-to-text capability, known as Visual GPT-4, though the current status is difficult to determine," the complaint said.


Jon Brodkin, Senior IT Reporter
Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.