- It's a real mess at OpenAI as more concerns over its commitment to safety come to light.
- The ChatGPT-maker has faced backlash over transparency issues, NDAs, and a tussle with Scarlett Johansson.
- CEO Sam Altman isn't looking too good, either, as he takes on a new job: damage control.
OpenAI's rough week has turned into a rough month — and it's not looking like a problem that the company's golden boy CEO, Sam Altman, can easily solve.
In the latest development of the OpenAI-is-a-disaster saga, a group of current and former OpenAI employees has gone public with concerns over the company's financial motivations and commitment to responsible artificial intelligence. In a New York Times report published Tuesday, they described a culture of false promises around safety.
"The world isn't ready, and we aren't ready," Daniel Kokotajlo, a former OpenAI researcher, wrote in an email announcing his resignation, according to the Times report. "I'm concerned we are rushing forward regardless and rationalizing our actions."
Also on Tuesday, the whistleblowers, along with other AI insiders, published an open letter demanding change in the industry. The group is calling for AI companies to commit to a culture of open criticism and to promise not to retaliate against those who come forward with concerns.
While the letter isn't specifically addressed to OpenAI, it's a pretty clear subtweet and another damaging development for a company that has taken more than enough hits in the past couple of weeks.
In a statement to Business Insider, an OpenAI spokesperson reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee.
"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," they said over email. "We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world."
Safety second (or third)
A common theme of the complaints is that, at OpenAI, safety isn't first — growth and profits are.
In 2019, the company restructured from a nonprofit dedicated to safe technology into a "capped-profit" organization, one now valued at $86 billion. And Altman is now considering making it a regular old for-profit vehicle of capitalism.
This has put safety lower on the priority list, former board members and employees say.
"Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives," the former board members Helen Toner and Tasha McCauley wrote in an Economist op-ed last month that called for external oversight of AI companies. Toner and McCauley voted for Altman's ouster last year. (In a responding op-ed, the current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.)
Those profit incentives have put growth front and center, some insiders say, with OpenAI racing against other AI companies to build more-advanced forms of the technology — and releasing those products before some people think they're ready for the spotlight.
In an interview that aired last week, Toner said Altman routinely lied and withheld information from the board, including about safety processes. The board wasn't even told in advance about ChatGPT's release in November 2022 and learned it had gone live from Twitter, she said. (The company didn't explicitly deny this but, in a statement, said it was "disappointed that Ms. Toner continues to revisit these issues.")
The former researcher Kokotajlo told the Times that Microsoft began testing Bing with what OpenAI employees believed was an unreleased version of GPT, a move that OpenAI's safety board hadn't approved. (Microsoft denied this happened, according to the Times.)
The concerns mirror those of the recently departed Jan Leike, who led the company's superalignment team with the chief scientist, Ilya Sutskever, another recent defector. The team, dedicated to studying the risks that AI superintelligence posed to humanity, saw several departures over recent months. It disbanded when its leaders left, though the company has since formed a new safety committee.
"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike wrote in a series of social-media posts around his departure. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
These concerns are heightened as the company approaches artificial general intelligence, or technology capable of performing any task a human can. Many experts say AGI raises p(doom), a nerdy and depressing shorthand for the probability of AI destroying humanity.
To put it bluntly, as the leading AI researcher Stuart Russell said to BI last month: "Even people who are developing the technology say there's a chance of human extinction. What gave them the right to play Russian roulette with everyone's children?"
An A-list actor and NDAs
You probably didn't have it on your 2024 bingo card that Black Widow would take on a Silicon Valley giant, but here we are.
Over the past few weeks, the company has met some unlikely foes with concerns that go beyond safety, including Scarlett Johansson.
Last month, the actor lawyered up and wrote a scathing statement about OpenAI after it launched a new AI model with a voice eerily similar to hers. While the company insists it didn't seek to impersonate Johansson, the similarities were undeniable, particularly given that Altman posted the word "her" on X around the time of the product announcement, seemingly a nod to Johansson's 2013 movie "Her," in which she voiced an AI virtual assistant. (Spoiler alert: The movie isn't exactly a good look for the technology.)
"I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar," Johansson said of the model, adding that she'd turned down multiple offers from Altman to provide a voice for OpenAI.
The company's defense was, more or less, that its leadership didn't communicate properly and handled the matter clumsily — which isn't all that comforting considering the company is dealing with some of the world's most powerful technology.
Things worsened when a damaging report surfaced about the company's culture of stifling criticism through restrictive and unusual NDAs. Employees who left the company without signing one risked losing their vested equity, worth millions for some. Such agreements were basically unheard of in the world of tech.
"This is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have," Altman responded to the claims in an X post.
But days later, he was caught with egg on his face when a report indicated he'd known about the NDAs all along.
As Altman learned, when it rains, it pours.
No more white knight
But the May rain didn't bring June flowers.
Like many tech rocket ships before it, OpenAI is synonymous with its cofounder and CEO, Altman — who, until recently, was seen as a benevolent brainiac with a vision for a better world.
But as public perception of the company continues to sour, so does perception of its leader.
Earlier this year, the venture-capital elite started to turn on Altman, and now the public may be following suit.
The Scarlett Johansson incident left him looking incompetent, the NDA fumble left him looking a bit like a snake, and the safety concerns left him looking like an evil genius.
Most recently, The Wall Street Journal reported Monday on some questionable business dealings by Altman.
While he isn't profiting directly from OpenAI — he owns no stake in the company, and his reported $65,000 salary is a drop in the bucket compared with his billion-dollar net worth — conflicts of interest abound. The Journal reported that he had personal investments in several companies with which OpenAI does business.
He owns stock in Reddit, for example, which recently signed a deal with OpenAI. The first customer of the nuclear-energy startup Helion, in which Altman is a major investor, was Microsoft, OpenAI's biggest partner. (Altman and OpenAI said he recused himself from these deals.)
Faced with the deluge of detrimental media coverage, the company and its leader have tried to do some damage control: Altman announced he was signing the Giving Pledge, a promise to donate most of his wealth, and the company is reported to have sealed a major deal with Apple.
But a few positive news hits won't be enough to clean up the mess Altman is facing. It's time for him to pick up a bucket and a mop and get to work.
Correction: June 5, 2024 — An earlier version of this story misstated when ChatGPT was released. It was November 2022, not November 2023.