It's apples and oranges. Employees are expressing a preference for making a profit over the nonprofit board's mission. I'm sort of impressed 5% of the crowd might elevate conscience over cash. No other business has been so insane as to pursue a naked profit objective under the guise of a nonprofit operation. The fact that Altman did so is not a virtue, in my opinion.
You're accepting the board's good vs. evil framing without any evidence that Altman sacrificed safety for profit, or that the board (the Quora guy, really?) is any better. OpenAI needs money to achieve its mission. Training GPT-4 alone cost over $100 million. Commercializing the product is not bad in itself.
I kinda doubt that. I imagine they're mostly mad about the injustice and unilateral nature of it. For all we know the employees would make the same money either way.
If the original statement had said that the board removed him because it disagreed with the for-profit direction and Altman refused to change course, that would make a lot of sense, and I think many people would at least be OK with the board's motives, even if the move likely harmed OpenAI's progress.
That doesn't look like the reason, given they are still holding onto Microsoft's $10 billion deal; a rejection, cancellation, or notice of termination of that deal would have come along with the firing if this were the motive.
Whatever the real motives are, I don't think we have heard them yet, which is a bad sign for the firing being with cause.
It's probably a reflection of the fact that most engineers who joined OpenAI did so with the hope of becoming rich and working at what might end up being one of the most influential tech companies in the world. They probably figured the non-profit governance thing is just some weird accounting or PR thing.
I doubt they care about Sam Altman. But they didn't turn down offers from other tech giants to be stuck at a niche AI doomer research shop.
It’s fairly unusual that a CEO getting fired would put a company’s ability to continue to operate at such high risk.
I’d take it less as a signal of approval in the ordinary sense (though some of it’s probably that!) and more one of perceived dependence. Altman = Microsoft = big piles of cash.
I must admit, the first clause of my sentence was mostly a vehicle for the second clause, the punchline — sole-proprietors not approving of themselves. It loses a bit in the explaining. Although whatever that 95% is a proxy for, they clearly trust Altman dramatically, enough to endure a job at a Microsoft subsidiary.
How many people would stand in front of their boss and the rest of the world and say that they don't support him? That's what this open letter is equivalent to.
You may want to double-check that. Their letter was in support of the CEO from 3 days ago. You're right that they don't support the new CEO appointed within the last 24 hours, though they don't explicitly mention that; they do lambast the current board of directors, and I imagine the board should feel embarrassed, but save one, there's no evidence of that.
Sure, but (even more importantly after passing some threshold of signatures) the question's not 'do you approve of Altman as CEO' but 'do you approve of the board of your employer'.
And I'm not just saying it's the board vs CEO, it's job preservation etc. not just abstract personality approval. (Again, much more so for the last 20% than the first.)
This isn't indicative of a >95% approval rating for Altman, though—once the first 50% of employees sign the letter, the rest might sign due to peer pressure (even if not explicit), or because their favorite coworkers have already signed, or because the new CEO doesn't have a good reputation among workers, or lots of other reasons besides "when last asked if I approved of the CEO in the abstract I said yes".
From the dozens of employee accounts I've read in the last 72 hours, they all spoke with affection, and I could not find any detractors. A repeated theme was that he always took the time to listen, that he cared, and that he went out of his way. I've never worked with a CEO I would describe that way. I don't know how widespread that is among their employees, or whether they're mostly pissed at the board for destroying their financial futures, but I'd say this situation and their level of devotion is mind-boggling, almost cult-like.
It's obvious and I don't mean to imply you aren't aware or even that you're wrong, but if I had the earning potential of an OpenAI employee, I probably wouldn't be looking to publicly shit on anyone that might affect that gravy train.
One thing is that at some point you get peer-pressured into signing a petition. When your boss, your coworkers, and your reports are already in it, you've got to be in it too.
That said, it would still take a significant number of people before it gets to the snowballing phase and it is obvious OAI won't come out of this unscathed.
I’m more convinced this whole thing was Adam D’Angelo feeling blindsided by the GPT store which is effectively what his Poe platform does, and then convincing Ilya and the two other board members to agree with some hand wavy explanation to fire him.
And this is why 99% of AI safety is a waste of time.
Skynet and paperclips are clearly not the risk. The risk is a person deciding that they need to use their power to do what they want, and that person having access to powerful AI. What if Adam had a backdoor to GPT-5? He could have gotten Sam arrested by generating "proof" that he was using company funds on prostitutes and drugs.
as a low-level employee, what happens when you don't sign? i've been on the minority side for stuff like that a couple times in my career, and so far there's never been much fallout. i don't deny it could go south sometimes, rather i'd like to understand more so i can predict when/how it would go south next time i'm in the minority again.
It isn't peer pressure, it is just reality. 60 people are not going to be able to run OpenAI especially since those 60 people will have not had a 2 week knowledge transfer window with the missing 700 people.
Given the timing the remaining 32 could plausibly just be the set of people who are hard OOO for the (US) Thanksgiving holiday week -- on airplanes, in cars, or otherwise indisposed and out of touch with 'work news' for a few days.
Imagine going on vacation from your extremely well paid silicon valley AI job, just to find the company gone and that everyone around you quit a few days ago. I'm sure that happened a few times already but it's not a situation I'd like to find myself in.
I would be curious to know under what circumstances the board indicated that "allowing the company to be destroyed would be consistent with the mission." Because there are some circumstances under which it would. The most obvious is the reason why you strap a shotgun to that server, so to speak.
Reading these comments, it's like everyone forgot who Sam Altman is. Eyeball-scanning orb that sells your biometrics for crypto Sam Altman? This guy is a literal fraud, and it's a shame they didn't get rid of him sooner; good riddance. All the employees and investors too. Who knows what the board is really trying to achieve, but if their mission statement is truly the goal, then the investors can get bent, and I'd take the employees up on their offer: let them quit! Easier on the books that way.
I've been part of a development team meltdown, where an entire engineering org left a company within a few months (we didn't all have a standing offer from a single company, so people trickled out as we found new jobs).
In retrospect, it was not a good company and good for us all for leaving, but the internal drama hit such a fever pitch at one point that we were nearing open rebellion and it was perhaps a bit over the top. I did, however, make quite a few close friends that I still have to this day.
Even if this all works out in Sam’s favor, I wonder how a lot of these employees could ever work with Ilya again given the amount of money he cost them in equity.
Unfortunately, shutting down OpenAI would probably be seen as a win for Poe by Adam D’Angelo. He literally stands to lose both Quora and Poe if Sam keeps scaling up OpenAI’s product strategy. It’s one of the biggest conflicts of interest I’ve ever seen in business.
This is how we know that mankind cannot be trusted to develop, harness, and maintain an all-powerful AGI:
absolutely everybody who is working on the project is more interested in the value of their own stock options than what happens in the future when this thing gets loose.
> absolutely everybody who is working on the project is more interested in the value of their own stock options than what happens in the future when this thing gets loose.
Assuming these employees follow Altman to Microsoft, it seems like Microsoft got a pretty good deal on their $10B investment in OpenAI, most of which they haven't paid yet.
How quickly can DJ Elon de Don get organized and swoop in to take them Home, where they belong? Spending their remaining productive mortal lives in an MS bus terminal isn’t as bad as it sounds though. (Younger Me just projectile vomited.)
And labor makes those AIs, even if it's a separate company. Until AI digs up the resources, builds the machines to do that, makes the computers they run on and writes the software there'll always be a labor factor.
Plus, once AI does all those things, it now pretty much falls into the labor bucket.
Value can be 100% contingent on factor X without factor X taking all of the responsibility for it. All value is contingent on the sun, atmosphere, and natural life supporting our existence on earth, but we don’t see comments on every thread about how value is 100% dependent on trees.
Similarly, labor does not deserve 100% of the fruits of the economic system.
Just your daily reminder that there is no value without labor.
Not quite false, but verging on it. Tons of labor with zero knowledge, design, coordination, and skill can yield near zero or zero value. Likewise, there are cases where comparatively less labor can yield outsized value.
The problem isn't labor going out the door. The problem is the knowledge and other factors which go out the door with it. (Something which those seeking AGI seek to change.)
> Not quite false, but verging on it. Tons of labor with zero knowledge, design, coordination, and skill can yield near zero or zero value. Likewise, there are cases where comparatively less labor can yield outsized value.
I don't understand the quip; there's literally no value without labour. Some labour might have outsized value given inputs and costs, and some might yield zero or negative value, but all value depends on labour.
OpenAI staff threaten to quit unless board resigns - https://news.ycombinator.com/item?id=38347868 - Nov 2023 (1182 comments)