Ex-OpenAI star Sutskever shoots for superintelligent AI with new company

Dmytry

Ars Legatus Legionis
10,403
I love how it isn't even consistent with itself (leaving aside that it shows zero understanding of the problem statement): in the detailed steps it says it takes the potato there and returns with it immediately, while the summary says it leaves it there initially.
Now, I'll grant that GPT4 can get it right depending on wording (and just randomly), but I have a moderate suspicion that they special-cased some of the simplified river-crossing puzzles; I had much worse results with a robot, an elevator, and munitions (an armor-piercing round, an incendiary round, and a high-explosive round).

The thing about the river puzzle is that the breakdown occurs halfway into the puzzle, when it transports an item and needs to predict the next token.

So for something like an elevator, it may fail to immediately detect that it is a river-crossing puzzle, and still trigger the river-crossing sequence once it writes the first item in the list.
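
To illustrate with a deliberately dumb toy (nowhere near a real transformer, and entirely invented for this example): a bigram next-token predictor trained only on river-crossing text will funnel any familiar-looking prefix straight into the memorized crossing sequence, regardless of what was actually asked.

```python
# Toy bigram "next-token predictor" trained only on river-crossing text.
# Nothing like a real transformer; made up purely for illustration.
from collections import Counter, defaultdict

training_text = (
    "take the goat across . return alone . take the wolf across . "
    "return with the goat . take the cabbage across . return alone . "
    "take the goat across ."
)

# Count which word tends to follow which.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Greedy choice: the single most frequent follower seen in training.
    return follows[word].most_common(1)[0][0] if follows[word] else "."

# Even an "elevator" puzzle funnels into the memorized river sequence
# the moment the model hits a word it has seen before.
out = ["take"]
for _ in range(12):
    out.append(predict_next(out[-1]))
print(" ".join(out))
```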
 
Last edited:
Upvote
5 (9 / -4)
Honestly, if we take the whole "super intelligent AI" part at face value, that "cracked team" bit demonstrates why the "safe" portion is impossible.

You're announcing your company to the world. You're promising that not only will you be able to develop a system smarter and more dangerous than any human ever born, a system built on rules and order and code-as-law, but that you'll be able to control it...and yet you and your team can't even proofread an announcement to make sure there are no typos and/or that you're using common idioms correctly.

If they believe in their ability to create the AI, their own communications should be terrifying them about their likely inability to control it.

Meh. Close enough. :)
 
Upvote
10 (10 / 0)

panton41

Ars Legatus Legionis
10,881
Subscriptor
What does "safe" mean in this context?

Will it make NSA spying safer?

Will it have a way not to hallucinate?

Will it follow the AI equivalent of Asimov's three laws of robotics?

Don't believe the hype.
I recall back in the day, GURPS Robots had a bit about the Complexity (a game-rule term) of the Three Laws. (GURPS has always had well-researched books, often written by subject-matter experts, with bibliographies that frequently include academic and professional sources.)

IIRC, the Second Law (obey orders) was pretty much a given, and the Third Law (self-preservation) was regarded as trivial to implement as well.

The First Law they split into two parts and said that the Complexity of the "through action" part wasn't that high, but for character-sized robots the "or through inaction" part was like Star Trek science-magic levels of computing, because of the difficulty of calculating all the outcomes.

Of course, that was based on 1995 concepts of future computing power and AI design, but it always stood as a rule of thumb about how difficult the Three Laws would be to implement.
 
Upvote
29 (29 / 0)

WinternetHexplorer

Smack-Fu Master, in training
43
Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify or define since there is no one set type of human intelligence—identifying superintelligence when it arrives may be tricky.


This made me ask the question: What, exactly, are we getting at here? What will this product look like? I'm not seeing what the goal is or how it's any different from anything else out there. Granted, for some things I can kinda visualize the end result: FSD will one day be a vehicle that can fully drive itself, ChatGPT will one day morph into something with general intelligence, and Star Citizen will one day be a videogame. With this, I'm not seeing anything more than an idea, with no real product as an end goal.
 
Upvote
22 (22 / 0)
Innovative tech in the last few years:

Crypto
NFTs
Metaverse
AI

All of which have had very little impact on me. I think language models have more value than the others, but this is like the early internet, when pets.com and AOL were the major players and the real internet emerged from entirely new entities.

If general AI becomes a real thing, I'm not sure it's going to come from the current companies.

I'll be investing in value stocks and broad-market index funds. My returns aren't as high, but I prefer to stick with things I understand. I doubt that Safe Superintelligence will be the next Amazon or Google. Maybe history will prove me wrong, but it just seems unlikely. Frankly, with crass names like Safe Superintelligence or OpenAI, this feels very dystopian, like 1984's Ministry of Truth.
 
Upvote
26 (31 / -5)

lyreOnAHill

Smack-Fu Master, in training
31
Sutskever:
It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.
Also, from the Bloomberg interview:
Sutskever declines to name Safe Superintelligence’s financial backers or disclose how much he’s raised.

Uhhhh…
 
Upvote
39 (39 / 0)

ColdWetDog

Ars Legatus Legionis
12,271
Subscriptor++
I think philosophically it's interesting how people keep reinventing the concept of god to believe in. Simulation theory is basically intelligent design with the labels ripped off, while these new people think they can create a super intelligent being that will magically solve all their problems (ie, god).
As far as I've been able to tell, gods create problems, not solve them.
 
Upvote
35 (37 / -2)

Hadrian's Waller

Wise, Aged Ars Veteran
175
Subscriptor
That’s typically how the story goes…except for one janitor with a murky past and a special set of skills, the only hero who can pull the plug in time to save the world!
Yes. I suspect that SSI, Inc is as much science fiction as any Hollywooden effort.
 
Upvote
12 (12 / 0)

forkspoon

Ars Praetorian
412
Subscriptor++
I think philosophically it's interesting how people keep reinventing the concept of god to believe in. Simulation theory is basically intelligent design with the labels ripped off, while these new people think they can create a super intelligent being that will magically solve all their problems (ie, god).

Agreed, it is fascinating. I usually take it as a desire by adults to have parents again, meaning parents as a young child might see them: super-powerful, all-knowing, benevolent protectors.

Crucially, these beliefs can persist in the face of conflicting or contradictory evidence, such as tyrannical or abusive behaviour. In actual children, such a contradictory belief could be considered adaptive, because we are so utterly, helplessly dependent on our parents at a young age. But there is much less pressure to relinquish such beliefs as an adult. So once that dependence on an abusive caregiver is set, many adults continue to seek a deified "strong hand" to make sense of the world for them later in life (witness Donald Trump).

In an era of ever-rising social desperation, such as the present day, it’s not hard to imagine why someone might dream of a super-human character to bring relief, order, prosperity and justice: in short, to save them.
 
Upvote
17 (19 / -2)

Ten Wind

Ars Tribunus Militum
1,897
Yeah. He really should know better.

A pretty revealing gem from GPT4:

[attached GPT4 exchange not preserved]

Now, here's the interesting bit. Not so long ago, ChatGPT simply tried to reason it out, without name-dropping Russell. It did one step (barber shaves himself -> barber doesn't shave himself) and left it at that. People were making fun of it for not understanding the barber paradox.

The same thing happened with a number of other top logical puzzles and paradoxes (e.g. the river crossing puzzle).

It is clear that OpenAI is putting a lot of effort into faking reasoning by special-casing common paradoxes and logical problems. They're likely special-casing common test problems as well.

They hit the limit of the rudimentary "reasoning" that emerges as a matter of modeling the language, and they're just faking their way past that limit, in ways that actually break whatever little "reasoning" they genuinely had in it.


edit: another example of language modeling:

[attached example not preserved]

Of course, they'll probably special-case this as well, by generating river-crossing non-puzzles for fine-tuning, but that does absolutely nothing to address the fundamental problem: the local optimization of a next-word predictor simply does not lead to any sort of representation of the underlying logic of puzzles (which is too far away, across a "hill" in the optimization landscape).

edit: they may have special-cased the simplified river-crossing puzzle in GPT4o (since it has been made fun of by others), but it's still susceptible to variants, like using an elevator in place of the boat.
But would anyone expect a language model to provide a reasonable response to a logic problem?

Just like with their solutions to mathematical problems, there's no reason that I can see to think the response would be accurate, only that it would read like coherent language.

That's part of why LLMs are so fun: seeing the seemingly coherent absurdities they can come out with.
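
For what it's worth, here's roughly what "modeling the language" means mechanically: a minimal greedy decoding loop, sketched with the Hugging Face transformers library and GPT-2 as a stand-in (an assumption; the models being discussed are far larger, but trained on the same next-token objective). Nothing in the loop checks logic or facts.

```python
# Minimal greedy decoding loop. GPT-2 is a stand-in (assumption); the
# only objective such a model is trained on is next-token likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A farmer needs to cross a river with a wolf, a goat, and a"
ids = tok(prompt, return_tensors="pt").input_ids

for _ in range(40):
    with torch.no_grad():
        logits = model(ids).logits        # scores for every next token
    next_id = logits[0, -1].argmax()      # pick the single likeliest one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))  # reads like language; accuracy is incidental
```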
 
Upvote
5 (10 / -5)

Longmile149

Ars Scholae Palatinae
2,362
Subscriptor
You want Roko’s Basilisk? This is how you get Roko’s Basilisk.
I'm pretty sure the only way anybody is getting Roko's Basilisk is through a combination of mind-altering substances and terminally-online levels of interaction with AI tech fetishists.
 
Upvote
27 (28 / -1)

Fatesrider

Ars Legatus Legionis
21,541
Subscriptor
AGI, last I checked, is insect-like in its capabilities.

As already pointed out, LLMs are a Chinese room: there may be some level of "intelligence" in there, but nothing remotely resembling the impression of intelligence that people get when interacting with them, because they don't "understand" the input or the output.

So yea, grifting.
The way AIs are done, some of what you wrote seems very misplaced, especially asserting that there's any level of "intelligence" in them. They're basically more advanced versions of the Magic 8 Ball, with responses drawn from probabilities of uncertain reliability (see the toy sketch at the end of this post).

They need a lot more features, including the ability to check their answers for veracity (which, given that they use Internet-sourced data, means their answer-checking material is itself often flawed) and "reasoning abilities" (parse that howsoever you will). What we have now is mostly smoke and mirrors: seemingly impressive until it proves unreliable, but sufficiently responsive to make some overly credulous people believe they're "thinking" or sentient.

I agree that people in general don't have any idea how AIs work and conflate a response with "intelligence", but there's no actual intelligence involved in the response to begin with.
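
Here's the toy sketch mentioned above: the "advanced Magic 8 Ball" boils down to sampling from a probability distribution over next tokens. All of the numbers are invented for illustration.

```python
# Toy sketch of the "advanced Magic 8 Ball" idea: at every step the
# model holds a probability distribution over next tokens and samples
# from it. All numbers here are made up for illustration.
import random

next_token_probs = {
    "river": 0.46, "bridge": 0.22, "boat": 0.18,
    "elevator": 0.09, "potato": 0.05,
}

def sample(probs):
    # Draw one token in proportion to its probability.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Same "prompt", different answers from run to run; confidence scores,
# but no notion anywhere of whether an answer is true.
print([sample(next_token_probs) for _ in range(5)])
```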
 
Upvote
23 (26 / -3)

jdale

Ars Legatus Legionis
16,812
Subscriptor
I recall back in the day, GURPS Robots had a bit about the Complexity (a game-rule term) of the Three Laws. (GURPS has always had well-researched books, often written by subject-matter experts, with bibliographies that frequently include academic and professional sources.)

IIRC, the Second Law (obey orders) was pretty much a given, and the Third Law (self-preservation) was regarded as trivial to implement as well.

The First Law they split into two parts and said that the Complexity of the "through action" part wasn't that high, but for character-sized robots the "or through inaction" part was like Star Trek science-magic levels of computing, because of the difficulty of calculating all the outcomes.

Of course, that was based on 1995 concepts of future computing power and AI design, but it always stood as a rule of thumb about how difficult the Three Laws would be to implement.
Of course, Asimov wrote the Three Laws to create narratively interesting problems, not as a solution. They went wrong because the robot minds were not all-knowing or infinitely intelligent.
 
Upvote
36 (36 / 0)

RighteousLudd

Wise, Aged Ars Veteran
182
Oh great, just what we need: another tech bro with delusions of grandeur trying to play god with AI! As if OpenAI wasn't bad enough, now Sutskever thinks he can just waltz off and create "safe superintelligence" like it's no big deal.

Does he not realize the existential threat this poses to humanity? We're barely coping with the AI we have now, and this madman wants to create something that surpasses human intelligence "in the extreme"? It's absolute lunacy!

And of course, he's surrounding himself with other Silicon Valley types who probably think they're saving the world. A "small cracked team" indeed (emphasis on cracked!). They're going to be so focused on their "revolutionary breakthroughs" that they'll completely miss the moment their creation decides to wipe us all out.

But sure, let's just trust that Sutskever can make this "safe". Because tech companies have such a stellar track record of prioritizing safety over profit and progress, right? I'm sure this won't end up like every sci-fi disaster movie ever made.

The worst part is, there's nothing we can do to stop this insanity. These tech elites will keep pushing the boundaries regardless of the consequences, and we'll all be left hoping their so-called "safety measures" actually work. I can't cope with the fact that our future is in the hands of these reckless AI evangelists.
 
Upvote
-11 (7 / -18)
Neurons and transistors are structurally different. Until and unless people grasp this very basic concept, the grifting is going to continue.

Equally important: neural processing is not the same as intelligence. Intelligence requires neural processing, but neural processing is just about rapid reaction to stimuli. Box jellyfish have eyes and neurons and can see things and react to predators and prey. They have no brain and almost certainly nothing that we would consider cognition.

Computer scientists really need to get out of their labs and start talking to real biologists and neuroscientists. But I guess hitting up rich guys for cash is more lucrative.
There is no reason to think there's anything special about neurons that allows for intelligence. Presumably any processing system of sufficient complexity and the correct design can be intelligent.

It should even be possible to emulate any given intelligence with any Turing-complete system (speed notwithstanding lol).
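
As a concrete, if trivial, illustration of that claim: here's Python (a Turing-complete system) emulating a tiny Turing machine that increments a binary number. The machine and its rule table are made up for the example; speed, as noted, notwithstanding.

```python
def run_tm(tape, rules, state="carry", blank="0"):
    # rules maps (state, symbol) -> (write, move, next_state).
    tape, pos = list(tape), len(tape) - 1   # head starts at rightmost cell
    while state != "halt":
        if pos < 0:                          # grow the tape on demand
            tape.insert(0, blank)
            pos = 0
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Binary increment: flip trailing 1s to 0 while carrying, write the
# final 1, then halt.
rules = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
}
print(run_tm("1011", rules))  # 11 + 1 -> "1100" (12)
```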
 
Upvote
16 (24 / -8)

Boopy Boopy

Ars Scholae Palatinae
926
"No other products till then" = "lots of VC funding to ride on for many years"

He may not think he's grifting, but he effectively is. True believers can be some of the worst grifters.
So then the question is why Ars is giving it free coverage. This is a "CEO Claims Their Business Is Great" kind of article.
 
Upvote
-7 (9 / -16)

Maxipad

Ars Tribunus Militum
2,609
"No other products till then" = "lots of VC funding to ride on for many years"

He may not think he's grifting, but he effectively is. True believers can be some of the worst grifters.
He's just plain grifting. And he knows he is. They all are now and are just hoping their technical people come up with something that works.

That would be the bonus, but the venture-capital grift (and getting rich) is the immediate goal. The rest is snake oil.
 
Upvote
17 (21 / -4)
So then the question is why Ars is giving it free coverage. This is a "CEO Claims Their Business Is Great" kind of article.

Because AI is extremely big technology news right now, Sutskever is a major player, and this company (while it may be sketchy) is looking to address a specific issue of great concern to many.

And Ars Technica EXISTS to cover things like that?
 
Last edited:
Upvote
32 (33 / -1)
I love the comments on these articles. "I don't know how this works, I don't have the skills or experience to actually work on it, but let me tell you how bad it is and how it will fail." hahah
If you genuinely and truly can't spot the scam pattern by now, just give us all your money and we'll send you an NFT of the Brooklyn Bridge. Guaranteed to turn a profit in memecoins or some shit, stinky pinky swear.
 
Upvote
37 (40 / -3)