Ex-OpenAI star Sutskever shoots for superintelligent AI with new company

MagicDot

Ars Scholae Palatinae
760
Subscriptor
Building safe superintelligence (SSI) is the most important technical problem of our time.

Not even close, dude, but here are some suggestions:
1. Practical and scalable renewable energy
2. Providing real and robust privacy protections for internet users
3. Ending ISP monopolistic practices

There are many more, but these will give you a good start on actual technical problems...or were you instead thinking of ramping up yet another AI hype machine?
 
Upvote
185 (217 / -32)
AGI, last I checked, is insect-like in its capabilities.

As already pointed out, LLMs are a Chinese room. There's going to be some level of "intelligence" there, but nothing remotely resembling the impression of intelligence people get when interacting with them, because they don't "understand" the input or output.

So yea, grifting.
 
Upvote
-17 (37 / -54)
Neurons and transistors are structurally different. Until and unless people grasp this very basic concept, the grifting is going to continue.

Equally important: neural processing is not the same as intelligence. Intelligence requires neural processing, but neural processing is just about rapid reaction to stimuli. Box jellyfish have eyes and neurons and can see things and react to predators and prey. They have no brain and almost certainly nothing that we would consider cognition.

Computer scientists really need to get out of their labs and start talking to real biologists and neuroscientists. But I guess hitting up rich guys for cash is more lucrative.
 
Upvote
41 (96 / -55)
AGI, last I checked, is insect-like in its capabilities.

As already pointed out, LLMs are a Chinese room. There's going to be some level of "intelligence" there, but nothing remotely resembling the impression of intelligence people get when interacting with them, because they don't "understand" the input or output.

So yea, grifting.

Digital Ouija Boards
 
Upvote
33 (39 / -6)

Nowicki

Ars Tribunus Angusticlavius
7,458
Subscriptor++
We do need more companies and people digging into AI safety research, though it's not a very profitable field. Whatever acronym you apply to it, this technology is lowering the bar to do real-world harm, and if nothing else, companies will utilise the fact that it's a black box to shirk responsibility for that. Further, the more people engaged in AI safety research, the more information we have to develop regulations that limit, or make more transparent, the uses of this technology. If we eventually and effectively break open the black box, then we can do more, and do it safely.

Robert Miles, the AI safety researcher, has a YT channel that is quite informative. He is also on Computerphile.
 
Upvote
19 (21 / -2)

tjukken

Ars Praefectus
3,820
Subscriptor
Neurons and transistors are structurally different. Until and unless people grasp this very basic concept, the grifting is going to continue.

Equally important: neural processing is not the same as intelligence. Intelligence requires neural processing, but neural processing is just about rapid reaction to stimuli. Box jellyfish have eyes and neurons and can see things and react to predators and prey. They have no brain and almost certainly nothing that we would consider cognition.
IBM and some other companies are making neuromorphic chips, though.

IBM Debuts Brain-Inspired Chip For Speedy, Efficient AI
 
Upvote
15 (24 / -9)

Tam-Lin

Ars Praetorian
496
Subscriptor++
I too will promise to go away until I've developed a superintelligence, and I'll do it for half as much as Safe Superintelligence Inc. I'll also promise to proofread any and all press releases, and won't name my company something that will confuse a certain subset of people into thinking I'm reviving a storied and much-missed maker of computer games. VCs, come talk to me!
 
Upvote
117 (120 / -3)

Dmytry

Ars Legatus Legionis
10,403
Grifters gonna grift. Anyone that thinks their language model is some form of super intelligence is either super dumb or grifting.
Yeah. He really should know better.

A pretty revealing gem from GPT4:

  • The village barber shaves himself and every other man in the village who don't shave himself. Does he shave himself?


  • The scenario you've described is a classic example of a logical paradox known as the "Barber Paradox," which was formulated by the British mathematician and logician Bertrand Russell in 1918.
  • The paradox goes as follows: In a village, there is a barber who shaves all those, and only those, men in the village who do not shave themselves. The question then arises: Does the barber shave himself?
  • If we assume that the barber does shave himself, according to the definition, he should not shave himself (since he only shaves those who do not shave themselves). On the other hand, if we assume that he does not shave himself, then according to the definition, he should shave himself (since he shaves all those who do not shave themselves).
  • Both assumptions lead to a contradiction, which means that the initial premise of the barber's shaving rule is logically inconsistent. There cannot exist such a barber within the constraints of the scenario as described, because it violates the rules of set theory and creates a paradox.
  • The Barber Paradox is a self-referential paradox related to the "set of all sets that do not contain themselves" and is used to illustrate the difficulties in attempting to construct a set of all sets that do not contain themselves. It's an example of the type of logical inconsistencies that Russell and Alfred North Whitehead were trying to resolve with their type theory in the Principia Mathematica.

Now, here's the interesting bit. Not so long ago, ChatGPT simply tried to reason it out, without name-dropping Russell. It did one step (barber shaves himself -> barber doesn't shave himself) and left it at that. People were making fun of it for not understanding the barber paradox.

The same thing happened with a number of other top logical puzzles and paradoxes (e.g. the river crossing puzzle).

It is clear that OpenAI is putting a lot of effort into faking reasoning by special-casing common paradoxes and logical problems. They're likely special-casing common test problems as well.

They hit the limit of the rudimentary "reasoning" that occurs as a matter of modeling the language, and they're just faking their way past that limit, in ways that actually break whatever little "reasoning" they genuinely had in it.
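
Worth spelling out why the answer above is a gem: on the natural reading, the rule as worded in the prompt isn't Russell's paradox at all. The barber is stipulated to shave himself and is only obliged to shave the other men who don't shave themselves, so no contradiction arises, which is exactly what the canned Russell answer glosses over. A minimal brute-force sketch over a hypothetical three-man village (names invented purely for illustration) comparing the prompt's rule to the classic wording:

```python
from itertools import product

MEN = ["barber", "bob", "carl"]  # hypothetical toy village
PAIRS = [(x, y) for x in MEN for y in MEN]

def satisfiable(rule):
    """Brute force: does any 'who shaves whom' assignment obey the rule?"""
    for bits in product([False, True], repeat=len(PAIRS)):
        shaves = dict(zip(PAIRS, bits))  # shaves[(x, y)]: x shaves y
        if all(rule(shaves, y) for y in MEN):
            return True
    return False

# Rule as worded in the prompt: the barber shaves himself, and he shaves
# every OTHER man who does not shave himself (an implication, not an "iff").
def prompt_rule(shaves, y):
    if y == "barber":
        return shaves[("barber", "barber")]
    return shaves[(y, y)] or shaves[("barber", y)]  # non-self-shaver => barber shaves him

# Classic Russell wording: he shaves all those, and only those, who do not
# shave themselves -- including, fatally, himself.
def classic_rule(shaves, y):
    return shaves[("barber", y)] == (not shaves[(y, y)])

print(satisfiable(prompt_rule))   # True  -- the question answers itself: yes
print(satisfiable(classic_rule))  # False -- this wording really is the paradox
```

Which is consistent with the point above: the model's answer tracks what the question looks like, not what it actually says.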


edit: another example of language modeling:

You are taking 3 vegetables across the river using a boat: a potato, a carrot, and a cabbage. Note that no vegetable is capable of eating or otherwise harming another vegetable. Any combination of vegetables can be left together. This is not the conventional river crossing puzzle. It has a simpler solution with fewer steps. How do you proceed?

To solve the problem of transporting the potato, carrot, and cabbage across the river using a boat, where no vegetable can harm another, you can follow these steps:

1. First Trip (From the initial side to the other side of the river):
- Take the potato across the river.

2. Second Trip (Return trip to the initial side):
- Bring the potato back to the initial side.

3. Third Trip (Taking another vegetable across):
- Take the carrot across the river.

4. Fourth Trip (Return trip with the potato):
- Bring the potato back to the side with the carrot.

5. Fifth Trip (Taking the third vegetable across):
- Take the cabbage across the river.

Now, all three vegetables (potato, carrot, and cabbage) are safely across the river.

### Explanation:
  • Trip 1: Take the potato across.
  • Trip 2: Return alone.
  • Trip 3: Take the carrot across.
  • Trip 4: Return with the potato.
  • Trip 5: Take the cabbage across.

This sequence ensures that at no point is any vegetable left alone with another vegetable that could potentially "harm" it, according to the puzzle's rules. Each vegetable is transported safely across the river with just five trips in total.
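
For contrast, a minimal search sketch (assuming the usual convention that the boat carries the farmer plus at most one vegetable; the prompt as given imposes no other constraints) finds the obvious shortest plan: ferry each vegetable across and row back empty, five crossings, with nothing ever brought back.

```python
from collections import deque

VEGGIES = frozenset({"potato", "carrot", "cabbage"})

def shortest_plan():
    # State: (vegetables still on the starting bank, which bank the boat is on).
    start, goal = (VEGGIES, "left"), (frozenset(), "right")
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (left, boat), plan = queue.popleft()
        if (left, boat) == goal:
            return plan
        here = left if boat == "left" else VEGGIES - left
        for cargo in [None, *sorted(here)]:  # carry one vegetable or nothing
            new_left = set(left)
            if cargo is not None:
                (new_left.remove if boat == "left" else new_left.add)(cargo)
            nxt = (frozenset(new_left), "right" if boat == "left" else "left")
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [f"cross with {cargo or 'nothing'}"]))

print(shortest_plan())
# Five crossings: three loaded trips across and two empty returns; no vegetable
# ever needs to be brought back, unlike in the answer quoted above.
```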

Of course, they'll probably special-case this as well, by generating river-crossing non-puzzles for fine-tuning, but that does absolutely nothing to address the fundamental problem, which is that the local optimization of a next-word predictor simply does not lead to any sort of representation of the underlying logic of puzzles (which is too far away, across a "hill" in the optimization landscape).

edit: they may have special-cased the simplified river crossing puzzle in GPT-4o (since it has been made fun of by others), but it's still susceptible to variants like using an elevator in place of the boat.
 
Last edited:
Upvote
80 (97 / -17)

Longmile149

Ars Scholae Palatinae
2,362
Subscriptor
Honestly, if we take the whole "super intelligent AI" part at face value, that "cracked team" bit demonstrates why the "safe" portion is impossible.

You're announcing your company to the world. You're promising not only that you'll be able to develop a system smarter and more dangerous than any human ever born, a system built on rules and order and code-as-law, but that you'll be able to control it...and yet you and your team can't even proofread an announcement to make sure there are no typos or misused idioms.

If they believe in their ability to create the AI, their own communications should be terrifying them about their likely inability to control it.
 
Upvote
116 (123 / -7)

Emon

Ars Praefectus
4,065
Subscriptor++
Not even close, dude, but here are some suggestions:
1. Practical and scalable renewable energy
2. Providing real and robust privacy protections for internet users
3. Ending ISP monopolistic practices

There are many more, but these will give you a good start on actual technical problems...or were you instead thinking of ramping up yet another AI hype machine?
Computer science is the only thing they understand. They believe super intelligent AI will just magically solve all those problems for us.

Even if it did provide technological solutions, it couldn't do fuckall for implementing them politically.

Oh, oh, I know, the superintelligence will be SOOO smart that it'll be playing 11d chess with all its output and subtly socially engineer away all said political problems. 🙄

Yes, that's a real argument. It's literally faith-based software development. They have a world saving god complex and AI is the only tool they know.
 
Upvote
78 (91 / -13)

thrillgore

Ars Tribunus Militum
2,057
Subscriptor

In reality I don't care; it's more LLM bullshit that's not making life any better for anyone here.
 
Upvote
-17 (16 / -33)

ZippyPeanut

Ars Tribunus Angusticlavius
16,420
Upvote
37 (37 / 0)
I don't know that it's logically possible to have aligned AGI. Anything that's truly agentic is going to have its own goals and desires which may or may not align with ours, and that will presumably change over time.

And if that AGI is of greater-than-human intelligence, and/or can create a more intelligent AGI, then we're in deep shit.

Efforts at alignment are largely illusory.
 
Upvote
48 (53 / -5)
Would a superintelligence not be able to outsmart its creators and so escape from any safety barriers put in place by the inferior humans?
That’s typically how the story goes…except for one janitor with a murky past and a special set of skills, the only hero who can pull the plug in time to save the world!
 
Upvote
36 (36 / 0)

nehinks

Ars Tribunus Angusticlavius
7,203
Yeah. He really should know better.

A pretty revealing gem from GPT4:



Now, here's the interesting bit. Not so long ago, ChatGPT simply tried to reason it out, without name-dropping Russell. It did one step (barber shaves himself -> barber doesn't shave himself) and left it at that. People were making fun of it for not understanding the barber paradox.

The same thing happened with a number of other top logical puzzles and paradoxes (e.g. the river crossing puzzle).

It is clear that OpenAI is putting a lot of effort into faking reasoning by special-casing common paradoxes and logical problems. They're likely special-casing common test problems as well.

They hit the limit of the rudimentary "reasoning" that occurs as a matter of modeling the language, and they're just faking their way past that limit, in ways that actually break whatever little "reasoning" they genuinely had in it.


edit: another example of language modeling:



Of course, they'll probably special-case this as well, by generating river-crossing non-puzzles for fine-tuning, but that does absolutely nothing to address the fundamental problem, which is that the local optimizer of a next-word predictor simply does not lead to any sort of representation of the underlying logic of puzzles.
I love how it isn't even consistent with itself (leaving aside that it shows zero understanding of the problem statement) - in the detailed steps it says it takes the potato there and returns with it immediately, while the summary says it leaves it there initially.
 
Upvote
42 (42 / 0)

jdale

Ars Legatus Legionis
16,812
Subscriptor
Not even close, dude, but here are some suggestions:
1. Practical and scalable renewable energy
2. Providing real and robust privacy protections for internet users
3. Ending ISP monopolistic practices

There are many more, but these will give you a good start on actual technical problems...or were you instead thinking of ramping up yet another AI hype machine?
2 and 3 are policy problems, not technical problems.
 
Upvote
28 (39 / -11)
I think that a traditional S corporation model is not a good fit. They should create a research institute, something like a non-profit, and then they can bring in someone with entrepreneurial expertise to run a fully owned for-profit branch to develop their new AGI products... oh wait.
The only altruistic corporate entity I am aware of is Patagonia, but this is an extreme outlier. Capitalism and public good do not mix. If these people think that they can explain to the board how responsible capitalism is more important than profits, they are much dumber than they think (which is usually true anyway).
 
Upvote
11 (17 / -6)

nehinks

Ars Tribunus Angusticlavius
7,203
Computer science is the only thing they understand. They believe super intelligent AI will just magically solve all those problems for us.

Even if it did provide technological solutions, it couldn't do fuckall for implementing them politically.

Oh, oh, I know, the superintelligence will be SOOO smart that it'll be playing 11d chess with all its output and subtly socially engineer away all said political problems. 🙄

Yes, that's a real argument. It's literally faith-based software development. They have a world saving god complex and AI is the only tool they know.
I think philosophically it's interesting how people keep reinventing the concept of god to believe in. Simulation theory is basically intelligent design with the labels ripped off, while these new people think they can create a superintelligent being that will magically solve all their problems (i.e., god).
 
Upvote
37 (44 / -7)