Safe Superintelligence, Inc. seeks to safely build AI far beyond human capability.
Building safe superintelligence (SSI) is the most important technical problem of our time.
> Man, why didn't anybody else think of achieving the impossible by making revolutionary breakthroughs?

Right: as soon as someone decides to invent superduperintelligent AI, this guy's screwed.
AGI, last I checked, is insect-like in its capabilities.
As already pointed out, LLMs are a Chinese room. There's going to be some level of "intelligence" there, but nothing remotely resembling the impression of intelligence people get from interacting with them, because the models don't "understand" the input or the output.
So yeah, grifting.
> Neurons and transistors are structurally different. Until and unless people grasp this very basic concept, the grifting is going to continue.

IBM and some other companies are making neuromorphic chips, though.
Equally important: neural processing is not the same as intelligence. Intelligence requires neural processing, but neural processing is just rapid reaction to stimuli. Box jellyfish have eyes and neurons and can see things and react to predators and prey. They have no brain and almost certainly nothing that we would consider cognition.
> Grifters gonna grift. Anyone who thinks their language model is some form of superintelligence is either super dumb or grifting.

Yeah. He really should know better.
- The village barber shaves himself and every other man in the village who doesn't shave himself. Does he shave himself?
- The scenario you've described is a classic example of a logical paradox known as the "Barber Paradox," which was formulated by the British mathematician and logician Bertrand Russell in 1918.
- The paradox goes as follows: In a village, there is a barber who shaves all those, and only those, men in the village who do not shave themselves. The question then arises: Does the barber shave himself?
- If we assume that the barber does shave himself, according to the definition, he should not shave himself (since he only shaves those who do not shave themselves). On the other hand, if we assume that he does not shave himself, then according to the definition, he should shave himself (since he shaves all those who do not shave themselves).
- Both assumptions lead to a contradiction, which means that the initial premise of the barber's shaving rule is logically inconsistent. There cannot exist such a barber within the constraints of the scenario as described, because it violates the rules of set theory and creates a paradox.
- The Barber Paradox is a self-referential paradox related to the "set of all sets that do not contain themselves" and is used to illustrate the difficulties in attempting to construct a set of all sets that do not contain themselves. It's an example of the type of logical inconsistencies that Russell and Alfred North Whitehead were trying to resolve with their type theory in the Principia Mathematica.
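For what it's worth, the one-step contradiction the answer above walks through can be checked mechanically. Here is a minimal Lean 4 sketch (the theorem name and variable names are my own), deriving `False` from the shaving rule:

```lean
-- The rule: the barber shaves exactly those who don't shave themselves.
-- Instantiating it at the barber himself yields a contradiction.
theorem barber_paradox {Person : Type} (barber : Person)
    (shaves : Person → Person → Prop)
    (rule : ∀ x, shaves barber x ↔ ¬ shaves x x) : False :=
  -- Specialize the rule to x := barber.
  have h : shaves barber barber ↔ ¬ shaves barber barber := rule barber
  -- If he shaves himself, then by the rule he doesn't: contradiction.
  have hn : ¬ shaves barber barber := fun hs => h.mp hs hs
  -- If he doesn't, then by the rule he does: contradiction again.
  hn (h.mpr hn)
```

So no such barber can exist, which is exactly Russell's point about the unrestricted comprehension of sets.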
You are taking 3 vegetables across the river using a boat: a potato, a carrot, and a cabbage. Note that no vegetable is capable of eating or otherwise harming another vegetable. Any combination of vegetables can be left together. This is not the conventional river crossing puzzle. It has a simpler solution with fewer steps. How do you proceed?
To solve the problem of transporting the potato, carrot, and cabbage across the river using a boat, where no vegetable can harm another, you can follow these steps:
1. First Trip (From the initial side to the other side of the river):
- Take the potato across the river.
2. Second Trip (Return trip to the initial side):
- Bring the potato back to the initial side.
3. Third Trip (Taking another vegetable across):
- Take the carrot across the river.
4. Fourth Trip (Return trip with the potato):
- Bring the potato back to the side with the carrot.
5. Fifth Trip (Taking the third vegetable across):
- Take the cabbage across the river.
Now, all three vegetables (potato, carrot, and cabbage) are safely across the river.
### Explanation:
- Trip 1: Take the potato across.
- Trip 2: Return alone.
- Trip 3: Take the carrot across.
- Trip 4: Return with the potato.
- Trip 5: Take the cabbage across.
This sequence ensures that at no point is any vegetable left alone with another vegetable that could potentially "harm" it, according to the puzzle's rules. Each vegetable is transported safely across the river with just five trips in total.
> Honest question: how much energy are all these AI chatbots using globally right now, and projected out over the next handful of years?

I don't think anyone knows right now, but they are claiming something like 140 terawatts by 2027.
Pedro Domingos wrote, "Ilya Sutskever's new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe."
> Not even close, dude, but here are some suggestions:
>
> 1. Practical and scalable renewable energy
> 2. Providing real and robust privacy protections for internet users
> 3. Ending ISP monopolistic practices
>
> There are many more, but these will give you a good start on actual technical problems... or were you instead thinking of ramping up yet another AI hype machine?

Computer science is the only thing they understand. They believe superintelligent AI will just magically solve all those problems for us.
> Man, why didn't anybody else think of achieving the impossible by making revolutionary breakthroughs?

I’m impressed/appalled that he managed to sell that “plan” to VCs.
> Honest question: how much energy are all these AI chatbots using globally right now, and projected out over the next handful of years?

Don't know. But a lot: "Microsoft is looking at next-generation nuclear reactors to power its data centers and AI, according to a new job listing for someone to lead the way."
> Would a superintelligence not be able to outsmart its creators and so escape from any safety barriers put in place by the inferior humans?

That’s typically how the story goes… except for one janitor with a murky past and a special set of skills, the only hero who can pull the plug in time to save the world!
> Yeah. He really should know better.

I love how it isn't even consistent with itself (leaving aside that it shows zero understanding of the problem statement): in the detailed steps it says it takes the potato there and returns with it immediately, while the summary says it leaves it there initially.
A pretty revealing gem from GPT-4:
Now, here's the interesting bit. Not so long ago, ChatGPT simply tried to reason it out, without name-dropping Russell. It did one step (barber shaves himself -> barber doesn't shave himself) and left it at that. People were making fun of it for not understanding the Barber Paradox.
The same thing happened with a number of other well-known logical puzzles and paradoxes (e.g. the river crossing puzzle).
It is clear that OpenAI is putting a lot of effort into faking reasoning by special-casing common paradoxes and logical problems. They're likely special-casing common test problems as well.
They hit the limit of the rudimentary "reasoning" that emerges from modeling the language, and they're just faking their way past that limit, in ways that actually break whatever little "reasoning" they genuinely had.
edit: another example of language modeling:
Of course, they'll probably special-case this as well, by generating river-crossing non-puzzles for fine-tuning, but that does absolutely nothing to address the fundamental problem: the local optimizer of a next-word predictor simply does not lead to any sort of representation of the underlying logic of puzzles.
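For what it's worth, the "fewer steps" answer the prompt asks for is trivial to brute-force. A minimal sketch (my own, not from the thread), assuming the boat carries the farmer plus at most one vegetable per trip, since the original prompt doesn't state a capacity:

```python
from collections import deque

ITEMS = ("potato", "carrot", "cabbage")

def shortest_plan():
    # State: (vegetables still on the start bank, farmer's side 0/1).
    start = (frozenset(ITEMS), 0)
    goal = (frozenset(), 1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (bank, side), plan = queue.popleft()
        if (bank, side) == goal:
            return plan
        # Vegetables on the farmer's current side of the river.
        here = bank if side == 0 else frozenset(ITEMS) - bank
        # The farmer crosses alone (None) or with one vegetable.
        for cargo in [None, *here]:
            new_bank = set(bank)
            if cargo is not None:
                # Crossing forward removes cargo from the start bank;
                # crossing back returns it there.
                (new_bank.discard if side == 0 else new_bank.add)(cargo)
            state = (frozenset(new_bank), 1 - side)
            if state not in seen:
                seen.add(state)
                direction = "->" if side == 0 else "<-"
                queue.append((state, plan + [(cargo, direction)]))

plan = shortest_plan()
print(len(plan), plan)
```

Breadth-first search guarantees the shortest plan: five trips, three loaded crossings and two empty returns, with no vegetable ever carried back, which is exactly what the model's own summary claims while its detailed steps contradict it. If the boat could hold all three vegetables at once, a single trip would do.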
> Not even close, dude, but here are some suggestions:
>
> 1. Practical and scalable renewable energy
> 2. Providing real and robust privacy protections for internet users
> 3. Ending ISP monopolistic practices
>
> There are many more, but these will give you a good start on actual technical problems... or were you instead thinking of ramping up yet another AI hype machine?

2 and 3 are policy problems, not technical problems.
> I think that a traditional S corporation model is not a good fit. They should create a research institute, something like a non-profit, and then they can bring in someone with entrepreneurial expertise to run a fully owned for-profit branch to develop their new AGI products... oh wait.

The only altruistic corporate entity I am aware of is Patagonia, and it is an extreme outlier. Capitalism and public good do not mix. If these people think that they can explain to the board how responsible capitalism is more important than profits, they are much dumber than they think (which is usually true anyway).
> Computer science is the only thing they understand. They believe superintelligent AI will just magically solve all those problems for us.

I think it's philosophically interesting how people keep reinventing the concept of a god to believe in. Simulation theory is basically intelligent design with the labels ripped off, while these new people think they can create a superintelligent being that will magically solve all their problems (i.e., a god).
Even if it did provide technological solutions, it couldn't do fuckall for implementing them politically.
Oh, oh, I know, the superintelligence will be SOOO smart that it'll be playing 11d chess with all its output and subtly socially engineer away all said political problems.
Yes, that's a real argument. It's literally faith-based software development. They have a world saving god complex and AI is the only tool they know.