🚨🚀 Check out this exclusive interview (https://lnkd.in/deSqWuB5) with OpenAI co-founder Ilya Sutskever! Here’s the scoop on his new venture, team, and goals. 👇
💡 Ilya's latest startup, Safe Superintelligence (SSI), has set up shop in Silicon Valley and Tel Aviv – not surprising, given its founders' roots in Israel.
🧠 Mission: Build a safe and powerful AI system through pure research. No short-term product sales here – just focused innovation. None of the product pressures facing OpenAI, Google, or Anthropic – only laser-focused breakthroughs!
✨ "Our first product will be a safe superintelligence. Nothing else until it's ready," says Ilya. Talk about commitment!
💸 As for the backers? Shh… That's still top secret.
🔒 "By 'safe,' we mean safe as in nuclear safety, not as in 'trust and safety,'" Ilya explains.
SSI's dream team includes:
1. Ilya Sutskever himself.
2. Former Apple AI lead Daniel Gross.
3. Daniel Levy, a former OpenAI researcher with a stellar reputation.
👥 When asked about his ties with Sam Altman, Ilya simply says Sam knows about the project.
🤫 Years of deep thinking about AI safety have led to some promising approaches at SSI, but the team is keeping the details hush-hush for now.
🌟 While language models are key players in today's AI scene, SSI aims for something even more powerful. Stay tuned!