OpenAI’s Former Chief Scientist Raises $1B for Safe Superintelligence Startup
How much of a game-changer is this? Safe Superintelligence (SSI), co-founded by OpenAI's former chief scientist Ilya Sutskever, former OpenAI researcher Daniel Levy, and Daniel Gross, who previously led AI efforts at Apple, announced on social media: "We've started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence." SSI plans to reach its goal through "revolutionary engineering and scientific breakthroughs," free of the distraction of "management overhead or product cycles."
In a fast-moving AI environment where products are introduced at breakneck speed, the long-term safety of the underlying technology often seems to sit at the bottom of the proverbial list of objectives. By contrast, SSI identifies safe superintelligence as the most important technical problem of our time.
This type of investor is essential to the company. Gross explained that the founders want to be surrounded by investors who understand, respect, and support their mission: to make a straight shot to safe superintelligence and to spend a couple of years on R&D before bringing a product to market. Investors include venture capital firms such as Andreessen Horowitz, Sequoia Capital, and SV Angel. The company is estimated to be valued at $5 billion, sources told Reuters.
SSI currently has 10 employees. Plans for the $1 billion in funding include recruiting top AI talent for its offices in Palo Alto, California, and Tel Aviv, Israel. The co-founders say they plan to advance capabilities as fast as possible while ensuring safety always remains ahead. "This way, we can scale in peace."