Ex-OpenAI Leader Raises US$1 Billion for Safety-focused AI Startup

Safe Superintelligence (SSI), a startup co-founded by former OpenAI Chief Scientist Ilya Sutskever, has secured US$1 billion in seed funding to develop advanced artificial intelligence (AI) systems focused on safety.

The funds will be used to acquire computing power and recruit top engineering and research talent. Currently, SSI has 10 employees and operates out of offices in Palo Alto, California, and Tel Aviv, Israel.

SSI’s mission is to create safe super-intelligent AI systems that won’t cause harm to humans.

SSI co-founder Daniel Gross emphasized that the company’s singular focus on long-term AI safety will allow it to operate without the pressure of immediate profitability. “It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” he told Reuters on Wednesday (September 4).

Sutskever left OpenAI after internal disagreements that saw CEO Sam Altman briefly ousted and quickly reinstated. Following the departures of Sutskever and fellow team leader Jan Leike, OpenAI disbanded its Superalignment team.

Despite a slowdown in AI investment due to concerns about long-term profitability, SSI’s successful funding round underscores the continued willingness of some investors to back projects led by well-known technologists.

One of SSI’s priorities is to hire a small, highly trusted team of engineers and researchers. Gross said the hiring process focuses not just on technical ability, but also on character and alignment with the company’s culture.

Securities Disclosure: I, Giann Liguid, hold no direct investment interest in any company mentioned in this article.
