Ilya Sutskever, co-founder and former chief scientist of OpenAI, has raised $1 billion in funding for his new artificial intelligence venture, Safe Superintelligence (SSI). The startup, based in Palo Alto, California, and Tel Aviv, Israel, is focused on developing AI systems with a strong emphasis on safety and alignment with human values.
In an announcement on X (formerly Twitter), Sutskever stated that SSI’s mission is to pursue "safe superintelligence" while avoiding distractions related to traditional management and product cycles. The venture’s singular focus on AI safety sets it apart in a rapidly evolving industry.
The funding round was backed by top venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, along with NFDG, a partnership co-run by SSI executive Daniel Gross.
Sutskever co-founded SSI with Daniel Gross, formerly a leader in AI at Apple, and Daniel Levy, another ex-OpenAI colleague. Together, they aim to build AI systems that prioritize safety, ensuring alignment with human values, without the pressures of short-term commercial interests.
This move follows Sutskever’s departure from OpenAI in May 2024, in the aftermath of internal conflict over the controversial removal of OpenAI CEO Sam Altman in November 2023. Sutskever’s focus on AI safety reportedly clashed with Altman’s vision for rapid AI advancement. After Altman was swiftly reinstated following employee protests, Sutskever left OpenAI and shifted his focus to founding SSI.
SSI’s singular focus on safe AI development positions it as a distinctive player in the evolving AI landscape, free to pursue long-term alignment goals without commercial pressure.
For more information, visit SSI's official site or follow Ilya Sutskever's updates on X.