OpenAI CEO Sam Altman has indicated that the development of an artificial “superintelligence” could be realized within “a few thousand days.” This statement was made in a blog post on Monday, where Altman emphasized that humanity is at the “dawn of the Intelligence Age.”
Altman expressed confidence that superintelligence—an artificial intelligence with cognitive capabilities surpassing those of any human and the ability to improve itself—is within reach. Current AIs like ChatGPT analyze extensive data sets to generate responses, whereas a superintelligent AI would possess significantly greater cognitive ability.
Highlighting the importance of continued investment, Altman stressed the need for more chips and computing power to build widespread infrastructure for AI. He argued that making computing resources abundant is crucial to democratizing AI, so that it does not become a resource limited to the wealthy and a potential cause of conflict.
Altman further remarked on the importance of strategic and decisive action, acknowledging the transformative yet complex and high-stakes nature of the Intelligence Age. He underlined the necessity for careful navigation of the accompanying risks.
Coinciding with Altman’s statement, a new tech startup named Safe Superintelligence (SSI) announced that it has secured $1 billion to develop a safe artificial intelligence system. SSI was co-founded by OpenAI’s former chief scientist Ilya Sutskever, Daniel Levy, and former Apple AI chief Daniel Gross, and has stated its commitment to creating a safe superintelligence.
According to a social media post by SSI in June, the company aims to achieve its goal through significant engineering and scientific advances, free of management overhead and product-cycle distractions. Its investors include venture capital firms Andreessen Horowitz, Sequoia Capital, and SV Angel.
SSI co-founder Gross told Reuters that the company sought investors who share its mission of creating a safe superintelligence, and that it plans to focus on research and development before bringing a product to market.
Fox News’ Stephen Sorace contributed to this report.