On November 20, Microsoft announced that OpenAI co-founders Sam Altman and Greg Brockman would join the company to lead a new advanced AI research team.
Meanwhile, OpenAI co-founder Ilya Sutskever announced that Emmett Shear, co-founder of the Amazon-owned game-streaming platform Twitch, would take over as the company’s interim CEO. In a single weekend, OpenAI’s leadership changed hands several times, and the “internal fight” that shocked the world now appears to have settled.
According to earlier reports, Sutskever was the driving force behind this upheaval: Altman was ousted because the two disagreed over AI safety, the pace of technology development, and the company’s commercialization.
NBD found that as early as June this year, when Altman and Sutskever appeared on stage together at Tel Aviv University, they were asked about the risks of AI, and their answers differed in subtle ways. Sutskever acknowledged and emphasized those risks, and admitted he was uncertain about how they would play out. Altman also agreed that AI could go out of control and that new socio-economic contracts would be needed to rebalance its effects, but he was more inclined to dwell on AI’s optimistic side.
Analysis in foreign media suggests that the conflict between the two cannot simply be understood as a debate between “AI accelerationists” and “AI doomers”. The root of their disagreement lies in how they understand what they are creating and how to achieve AGI (artificial general intelligence). And the fact that the conflict erupted at this particular moment is likely because AI capabilities have reached a critical point.
Sutskever’s worldview sowed the seeds of OpenAI’s upheaval
According to an earlier Bloomberg report, OpenAI co-founder and chief scientist Sutskever drove the change: Altman was removed because he and Sutskever disagreed over AI safety, the speed of technology development, and the company’s commercialization.
Among these, AI safety is the key phrase. Sutskever has repeatedly raised the issue: he is deeply worried about AI’s negative impact on society, and he stresses the importance of “AI alignment” (training models, for instance with human feedback, so that they follow human values).
The seeds of this thinking may have been planted early in Sutskever’s academic career.
Sutskever was born in 1985 in Nizhny Novgorod (then called Gorky), Russia. From the age of 5 to 16 he lived in Jerusalem, Israel, and in 2002 his family immigrated to Toronto, Canada.
At the University of Toronto, Sutskever was supervised by the world-renowned AI scientist Geoffrey Hinton, one of the three winners of the 2018 Turing Award and known in the industry as a “godfather of AI”. In 2012, Sutskever co-created the AlexNet neural network with Alex Krizhevsky under Hinton’s guidance; its landmark ImageNet result set off the global deep-learning craze that underpins today’s large models.
In 2013, the 28-year-old Sutskever followed Geoffrey Hinton into Google’s AI division, where the two were hailed by Google as “rare talents”. He is also one of the authors of Google’s AlphaGo paper, and he worked at the leading edge of the field.
When OpenAI was still a non-profit organization, Musk was closer to Sutskever than Altman was. In 2015, Musk, Altman and others jointly founded the AI laboratory and poached Sutskever from Google; Musk has described recruiting Sutskever as the key to OpenAI’s success.
It is worth noting that Hinton caused an uproar a few months ago with his view that super-intelligent AI could destroy humanity, and he subsequently resigned from Google; Elon Musk likewise believes AI may pose a threat to humanity, and he has long insisted on open-sourcing AI code. As early as 2018, Musk parted ways with OpenAI over his concerns about AI safety and disagreements about commercialization.
Throughout Sutskever’s academic and professional career, the people closest to him have kept a wary eye on super-intelligent AI, and the ideas of Hinton and Musk may to some extent have shaped his own understanding of it.
Signs of the disagreement can be traced back five months
On November 7, at OpenAI’s first developer conference, Altman announced GPTs, a tool that lets users build customized versions of ChatGPT, a move widely read as a step toward an Apple-style commercial ecosystem. Altman’s removal came so soon after this major release that the launch of GPTs was widely seen as the fuse of the personnel earthquake.
As for Altman’s attitude toward AI, in an interview with the New York Times he described himself as a moderate in the AI debate: “I believe this will be the most important and beneficial technology humanity has ever invented. I also believe that if we are not careful with it, it could bring disastrous consequences, and so we have to handle it carefully.”
Although Altman has consistently called for regulation, the string of moves OpenAI has made since ChatGPT launched in November last year shows that he has pressed ahead with building a commercial ecosystem and spreading GPT’s capabilities as quickly as possible.
Sutskever’s position appears to be different: he cares more about preventing AGI from going out of control. In July this year, Sutskever and OpenAI researcher Jan Leike formed a new team, known as the “Superalignment” team, dedicated to technical methods for keeping AI systems in check, and said OpenAI would devote one-fifth of its computing resources to addressing the threat posed by AGI.
NBD noticed that in June this year, when Altman and Sutskever shared a stage at Tel Aviv University, the host raised three AI risks: the impact on jobs, hackers gaining control of superintelligence, and systems going out of control. Their answers differed in subtle ways.
Sutskever acknowledged these risks and said, “Although (AI) will create new job opportunities, economic uncertainty will last for a long time. I am not sure that this will happen. This technology can be used for amazing applications, such as curing diseases, but it could also create diseases worse than anything that came before. We need appropriate structures to control how this technology is used.”
Altman also agreed that AI could go out of control and that different socio-economic contracts would be needed to rebalance things, but he was more inclined to stress AI’s optimistic side: “On the economy, I find it difficult to predict future developments… In the short term, things look good; we will see significant productivity growth. In the long term, I think these systems will handle more and more complex tasks and job categories.”
Is AI capability approaching a tipping point?
In recent months there has been fierce debate over AI risk, with the three Turing Award winners Hinton, Yoshua Bengio and Yann LeCun trading multiple rounds of arguments on social media. The media has called this a debate between “AI accelerationists” and “AI doomers”.
Another analysis, however, holds that the conflict between Altman and Sutskever cannot simply be read as accelerationists versus doomers. The newsletter The Algorithmic Bridge argues that the two are not opposed in that way but are in fact very similar: both believe AGI will ultimately benefit humanity, which is the very premise on which they co-founded OpenAI.
On this reading, the source of the disagreement is how to understand what they are creating and how to get to AGI. Altman believes AGI is best reached by iteratively shipping products, raising funds and building ever more powerful computers; Sutskever believes it is best reached through deeper research into areas such as “AI alignment”, while investing fewer resources in consumer-facing AI products.
From Sutskever’s point of view, wanting OpenAI to stay true to its founding mission of creating AGI for the well-being of humanity is understandable.
Why did the disagreement break out now, rather than when GPT-3 or GPT-4 was being trained? It is not that the differences did not exist then, but that AI capabilities had not yet advanced far enough to force action. The analysis suggests this is likely because the current level of AI has reached a critical point.
On November 16, at the APEC summit, Altman said that during his time at OpenAI he had witnessed the frontier of knowledge pushed forward four times, most recently just a few weeks earlier. And in October this year, Sutskever told MIT Technology Review that ChatGPT might be conscious.
The Algorithmic Bridge believes the implication hidden behind all this is that the OpenAI board thinks the company is close to achieving AGI, and that this is the root of the conflict.
Altman recently acknowledged that OpenAI is developing GPT-5, whose ultimate goal is a super AI on a par with the human brain. There is no further news about GPT-5 yet, but outside observers speculate that it might be able to modify its own program and data while “not loving humans”. New York University professor Gary Marcus said, “They must have seen the danger in what Sam did.”
The New York Times writes that Altman’s departure is likely to fuel a culture war within the AI industry: a war between the camp that believes AI should be allowed to advance faster and the camp that believes development should be slowed to prevent potentially catastrophic harm. Who is right? Perhaps only time will tell.