On May 30, the non-profit Center for AI Safety published a joint statement on its official website declaring that mitigating the risk of human extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.
The open statement has so far been signed by 350 AI experts, including OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, as well as Turing Award winners Geoffrey Hinton and Yoshua Bengio.
"In the long run, AI will transform every aspect of our civilisation. Everything will change. This is more than just another industrial revolution. This is something new that will eventually transcend humankind and even biology," said Jüergen Schmidhuber, the "father of deep learning artificial intelligence" and scientific director of the Swiss AI Lab IDSIA, in an interview to National Business Daily.
"Every 5 years, compute is getting 10 times cheaper. The naive extrapolation of this exponential trend predicts that the 21st century will see cheap computers with a thousand times the raw computational power of all human brains combined. And soon there will be millions and billions and trillions of such devices. Almost all of intelligence will be outside of human brains," he further explained.
The LSTM (Long Short-Term Memory) network, proposed by Schmidhuber in 1997, is now widely used in applications such as Google Translate, Apple Siri, and Amazon Alexa, and is among the most commercially successful deep learning technologies. The linearized self-attention mechanism he proposed is also a precursor of the Transformer architecture that underlies ChatGPT.
More broadly, much of the public's fear of AI stems from the widespread claim that "AI will replace human labor."
"Indeed, AI will have temporary negative effects on many jobs, and the impact will happen relatively quickly, so we need a safety net to minimize the pain of the transition," Yaser S. Abu-Mostafa, Professor of Electrical Engineering and Computer Science at the California Institute of Technology and the founding program chairman of the annual conference on Neural Information Processing Systems, said to NBD.
However, a recent Deutsche Bank report argued that history shows AI will ultimately create rather than destroy jobs, and that the AI wave will generate more new job opportunities than the positions it replaces.
Regarding the role of AI in human society, Abu-Mostafa noted, "AI can and should remain subservient to humans, and will be greatly beneficial to us in the long run. In medicine for example, AI is making revolutionary advances that will have profound positive impact on our life and on our quality of life."
Schmidhuber also firmly believes that individuals and organizations will profit immensely from AI in many ways. "My old statement from the 1980s is still valid: it's easy to predict which jobs will disappear, but hard to predict which new jobs will be created. Remarkably, countries with many robots per capita have relatively low unemployment rates, because many new jobs were invented. 30 years ago, who would have predicted all those people making money as video bloggers?" Schmidhuber told NBD.
Francesca Rossi, an IBM Fellow and global leader in AI ethics, also told NBD that "AI should aim to enhance rather than replace humans."
In Rossi's view, concerns about AI often stem from a lack of understanding of what happens inside the "black box". "The development and application of AI systems must be rooted in ethics, which means prioritizing responsible training, collaboration, and supervision. As AI technology becomes more widely adopted, people still have reservations about it, but its advantages are obvious. Only by placing ethics at the core of AI development can society learn to trust it."
On this point, Abu-Mostafa said, "We should first consider whether the product safety and quality control measures that currently apply to various products can also be applied to AI products. In addition, we need an agency similar to the FDA to approve AI products and ensure that they are safe, unbiased, and that their 'side effects' (such as privacy breaches) are within acceptable standards."