The wave of artificial intelligence (AI) is reshaping the world at unprecedented speed, and the accompanying safety and ethical issues are drawing growing attention. How to balance innovation against risk has become a focal point for the industry.

At the end of August, AI giants OpenAI and Anthropic reached a historic agreement with the U.S. AI Safety Institute, allowing the U.S. government to conduct rigorous safety testing of their major new AI models before release.

OpenAI CEO Sam Altman stated that the agreement is a "historically important step," reflecting the industry's commitment to safety and responsible innovation. Elizabeth Kelly, director of the U.S. AI Safety Institute, said that this is just the beginning, but it is an important milestone in managing the responsible future of AI.

Prompted by the agreement between these two leading AI startups, and by broader questions about safety and the future development of AI, National Business Daily (NBD) conducted an exclusive interview with Turing Award laureate and Canadian computer scientist Yoshua Bengio, widely known as a "godfather" of AI, for an in-depth interpretation.

Yoshua Bengio is one of the pioneers of deep learning. Together with Geoffrey Hinton and Yann LeCun, he received the 2018 Turing Award (often called "the Nobel Prize of Computing"), and the three are hailed in the industry as the "deep learning trio."

Bengio not only has a profound understanding of cutting-edge technology but also offers distinctive insights into the direction of AI development, its potential risks, and its social impact. In August 2023, Bengio was appointed a member of the United Nations Scientific Advisory Board on Technological Advancement, and he has served as the AI chair at the Canadian Institute for Advanced Research (CIFAR). In 2024, Bengio was named one of TIME magazine's 100 most influential people in the world.

Yoshua Bengio. Photo: Yoshua Bengio's personal website

Ensuring AI safety requires addressing two major challenges

NBD: How do you view OpenAI's and Anthropic's collaboration with the U.S. AI Safety Institute? Does this reflect a broader trend toward increased industry responsibility in AI governance?

Yoshua Bengio: These companies and others had already committed to privately share their latest models with the UK government's AI Safety Institute, so it's only natural that they would do the same for the US government, given that they are US companies.

NBD: In your opinion, what are the most critical challenges that need to be addressed to ensure AI systems are developed and deployed safely, particularly in light of these industry collaborations?

Yoshua Bengio: At a high level, we need to address two challenges:

First, technical solutions to the AI safety challenge that will continue to work when we reach AGI (artificial general intelligence, i.e., human-level) or ASI (artificial super-intelligence, i.e., beyond human level): evaluation, detection and mitigation of dangerous capabilities, and the design of AI that will be controllable, with guaranteed safety.

Second, a political solution to the AGI coordination challenge: how to make sure that all of the countries and all of the companies in the world building the strongest AI systems (frontier AI) follow the best AI safety practices, in order to avoid a catastrophic loss of human control, and the best security and governance protocols, to make sure that some humans will not abuse the power of AGI for themselves and at the expense of many others.

NBD: Considering the global nature of AI development, what do you think are the most effective ways to foster international cooperation in AI governance, particularly between regions with differing regulatory approaches?

Yoshua Bengio: International coordination is essential to avoid the two most dangerous scenarios for humanity in coming years or decades: (1) the concentration of power in the hands of a few who would control an ASI (for example to establish a worldwide AI-supported dictatorship), and (2) the possible loss of human control to an ASI with a self-preservation goal (which may conflict with human goals and lead to human extinction).

The vast majority of humans on this planet would not want any of these futures. In order to avoid them, it is essential that more people and governments around the world understand these risks, so thanks for writing about it. We will need international treaties, like for nuclear weapons, but because of the military and geopolitical competition between countries, these treaties won't be effective unless we also figure out sufficient verification technology, which I think is within reach in a few years if we invest in this development.

Independent academics needed in both AI safety research and auditing

NBD: How do you see the role of academic research influencing or guiding these industry efforts in AI governance? What steps can be taken to strengthen the connection between research and practical implementation?

Yoshua Bengio: Corporations have an inherent conflict of interest when it comes to AI safety. They compete fiercely with each other, and that struggle for survival demands that they cut corners on safety in order to lead commercially, at the expense of public safety. They have an incentive to hide the risks and the accidents and to reject regulation. It is thus essential that a large group of independent academics be involved in both AI safety research and the evaluation and audit of frontier AI systems.

NBD: Looking forward, what additional research or initiatives would you recommend to further advance AI safety and governance in the context of rapid technological progress?

Yoshua Bengio: At a scientific and technical level:

(1) better risk assessment methodologies, such as capability evaluation, to detect potential dangers in a frontier AI.

(2) better AI design methodologies that can provide us with some quantitative assurances of safety (which we do not have right now).

(3) hardware-enabled mechanisms for tracking, licensing and verification of the use of AI chips, to support the verification of international treaties.

At a governance and political level:

(1) greater global awareness of AI risks, and in particular of the catastrophic ones, whose mitigation should unite everyone in every country, because we are all humans and most of us want to see our children have a life and humanity continue to prosper.

(2) national regulation of AI labs developing the most powerful AI systems, to introduce (a) transparency about their AI safety plans, efforts and accident reporting (including whistleblower protection) and (b) incentives (such as liability) to follow national and international AI safety standards; a good example of this is California's proposed bill SB-1047.

(3) international treaties about AGI development to (a) enforce national regulation (each country may have a different flavour, but protecting the public, and thus humanity overall, against the catastrophic scenarios should be mandatory), (b) redistribute globally the wealth that AGI could create (also as an incentive for low- and middle-income countries to sign the treaties), (c) enable international cooperation on AI safety standards and R&D, and (d) allow a neutral global entity such as the UN to verify compliance with the treaty when it comes to the globally relevant catastrophic risks, using remote tracking and verification technology.

Editor: Gao Han