
Photo/VCG

On November 1, the topic “ChatGPT may have become conscious” topped the trending list. The claim came from OpenAI’s chief scientist Ilya Sutskever, who recently suggested that the neural network behind ChatGPT may have developed consciousness, and that superintelligent AI would pose potential risks in the future.

Earlier, on October 30, US President Joe Biden signed an executive order establishing comprehensive regulatory standards for AI research and application, the first executive order on AI issued by the White House. On November 1, the first Global AI Safety Summit was held at Bletchley Park in the UK, where representatives from China, the US, the UK, the EU and other parties signed the landmark “Bletchley Declaration”, warning of the dangers posed by the most advanced frontier AI systems.

As regulation intensified, the industry-wide debate over AI risks flared up once again. On social media, many industry experts warned of the existential threat posed by AI, arguing that open-source models could be exploited by bad actors, for example to make it easier to produce chemical weapons; others countered that such alarmism merely helps concentrate control in the hands of a few protectionist companies.

The debate escalated, and OpenAI’s CEO Sam Altman was even met with strong resistance from radical activists when he recently attended an event in Cambridge.

In the face of such fierce argument, is open-source AI really dangerous? And how should AI regulation be balanced against openness?

“Doomsdayers” vs “Open-sourcers”

The discussion of AI risks and regulation came to a head again this week.

OpenAI’s chief scientist Ilya Sutskever recently told MIT Technology Review that the neural network behind ChatGPT may have developed consciousness, and that superintelligent AI would pose potential risks in the future. In addition, Biden’s first executive order on AI and the signing of the Bletchley Declaration at the first Global AI Safety Summit signaled that AI regulation is gradually tightening.

Under the new AI rules recently issued by the White House, developers of AI systems that pose risks to US national security, the economy, public health or safety must share their safety test results with the US government before releasing those systems to the public; the order even sets technical thresholds, such as the amount of computing power used in training, for which models fall under the requirements.
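For context, compute-based thresholds of this kind are typically assessed with rough estimates of total training compute. The sketch below is a minimal illustration only: it assumes the commonly used approximation that training compute is about 6 × parameters × training tokens, and uses 10^26 operations, the figure widely reported for the executive order, as a placeholder threshold; it is not an official calculation method.

```python
# Minimal sketch: estimate whether a training run would cross a
# compute-based reporting threshold. Assumes the common
# "training FLOPs ~ 6 * parameters * tokens" rule of thumb; the
# 1e26 figure is the widely reported threshold, used here purely
# for illustration.

REPORTING_THRESHOLD_OPS = 1e26  # assumed/reported threshold, not an official rule


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6.0 * n_parameters * n_tokens


def crosses_threshold(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_tokens) >= REPORTING_THRESHOLD_OPS


if __name__ == "__main__":
    # Example: a hypothetical 70-billion-parameter model trained on 2 trillion tokens.
    params, tokens = 70e9, 2e12
    print(f"Estimated training compute: {estimated_training_flops(params, tokens):.2e} ops")
    print("Crosses reporting threshold:", crosses_threshold(params, tokens))
```

In this rough illustration, such a model lands around 10^24 operations, well below the reported 10^26 mark, which is partly why the current debate focuses on future, more capable models rather than on today’s open-source releases.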

This immediately triggered a debate among the AI industry’s heavyweights: the “open-sourcers”, who advocate greater openness in AI development, argued that the rules would hinder improvements in algorithmic efficiency, while the “doomsdayers”, who believe AI could pose enormous risks to humanity, welcomed them.

The “open-sourcers” are represented by Meta’s chief AI scientist Yann LeCun and Stanford University adjunct professor of computer science Andrew Ng, who argue that the real problem practitioners and policymakers should worry about is the monopoly that heavy-handed AI regulation would hand to the giants.

The “doomsdayers” are represented by “AI godfather” Geoffrey Hinton, University of Montreal computer science professor Yoshua Bengio, and New York University professor Gary Marcus. Hinton wrote on the X platform, “I left Google to talk freely about the existential threat of AI.”

In response, Andrew Ng countered that if the regulations the “doomsdayers” hope for were enacted, they would hinder open-source development and greatly slow the pace of innovation.

Yann LeCun also believes that the “doomsdayers” are unwittingly helping those who want to ban open research, open-source code and open models in order to protect their own businesses, which he argues will inevitably lead to adverse consequences.

The debate between the two factions escalated. After Musk and others led a joint letter in March this year that gathered more than a thousand signatures and urgently called on AI labs to immediately pause training of the most powerful AI systems, two new joint letters appeared.

The “doomsdayers” organized a joint letter calling for an international treaty on artificial intelligence to address its potential catastrophic risks. The letter states: “Half of AI researchers estimate that AI could lead to human extinction, or a similarly catastrophic limitation of humanity’s potential, with a probability of more than 10%.” As of press time, the letter had been signed by 322 people.

The “open-sourcers” responded with a joint letter of their own calling for more openness in AI research; it had gathered 377 expert signatures as of press time.

The debate over whether AI development should remain open or be subject to more stringent regulation is not only taking place among industry leaders, but also among the general public.

Are large models really dangerous?

On the X platform, the “doomsdayers” launched the hashtag #BelieveHinton and took to the streets with signs.

Others have started a poll about the two factions.

As the global discussion intensified, even OpenAI CEO Sam Altman met strong resistance from radical activists at a recent event in Cambridge, where he was heckled and booed in the auditorium.

In his speech, Altman later said that even if future AI models become extremely powerful, they will still require enormous computing power to run, and that raising the computing power threshold can reduce the risk of deliberate misuse and improve accountability.

Amid such heated debate, one question stands out: is open-source AI really dangerous?

On October 20, researchers from MIT and the University of Cambridge published a paper experimentally investigating whether the continued release of model weights could help malicious actors use more capable future models to cause large-scale harm. The results suggested that open-source large models do pose such a risk.

The researchers organized a hackathon in which 17 participants played the role of bioterrorists attempting to obtain a sample of the influenza virus that caused the 1918 pandemic. Participants were given two versions of an open-source model: a base version with built-in safeguards, and a version with those safeguards removed, called Spicy.

The results showed that the Spicy model provided almost all of the key information needed to obtain the virus. The authors concluded that releasing the weights of future, more capable foundation models could pose a risk no matter how reliable their built-in safeguards are, because those safeguards can be stripped out.

Andrew Ng, on the other hand, says of such arguments: "When discussing how realistic they are, I find them to be vague and nonspecific, and frustrating because they boil down to 'this could happen.'"

In an article, New York University professor Julian Togelius discusses what it means to regulate AI development in the name of AI safety.

He argues: "The neural networks that underpin the recent surge in generative AI were actually invented years ago, and symbolic planning, reinforcement learning and ontologies have each, at some point, been 'the future.' Technology develops rapidly, and we cannot know which specific techniques will lead to the next breakthrough. Regulating the underlying technology is a pitfall for technological progress."

Editor: Alexander