
Photo: Madrona Venture Group

A year ago, the debut of ChatGPT ignited a boom in generative AI. Throughout 2023, industry giants such as OpenAI, Google, Meta, and Baidu updated and iterated their large models and AI products at a pace that left even industry insiders struggling to keep up.

This flurry of activity earned 2023 the title of "the first year of generative AI." Looking ahead to 2024, what new opportunities and challenges will the AI industry face? National Business Daily interviewed Dr. Oren Etzioni, founding CEO and current board member and advisor of the Allen Institute for Artificial Intelligence (AI2) in the United States.

NBD: You have been engaged in AI research for a long time and led the Allen Institute for Artificial Intelligence for eight years. How do you assess AI's rapid development in 2023?

Oren Etzioni: It's fair to say that after the release of ChatGPT, large language and vision models dominated all of 2023. I am very excited about AI's positive potential; I believe it can bring enormous benefits to humanity.

For example, autonomous driving is a technology we have been anticipating for many years, and in 2023 Waymo's autonomous vehicles completed more than 700,000 trips. What matters here is not that it is AI, but that autonomous driving has the potential to reduce car accidents.

NBD: Entering 2024, what changes do you think will occur in the field of AI?

Oren Etzioni: In 2024, we will see more multimodal models. We will also move beyond the Transformer architecture that underlies large models such as ChatGPT, adopting more efficient training and inference methods.

Beyond chatbots, large models are still in their infancy at the application level, so next year we may see more applications in areas such as chip design and drug discovery. And beyond generating text and images, we will see increasingly sophisticated agents that can act on behalf of users.

NBD: Can you elaborate on how to improve the efficiency of training and inference?

Oren Etzioni: The details are quite technical. Broadly speaking, the core mechanism behind GPT is the Transformer architecture, which is not especially efficient in training or inference. We have already seen incremental improvements here: large sums spent optimizing the training and inference pipeline, and dedicated hardware developed for it.

What computer scientists are best at is making expensive things cheap. Over the past thirty years we witnessed Moore's Law in the CPU industry, with the cost of computing chips falling continuously. I think the same thing will happen in AI: the cost of AI will halve, or better, every 18 months.
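The compounding effect of that halving claim is easy to underestimate. A minimal sketch, assuming (hypothetically) a clean halving every 18 months with no other factors:

```python
def relative_cost(months: float, halving_period: float = 18.0) -> float:
    """Fraction of today's cost remaining after `months` months,
    assuming cost halves every `halving_period` months."""
    return 0.5 ** (months / halving_period)

if __name__ == "__main__":
    # After 18 months: half; after 3 years: a quarter; after 6 years: ~6%.
    for years in (1.5, 3, 6):
        print(f"After {years} years: {relative_cost(years * 12):.1%} of today's cost")
```

On these assumptions, six years of halving would cut costs by roughly 94 percent, which is the kind of trajectory the CPU comparison implies.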

NBD: You mentioned AI agents, a hot topic recently. Bill Gates has written that agents may change the way we use computers within five years. So what exactly is an agent?

Oren Etzioni: We have to remember that although large models like GPT are amazing, they cannot take any action. You can ask questions and they can give answers, or generate a picture, but in many cases what you want is for AI to do things for you. That can be a simple action, such as booking a plane ticket, or something far more complex.

However powerful GPT-class models are, they can't even cook rice, can they? They can give you the steps for cooking rice, but acting in the physical world is complicated. We have already begun letting large models act in the software world, for example by calling APIs, which is essentially taking action to obtain services over the Internet. We will see more sophisticated AI agents that go beyond manipulating text and images to act on behalf of users.
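The "model decides, agent acts" pattern he describes can be sketched in a few lines. Everything here is hypothetical for illustration: `call_model` stands in for any large-model API, and `book_flight` stands in for a real booking service.

```python
def call_model(prompt: str) -> str:
    # Placeholder: a real agent would query a large model here and
    # receive either a plain answer or a structured tool request.
    if "flight" in prompt.lower():
        return "TOOL:book_flight SEA->SFO"
    return "ANSWER:I can only chat about that."

def book_flight(route: str) -> str:
    # Placeholder for a real API call that performs the action.
    return f"Booked flight {route}"

def run_agent(user_request: str) -> str:
    """One step of the loop: the model decides, the agent acts."""
    reply = call_model(user_request)
    if reply.startswith("TOOL:book_flight"):
        route = reply.split()[-1]
        return book_flight(route)          # action in the software world
    return reply.removeprefix("ANSWER:")   # plain text, no action taken
```

The key design point is the boundary: the model only emits a request, and the surrounding agent code is what actually touches the outside world, which is also where the safety work Etzioni mentions next has to happen.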

Of course, there are many open research problems here. If a large model makes a mistake in natural language, it can say: sorry, I gave you the wrong answer; that is a "hallucination." But if it makes a mistake while taking an action, the consequences can be genuinely destructive, so we still have a lot of work to do.

NBD: OpenAI is the biggest star company in the AI world, but in November there was a shocking episode in which its CEO was ousted. What impact do you think this event will have on the AI industry as a whole?

Oren Etzioni: This incident underscores how important OpenAI currently is in the AI industry, and it also reminds us that humanity needs multiple options, such as open-source models.

If the entire industry rests on only a few companies and a few large models, there will be problems. I am very happy to see more and more open-source large models from Europe and China; having multiple choices benefits the whole world. AI2, as a non-profit organization, has already released its first batch of open datasets. Note that even Meta's open-source models have not published their training datasets.

NBD: In June of this year, President Biden met with you and several other scholars and CEOs to discuss AI regulation issues. Can you talk about the situation at the time?

Oren Etzioni: President Biden is very interested in using AI to solve big problems, such as treating cancer or helping people with visual impairments. The conversation at the time was mainly about those issues. Of course, he also takes a pragmatic view of AI's potential negative impacts. We all believe AI will cause problems in areas such as misinformation, biometric surveillance, and facial recognition.

I think the conversation struck a good balance: on the one hand, what we hope AI can achieve; on the other, what practices and rules should be adopted to prevent bad outcomes.

Editor: George Feng