In the past year, generative AI technology has made stunning progress thanks to breakthroughs in algorithms and computing. It has greatly enhanced human creativity and efficiency, and has spawned popular generative AI platforms and products such as ChatGPT, Midjourney, and Pika, which offer endless possibilities for creative and applied purposes.

In late 2023, OpenAI co-founder and CEO Sam Altman posted on X asking users what they hoped to see from OpenAI in 2024. Among the responses, the most popular requests were AGI (artificial general intelligence), GPT-5, more powerful GPTs, better reasoning ability, and open source.

It is undeniable that generative AI has fueled imagination across industries, but it has also brought risks and regulatory challenges. What course will AI take in the new year? Where is AI regulation headed? To explore these questions, NBD interviewed Dr. Peter J Bentley, a computer scientist and Honorary Professor at University College London (UCL).

Photo courtesy of Dr. Peter J Bentley

Companies should focus on business needs

In 2023, tech giants and startups such as Google, Microsoft, Meta, and Anthropic rushed into the field of generative AI. In China, AI companies already span multiple industries, from chip manufacturing, cloud computing, and data services to natural language processing, text analytics, and computer vision, as well as end applications in smart security, transportation, finance, and the Internet of Things. The 70 AI-related industrial-chain companies listed on the Shenzhen Stock Exchange alone cover all of these areas.

Nvidia CEO Jensen Huang even declared that "the iPhone moment of AI has arrived". Riding the AI wave, Nvidia became the world's first chip company with a market capitalization exceeding one trillion US dollars.

From ChatGPT's text interaction to Midjourney's image generation to Pika's video creation, generative AI is demonstrating its power and opening up new possibilities. Responding to a post from an X user, Elon Musk even suggested that movies made entirely by AI could appear in 2024.

NBD: How do you view the development status and prospects of generative artificial intelligence (GenAI)? What industries and fields do you think it will have a significant impact on in 2024?

Peter J Bentley: The area has grown hugely and continues to develop. For art, design, video, audio, movie and entertainment applications there are immediate benefits, as GenAI enables much easier special effects and processing. For marketing and sales, this technology enables easier creation of materials. In science and engineering, it can produce useful summaries that help us collate separate findings and learn more. These tools are already becoming integrated into word processors and tools for software developers, and they will become even better and more useful.

NBD: What aspects do you think technology companies should focus more on when it comes to AI products in 2024?

Peter J Bentley: Sometimes when a shiny new hammer is invented, everything looks like a nail, and people try to use their hammer on everything. That's not always a good idea. It's best to think about the problems facing your company first, and then consider which solution would really be best. AI may help with many problems. But sometimes it might cause problems itself: it may not work well, be expensive, do something unnecessary for the business, or cause legal issues. This technology is evolving so quickly that right now we do not know how best to apply it effectively. Rather than trying to sound "modern" with the latest AI, companies would be wiser to focus on their business needs and only use AI where it is certain that a competitive advantage will be gained. Right now, despite the hype, there are only a very small number of applications where GenAI is better than our existing methods at solving specific problems.

Stronger regulation will not limit technology development

Artificial intelligence (AI) is rapidly transforming our world, with the potential to revolutionize many industries and improve our lives in countless ways. However, AI also poses a number of risks, including data breaches, copyright infringement, and the potential for job displacement.

In 2023, a group of 11 American authors, including Taylor Branch and Stacy Schiff, sued OpenAI and Microsoft for allegedly using their works to train the large language models behind ChatGPT. The lawsuit is one of many recent cases that have raised concerns about AI being used to violate individuals' privacy and intellectual property rights.

As the negative consequences of AI become more apparent, calls for regulation of the technology are growing louder. In November 2023, the first global AI Safety Summit was held at Bletchley Park in the United Kingdom. Representatives from 28 countries and the European Union attended, agreeing that frontier AI could pose serious, even catastrophic, risks to humanity. The summit concluded with the signing of the Bletchley Declaration, the first global statement calling for the regulation of AI development.

NBD: What do you think will be the major technological breakthroughs and innovations in the field of artificial intelligence in 2024?

Peter J Bentley: This is increasingly hard to predict, as the rate of progress is so high! My expectation is that foundation models (LLMs) will become more multimodal: they will be trained on more and more types of data. This will give them more capabilities that combine text, images, video, audio, and perhaps more specialized data such as financial, business, or movement data (which might help robots). There are also signs that these models may be able to help us make new discoveries in science.

NBD: How do you view the ethical, security and social responsibility issues of artificial intelligence? Do you think the regulation of AI will continue to strengthen in 2024? Does the strengthening of regulation mean that technological development will be restricted?

Peter J Bentley: There is a problem with many AIs around the world: they have been trained on a lot of data taken from the Internet. This data may contain biased or incorrect information. It may also belong to writers, designers or artists and has been used without permission. Regulation is a good idea if it helps us make AIs without those problems. An AI that uses better data will produce better results, and the owners will not be sued by writers, designers and others because their data was used! There is also a problem with the misuse of AI. Malicious individuals are manipulating images, video and audio to make deepfakes and fool the population. We need to be able to stop this; otherwise, harmful misinformation will spread, and people will believe many silly things. There is evidence that many democracies around the world are already suffering from this, as politicians use AIs to manipulate views and encourage people to vote for them.

Editor: Alexander