
Photo/Shetuwang

Feb. 28 (NBD) -- The Legend of the Condor Heroes, a TV series adapted from Louis Cha's wuxia novel of the same title, has been remade several times, but a recent "remake" created with the help of artificial intelligence (AI) sparked heated discussion and public concern, and the so-called "face-changing" technology came under fire.

In a video uploaded to the video platform Bilibili by a user named "Huanlian Ge" (literally "face-changing brother"), the face of Athena Chu, who played the classic female role Huang Rong, was digitally altered and replaced with that of another actress, Yang Mi.

The video, in which the altered images looked strikingly real, went viral, and the tag "Replace the face of Athena Chu as Huang Rong with Yang Mi" was viewed over 120 million times on Weibo, the Chinese Twitter-like social media platform.

Athena Chu (left) (Photo/Weibo.com) and Yang Mi (Photo/Bilibili)

Some deemed the face-swapped video disrespectful to the actresses and an infringement of their privacy and portrait rights. Others joked that the technology could be applied in filmmaking: stand-ins complete the shooting, and the "face-changing" technology then swaps their images for those of famous celebrities.

The "face-changing" technology, or Deepfake more accurately, is an AI-based human image synthesis technique. Such technique allows Deepfakers to combine the existing and source videos to make a fake video that shows a person or persons performing an action at an event that never occurred in reality.

The video's producer, Xiao, later apologized on Bilibili for the online stir, saying that, as a researcher in artificial neural networks, he hates malicious Deepfake videos targeting celebrities. Xiao said his video was intended to make people aware of the technology and raise public vigilance against Deepfake videos. He also acknowledged that such AI-powered face-changing technology should be used responsibly. The video has since been removed from the platform.

The potential hazards of Deepfake clearly go beyond infringement of privacy or portrait rights. Deepfakes can also be used to create fake IDs, fake photos, fake news, malicious hoaxes and explicit videos of celebrities. And if Deepfake videos flood the Internet, people may no longer believe what they see in a video, and trust in authenticity will collapse.

Despite positive applications of the image-cloning technology in areas such as the film and TV industry, Deepfake, a portmanteau of "deep learning" and "fake" that shares the same technological basis as facial recognition, may pose a serious threat to the latter.

Photo/Shetuwang

Imagine that criminals get hold of such technology: they could break into online systems and steal personal data, causing great trouble and even financial losses. Moreover, if AI cannot tell the difference between a person's real face and an altered image, the application of facial recognition will stagnate and AI will be a dancer in shackles.

To address the dangers that Deepfake videos and images pose to governments and businesses, the U.S. think tank Carnegie Endowment for International Peace published an article, "How Should Countries Tackle Deepfakes". According to the article, Deepfakes could incite political violence, sabotage elections, and unsettle diplomatic relations. The think tank advised governments to fund the development of media forensic techniques for detecting Deepfakes.
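As a rough illustration of what such a media forensic technique might look like in practice, the sketch below shows a tiny frame-level classifier that scores a cropped face as real or synthesized. It is a hypothetical toy model, not the Carnegie Endowment's proposal; production detectors use far larger networks, large labeled datasets, and temporal cues across frames.

```python
# Hedged sketch of a frame-level Deepfake detector: a small binary classifier that
# outputs the probability that a cropped face frame is synthesized. The architecture
# and 64x64 input size are illustrative assumptions only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                                       # logit: >0 means "fake"
)

frame = torch.rand(1, 3, 64, 64)                  # placeholder face crop
fake_probability = torch.sigmoid(detector(frame))
print(float(fake_probability))                    # untrained, so roughly 0.5
```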

To defend against the harm that Deepfakes could cause, the European Union has called for an independent European network of fact-checkers to help analyze the sources and processes of content creation. Some advocate a combination of manual review, investigation of sources and cross-checking to distinguish the fake from the genuine.

In the U.K., producers of Deepfake material can be prosecuted for harassment, but there are calls to make creating Deepfakes a specific crime. In the U.S., where charges as varied as identity theft and cyber-stalking have been pursued, a more comprehensive statute has also been discussed.


Email: gaohan@nbd.com.cn

Editor: Gao Han