China’s State Internet Information Office, the Ministry of Culture and Tourism, and the State Administration of Radio and Television have jointly issued the "Regulations for the Management of Network Audio and Video Information Services" (hereinafter the "Regulations"), which will take effect on January 1, 2020, according to the Chinese government’s official website.
The newly issued Regulations contain four direct references to "deep learning," which can be read as measures aimed at better policing and governing AI-enabled "deep fake" videos, audio, and other digital content.
Since the emergence of deep fake, or face-swapping, technology at the end of 2017, a variety of fake videos made with AI have appeared. As the technology has advanced, fake videos have become easier to produce and harder to distinguish from real ones.
The new regulation clearly states that ordinary users who publish AI-generated fake audio or video must label it prominently for viewers. Users are also barred from using AI technology to produce or spread fake news.
Service providers, in addition to the above requirements, must conduct a security assessment before putting any AI audio or video service online. They must also develop technology to identify fake audio and video, stop the dissemination of fake AI-generated content in a timely manner, and establish or improve rumor-refutation mechanisms.
This month, Twitter released its first draft policy on deep fakes and solicited public comment. Under the draft, deep fake content will be removed if it threatens someone’s physical safety or could cause other serious harm.
International companies including Microsoft and Google are also studying how to automatically identify face-swapped videos.
AI-generated fake videos have also been one of the most controversial topics in the US Congress in recent months. Earlier, the Center for Strategic and International Studies (CSIS) released a report warning that fake information was spreading rapidly through the Internet and disrupting democratic processes.
In October 2019, California Governor Gavin Newsom signed bill AB-730 into law, making it a crime to disseminate forged audio or video that gives the public a false or harmful impression of a political figure’s words and actions.
In July, the state of Virginia also enacted a ban on one abuse of deep fakes: it is now illegal to share nude videos or photos of others without their permission, whether the images are real or fake.
In June this year, researchers at universities including Stanford and Princeton published a new study showing that, given edited transcript text, what a person says in a video can be freely changed. Even after keywords are swapped, the speaker’s mouth movements match the new words with high accuracy, leaving no visible trace of tampering.