Original image by Teguhjati Pras.
When we discuss the Chinese internet, it often conjures up images of creepy surveillance and censorship. And rightfully so. At the same time, with the rapid pace of digital innovation in China, the country has started taking big strides in internet legislation that protects both the public order and its citizens. In recent years, these new laws have come remarkably close to what we have seen appear in the west. A good example is China’s Personal Information Protection Law, which was largely modelled after Europe’s GDPR privacy legislation.
The protection of vulnerable groups, and of citizens and consumers in general, has become more prominent in China’s new legislation. China has even arrived at a point where it is implementing regulations in areas western societies are still struggling with, like the laws for algorithms that became effective in March. Another example is the proposal for regulation of “deep synthesis technology” that was published for public feedback in January. You might be more familiar with this topic under the name ‘deepfakes’.
Deepfakes are images, sounds or video footage that are not real but appear very realistic. As artificial intelligence (AI) technologies improve, deepfakes become increasingly lifelike. At times they are clearly made for entertainment purposes and are obviously fake to all but the most gullible, like Chinese AI company Iflytek’s video of Donald Trump speaking Mandarin. But sometimes they are created to intentionally mislead the public.
The question of what is real and what is fake becomes increasingly difficult to answer. The impact of fake news on societies has been clear during elections and in the rampant conspiracy theories surrounding the pandemic. As deepfakes become more realistic, not only are they starting to influence public opinion, but they are also eroding trust in truthful news. Meanwhile, speedy publication of news becomes risky, since it might lack proper verification of trustworthiness.
Deepfake regulation in the west
Experts (link in Dutch) warn that within five years, 90% of all online content will be manipulated, and the technology that can detect deepfakes has its limitations. Western societies are struggling with the question of how deepfakes should be regulated. A report, Deepfakes: the legal challenges of a synthetic society, by the Tilburg Institute for Law, Technology, and Society (TILT) mentions various options, including limiting the use of deepfake technology for consumers (full Dutch report, Dutch summary, English summary).
In January 2022, the Digital Services Act (DSA), which will take effect in 2023, was modified by the European Parliament. The changes include some regulations about deepfakes. Large platforms are required to clearly label deepfakes where they become aware of them. There seems to be a lack of more far-reaching regulation out of fear that it might cause western AI development to fall behind. In the meantime, the AI Act (AIA) that was proposed in April 2021 has received a lot of criticism for being an incoherent patchwork that is ‘lukewarm, short-sighted and deliberately vague’. Proper regulation of deepfakes is mentioned as one of the shortcomings of the AIA.
In the U.S., three states (California, Virginia and New York) have implemented deepfake regulations that only apply to pornographic deepfakes, while Texas and Maryland have implemented legislation limited to political deepfakes and deepfakes involving child pornography, respectively.
A July 2021 report requested by the Panel for the Future of Science and Technology (STOA), Tackling deepfakes in European policy (full report, summary), offers various policy options to the European Parliament.
- Clarify which AI practices should be prohibited under the AIA
- Regulate deepfake technology as high-risk under the AIA
- Limit spread of deepfake detection technology (to prevent circumvention by creators)
- Develop AI systems that prevent, slow down, or complicate deepfake attacks
- Ban certain applications
Requirements for creators
- Legal obligations for creators to label content as deepfakes, clarify guidelines and limit exceptions (the AIA does not obligate deepfake technology providers to label deepfakes, unlike a new proposal in China)
- Lift some degree of anonymity for using online platforms, e.g., when uploading certain content (remarkably, China’s real name registration requirement for online platforms is mentioned as an example here)
Requirements for platforms
- Obligate platforms and other intermediaries to detect deepfakes and user authenticity
- Label or remove deepfakes on the platform when detected or notified by a victim or trusted flagger
- Independent oversight & limiting platforms’ unilateral decision-making authority regarding legality and harmfulness of content
- Increase transparency by extending DSA’s reporting obligations for deepfakes
- Obligate platforms to slow the circulation of deepfakes
Knowledge & awareness
- Invest in education and raise awareness amongst IT professionals
- Invest in media literacy of citizens, organisations and institutions and technological citizenship
- Invest in a pluralistic media landscape and high-quality journalism
- Protect organisations against deepfake fraud (support risk assessments and prepare staff)
- Establish authentication systems for content recipients (e.g., raw video data, digital watermarks or information to support traceability)
- Diplomatic actions and international agreements regarding use of deepfakes by foreign states, intelligence agencies and other actors
- Invest in knowledge and technology transfer to developing countries to help improve these countries’ resilience
- Institutionalise (judicial & psychological) support for victims of deepfakes
Governance & law
- Strengthen the capacity of data protection authorities (DPAs) to respond to the use of personal data for deepfakes
- Provide guidelines on GDPR in the context of deepfakes
- Extend the GDPR’s list of special categories of personal data with voice and facial data
- Develop a unified approach for the proper use of personality rights
- Protect personal data of deceased persons
- Address authentication and verification procedures for court evidence (e.g., electronic seals, time stamps and electronic signatures)
- Systematise and institutionalise the collection of information with regard to deepfakes
- Identify weaknesses and share best practices
Note: these are all policy suggestions, not implemented regulations.
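One of the options above, establishing authentication systems for content recipients (digital watermarks or traceability information), can be sketched with standard cryptographic primitives. The following is a minimal, hypothetical illustration in Python, not any system the report prescribes: a publisher signs a hash of the raw media bytes with HMAC, so a recipient holding the shared key can check that the content was not manipulated after publication. The key and function names are assumptions for illustration only.

```python
import hashlib
import hmac

# Hypothetical shared secret between publisher and verifier (illustration only).
SECRET_KEY = b"publisher-demo-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the raw media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media still matches its tag, i.e. it was not altered."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, tag)

video = b"\x00\x01raw video frames..."
tag = sign_media(video)
print(verify_media(video, tag))            # True: untouched original
print(verify_media(video + b"fake", tag))  # False: content was altered
```

Real-world proposals along these lines (such as content provenance standards) embed signatures in the media file itself rather than shipping them separately, but the verification principle is the same.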
Many of these suggestions lead to heated debate about the effect on values like privacy, anonymity and freedom of speech. As we will see, many of the suggestions have already been implemented in China or are being proposed in new legislation. Without a doubt this will provide fodder for parties on both sides of the debate.
Deepfake regulation in China
While the European Union is still gathering advice from think tanks, China has been taking steps to regulate AI and deepfakes. Interestingly, some of China’s regulations, dating back to 2020, can also be found in the list of suggestions from the STOA report.
The aforementioned TILT report includes an extensive (pages 221-237) English description by Bo Zhao of the regulatory framework regarding deepfakes in China at the time of publication (November 2021). On January 1, 2020, a law, the Network Audio-video Information Services Regulation (link in Chinese), came into effect that called on platforms that produce and/or distribute audio-visual material to clearly mark audio or video created using deepfakes, deep learning, virtual reality or other new technologies. It also banned the use of deepfake and virtual reality (VR) technologies in creating, publishing or spreading fake news, and called on platforms to remove such media. Platforms also had to set up easy-to-use complaint systems. The Cyberspace Administration of China (CAC), China’s main internet watchdog, was made responsible for enforcing the law. In March 2021, China’s tech companies were asked to conduct a security review regarding deepfakes and compliance with 2017’s Cybersecurity Law.
More recently, on January 28th, 2022, China took another step by publishing a draft Provisions on the Administration of Deep Synthesis Internet Information Services. The proposal builds on the aforementioned 2020 regulations by adding regulations for companies that provide deepfake technology (more on this later).
Provisions on the Administration of Deep Synthesis Internet Information Services
Let’s have a look at these new regulations, using a summary of the (unofficial) translation by ChinaLawTranslate. A few notes before you continue:
- The aim of this summary is to give a more detailed insight than most news articles, while at the same time providing a more concise and grouped list than the actual original text.
- Personal remarks are placed between [ ] brackets.
- In case of doubt please consult the full translation or original Mandarin text.
- The provisions were released for public feedback until February 28th, 2022, a normal process that can lead to major or minor changes, while the broad strokes often remain largely intact. In some cases, a definitive version is never made. In other words, theoretically all options are still on the table for this proposal. Final versions are normally implemented a few months later, as we have seen with the legislation for algorithms, which was first proposed in August 2021 and became effective March 1st, 2022.
The Chinese provisions take the 2017 Cybersecurity Law, the 2021 Data Security Law and the 2021 Personal Information Protection Law and several other laws as the basis for new regulation on ‘deep synthesis’ (basically deepfakes). “Preserving national security and the societal public interest, and protecting the lawful rights and interests of citizens, legal persons, and other organisations” are mentioned as the goals for the regulations.
Deep synthesis technology: the use of technologies using generative sequencing algorithms for making or editing text, images, audio, video, virtual scenes, or other information by deep learning and virtual reality, and including but not limited to:
- text content, such as chapter generation, text style conversion, and question-and-answer dialogues
- voice content, such as text-to-speech, voice conversion, and voice attribute editing
- non-voice audio content, such as music generation and scene sound editing
- biometric features such as faces in images and video content: face generation, face swapping, personal attribute editing, face manipulation, or gesture manipulation
- non-biometric attributes in images and video content, such as image enhancement and image restoration
- virtual scenes such as 3D reconstruction
Face manipulation: manipulation of persons’ facial expressions in images and videos.
Gesture manipulation: manipulation of persons’ body movements in images or videos.
3D reconstruction: generating or editing 3D images of scenes.
Deep synthesis service providers: organisations that provide deep synthesis services and/or technical support for deep synthesis services.
[Note: these regulations apply to companies that provide deepfake tools or services, like software, AI and apps. One might think that this frees platforms on which deepfake content is shared from any responsibility. However, the responsibilities of platforms have already been regulated in the aforementioned 2020 regulations for platforms and many other laws. Whereas the STOA concluded that the AI Act could not put certain obligations on deepfake technology providers, this new Chinese proposal does exactly that.]
Deep synthesis service users: organisations and individuals using deep synthesis services to make, reproduce, publish, or transmit information.
Deep synthesis services must not be used to:
- engage in activities that are prohibited by laws and regulations, such as those endangering national security, undermining social stability, disrupting social order, or violating the lawful rights and interests of others
- create, reproduce, publish, or transmit information that contains content prohibited by laws and regulations, such as incitements to subvert state sovereignty, endangering national security and social stability, pornography and fake information [This basically means the existing censorship rules also extend to the use of deepfakes].
- infringe the lawful rights and interests of others such as their reputation, image, privacy, or intellectual property rights.
[Note: the above three bullets apply to usage of deepfake technology and therefore more to the users of the technology and not necessarily the creators/providers of the technology. In other words, it applies to the people that create a face-swapped picture, not the company offering the app that makes the face swap. The list of requirements for the latter category follows.]
Deep synthesis service providers shall:
- comply with laws and regulations
- respect social mores and ethics
- adhere to the correct political direction, public opinion orientation, and values trends, to promote progress and improvement in deep synthesis services
- implement entity responsibility for information security
- establish management systems such as for algorithm review mechanisms, user registration, information content management, data security and personal information protection, protection of minors, and education and training of workers
- have safe and controllable safeguard measures corresponding to the development of new technologies
- draft and disclose management rules and platform covenants
- improve service agreements
- post the security obligations of users of deep synthesis services in a conspicuous manner
- perform corresponding management responsibilities in accordance with law and agreements
- conduct real-name identity verification for users of deep synthesis services (and block publishing where no real-name verification is available)
[Users of the deepfake software/app should register with their real identities. In China this often requires uploading a picture of yourself holding up an ID card. This requirement has already existed for several years for internet platforms like social media and for users selling items on e-commerce marketplaces.]
- strengthen content management for deep synthesis information, employing technical or manual methods to conduct reviews of the data input by users of the deep synthesis services and the synthesis results
[Providers of deepfake tools/services should screen the content created by users. For internet companies this has already required thousands of content monitors that check user-generated content like microblog posts and videos. It can put the profitability of a company under serious pressure. However, failing to detect and remove sensitive content can result in serious punishments, including complete shutdown of an app or platform]
- establish a database of characteristics used to identify illegal and negative deep synthesis information content, and improve the standards, rules, and procedures for making entries in the database
- employ measures to address illegal and negative information
- employ measures to lawfully address the related users of the deep synthesis services, such as warnings, limiting functions, suspending services, and closing accounts
- strengthen the management of deep synthesis technologies, periodically reviewing, assessing, and testing algorithmic mechanisms
- conduct a security assessment and prevent information security risks where models, templates, or other tools are provided that can edit biometric information such as faces and voices or non-biometric information such as special items and scenes that might involve national security or the societal public interest
- strengthen the management of training data (annotated or benchmark datasets that are used to train machine learning models), ensure that data processing is legal and proper and employ necessary measures to ensure data security
- comply with provisions on the protection of personal information and must not illegally handle personal information where training data includes data involving personal information
- prompt the users of the deep synthesis service to inform and obtain the independent consent of the entity whose personal information is being edited, where deep synthesis service providers provide significant functions for editing biometric information such as faces and voices
[The provider of the tool/service should inform users that they need the approval of the person whose face/voice is being manipulated, but does not hold responsibility if the user does not comply with this rule. The Personality Rights in the Civil Code that went into effect on January 1st, 2021 already stated that a person’s face cannot be replaced without their consent. In 2020 a face-swapping app, Reface, was banned in China after it was used outside China to put Xi Jinping’s face on other people’s bodies, most likely without Xi’s consent.]
- use effective technical measures to add marks that do not affect users’ usage to deep synthesis information content produced using their services
- retain log information so that published and transmitted deep synthesis information content can be identified and traced
[Deepfakes will need to have a unique identifier that enables identifying the creator of the content, who has registered with real-name registration].
- identify the deep synthesis information content in a conspicuous way to effectively alert the public about the synthetic nature of the information content in case of the following deep synthesis services
- services such as smart dialogue or smart writing, etc., which simulate natural persons to generate or edit texts, it is to be prominently labelled in the area where the source of the text information content is explained.
- speech generation services such as voice synthesis and imitations or editing services that significantly change personal identification characteristics, it is to be prominently labelled by means such as vocal explanation in a reasonable part of the audio information content.
- services that generate images or video of virtual persons such as face generation, face swapping, face manipulation, and gesture manipulation, or editing services that significantly change personal identification characteristics, it is to be prominently labelled in a conspicuous location in the information content of images and videos.
- services with immersive scenes (highly realistic virtual scenes that are generated or edited by deep synthesis technology and can be experienced or interacted with by participants) are provided, it is to be prominently labelled in a conspicuous location in the virtual scene information content.
- services that have functions for generating or editing information content are provided, it is to be prominently labelled in a conspicuous location in texts, images, sounds, video, virtual scenes, and so forth.
- In case of any other deep synthesis services, they shall provide users of the deep synthesis services with functions to prominently label the deep synthesis information content, and alert them that they may prominently label the deep synthesis information content.
[Basically, all deepfake content needs to be clearly identified as deepfakes in the same medium as the concerned content. Simply putting a text under an embedded video or audio file is insufficient.]
- Where deep synthesis service providers discover that the deep synthesis information content was not prominently labelled, they shall immediately stop the transmission of that information and are to label it as provided before resuming transmission.
- establish mechanisms for dispelling rumours
- employ measures to dispel the rumours where it is discovered that deep synthesis service users used the deep synthesis technology to make, reproduce, publish, or transmit false information, and file the relevant information with departments such as those for internet information.
[In other words, violations should be reported to the relevant authorities.]
- set up convenient and effective portals for user appeals, and public complaints and reports, and shall publish the process for handling them and the time limits for responses, promptly accepting and handling them, and giving feedback on the outcome.
- follow the relevant provisions of the Provisions on the Management of Algorithmic Recommendations in Internet Information Services to complete filing procedures within 10 working days of beginning the provision of services.
- indicate their filing number and provide links to publicly displayed information on the websites, applications, and so forth that they provide.
[The two bullets above basically require providers to get a licence, the number of which needs to be clearly communicated, comparable to the licence number that is visible in footers of Chinese websites].
- follow the relevant state provisions to carry out security assessments when putting new products, usages, or functions online that have public opinion properties or capacity for social mobilisation
- cooperate with internet information departments carrying out oversight inspections in accordance with law and provide them with necessary technical and data support and assistance.
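Two of the provider obligations above, conspicuously labelling synthetic content and retaining logs so content can be traced back to a real-name-verified user, can be sketched together. This is a toy Python illustration under assumed names (the log structure, label text and IDs are my own inventions, not anything the draft provisions specify); a real service would watermark the media itself and persist logs durably.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical provider-side provenance log (illustration only).
provenance_log = {}

def publish_synthetic(content: bytes, user_id: str) -> dict:
    """Label synthetic content and record which verified user produced it."""
    # A short content-derived ID that lets published items be traced in the log.
    content_id = hashlib.sha256(content).hexdigest()[:16]
    provenance_log[content_id] = {
        "user_id": user_id,  # real-name-verified account, per the draft provisions
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return {
        "content_id": content_id,
        "label": "AI-synthesised content",  # the conspicuous-label requirement
        "payload": content,
    }

def trace(content_id: str) -> dict:
    """Look up who produced a given piece of content, for oversight requests."""
    return provenance_log[content_id]

item = publish_synthetic(b"generated video bytes", user_id="user-42")
print(item["label"])
print(trace(item["content_id"])["user_id"])  # → user-42
```

The design point the provisions seem to aim at is exactly this pairing: the label informs the public, while the log ties the content back to an identifiable creator.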
App stores must:
- perform security management responsibilities for deep synthesis applications provided by deep synthesis service providers, checking the security assessment, filings, and so forth for the application.
- where relevant state provisions are violated, they shall promptly employ measures to address it such as not making it available on the market, suspending availability, or taking it off the market.
All levels (central, provincial, local, etc.) of China’s internet information department must:
- carry out oversight inspections of deep synthesis service providers’ performance of entity responsibility for deep synthesis content management
- promptly submit corrective comments and set a time for corrections for deep synthesis service providers that have problems.
Relevant industry organisations are encouraged to:
- strengthen self-discipline
- establish and complete industry standards and norms, and self-discipline systems, urging and guiding deep synthesis service providers to draft and improve service guidelines, strengthen the management of information content security, provide services in accordance with law, and accept societal oversight.
In the case of violations of these regulations where no punishments can be applied based on existing laws, central and local governments can:
- give warnings and order corrections in a set period of time and stop relevant operations until corrections have been made
- order a stop of information updates and give a concurrent fine of between 10,000 and 100,000 RMB where corrections are refused, or the circumstances are serious
As elsewhere, deepfakes have been problematic in China. They have been used in financial scams, like in 2020 when elderly women were tricked into thinking they had online relationships with celebrities. At the time, news articles mentioned that while internet platforms tried to detect deepfakes using technology, as required by the legislation implemented earlier that year, detection still needed human labour. Since deepfakes were not as heavily regulated as pornographic or violent content, they probably did not get the same priority. The new legislation will probably help by trying to solve the problem at its very root: the tool or app where the deepfakes are created, before they are even shared.
Many of China’s 2020 regulations and the current proposal go further than many of the policy options that the Panel for the Future of Science and Technology (STOA) proposed to the European Parliament. Even if Europe were to implement some or all of STOA’s suggestions, they would only come into effect once the DSA and AIA are fully implemented. By that time, China will already be years ahead of us.
While we might have a natural aversion to anything that involves China regulating its internet, it’s certainly not all creepy surveillance and censorship. With its new laws China is overtaking the west in handling thorny online problems and there is a lot we can learn from their ideas and best practices. As with the regulations on algorithms it’s interesting to see how this new regulation on deepfakes plays out. After all, there’s a good reason why European think tanks are closely watching developments in China and using some of the learnings when advising the European Parliament.