Is China showing us the way in regulating algorithms?


2021 was the year of rectification in China’s online platform sector, with companies like Alibaba, Meituan and Didi being hit by existing and new regulations. These covered anti-trust, cybersecurity and other areas in which China was playing catch-up with global developments: the Personal Information Protection Law, for example, was largely modelled on the European GDPR, while new anti-trust rules further specified, for the online sector, regulations that were barely a decade old. But China also saw the announcement of a set of rules that would be the first of their kind in the world, and that were immediately identified as the moment China got ahead of the west in regulating tech.

In August 2021 the Cyberspace Administration of China (CAC) published the Provisions on the Administration of Internet Information Service Algorithmic Recommendation. This document, which could be considered a near-finished draft of a legislative proposal, contained unprecedented restrictions on the use of algorithms by tech companies. As Rogier Creemers of Leiden University explained in an interview with China Media Project, a 2019 consumer protection law already “requires that in any case where online services were offered on an algorithmic or automated-decision basis, the consumer should have a choice to switch it off.” But this new set of regulations was written specifically for algorithms and promised to make life much more difficult for Chinese internet companies.

On January 4th 2022, the final version of the regulation was published as Provisions on the Management of Algorithmic Recommendations in Internet Information Services (link in Chinese). It doesn’t differ enormously from the earlier draft and will become effective March 1st 2022, making China the first country in the world to tightly regulate algorithms, specifically those used in recommendation processes.

Why was this necessary?

Over the last couple of years China has come to realise the disruptive nature of algorithms. They have been used by e-commerce companies to implement differential pricing: based on personal data, one consumer would be offered a higher price than another for identical products or services. They have been used to push meal delivery couriers and ride-hailing drivers to the limits of their capacity, resulting in life-threatening antics by couriers trying to keep up with the maximum delivery times assigned to them while navigating traffic. And they have been used to manipulate minors into addictive app usage.

And yes, they have also undermined the foundations of the Chinese government. Content platforms like Bytedance’s Douyin (TikTok’s sister app in China) and the same company’s news aggregation app Jinri Toutiao both use recommendation engines to present content that consumers like, based on their previous behaviour. Giving users what they like best has doubtlessly pushed out boring government propaganda. On top of that, worries about the potential of algorithms to steer public opinion and behaviour are clear from the strict new rules for algorithms that can “influence public opinion or mobilize the masses”. Algorithms have become a potential danger to what the government values most: stability and harmony in society.

What’s in the regulations?

Let’s run through the regulations, using an abridged summary of the (unofficial) translation by ChinaLawTranslate. A few notes before you continue:

  • The aim of this summary is to give more detailed insight than most news articles, while at the same time providing a more concise and better-grouped list than the original text.
  • I’m focussing on the regulation’s requirements for algorithmic recommendation service providers (hereafter ‘companies’) and their algorithmic recommendation technologies/services (hereafter ‘algorithms’), and less so on the responsibilities of the government departments described in the provisions.
  • I have grouped the regulations as much as possible into limitations for the algorithms themselves and requirements for companies applying them.
  • Personal remarks are placed between [ ] brackets.
  • In case of doubt please consult the full translation or original Mandarin text.
  • As explained more clearly in an article in The Diplomat than in the text itself, the regulations apply to three types of algorithms:
    • “Search filters” and “personalized recommendation algorithms” as used in social media feed algorithms (Weibo), content services (Tencent Music, streaming), online stores (e-commerce, app stores), and so on
    • “Dispatching and decision making” algorithms, such as those used by gig work platforms (like transport and delivery services)
    • “Generative or synthetic-type” algorithms used for content generation in gaming and virtual environments, virtual meetings, and more.
  • As with many regulations, responsibilities are assigned both nationally (the state internet information department and relevant departments of the State Council) and locally (delegated to their equivalents in administrative regions), while relevant industry organizations are encouraged to self-regulate through standards, norms, and such.

Here we go …

  • Algorithms should not
    • be used to engage in legally prohibited activities, such as:
      • endangering national security or the societal public interest
      • disrupting economic and social order [more on this later]
      • harming the lawful rights and interests of others
    • be used to transmit information that is legally prohibited
    • transmit negative information
    • violate laws and regulations
    • go against ethics and morals, such as by inducing users to become addicted or spend too much
    • use unlawful or negative information as keywords for recording user interests or as user labels used to push information
    • produce a negative impact on users or cause contention and disputes
    • generate or synthesize fake news information or transmit news information from units that are not within the scope provided by the state [companies should also have an Internet news information service permit when providing news, and other existing regulations concerning news distribution also apply]
    • register fake accounts, illegal trade accounts, manipulate user accounts, or give false likes, comments, or forwards [manipulating and faking statistics through click farms, brushing and other illegal practices are a big problem in China]
    • interfere with information presentation such as by
      • blocking information
      • making excessive recommendations
      • manipulating the order of top content lists or search results
      • controlling ‘hot searches’ or selections
      • influencing online public opinion
      • evading oversight and management
    • unreasonably restrict other internet information service providers
    • obstruct or undermine the normal operation of internet information services provided by others, or implement monopolistic or unfair competition practices [These last two points refer to internet companies using algorithms to block or limit access to each other’s services, and could also apply to China’s so-called walled gardens, in which one app/platform blocks links to another (e.g., WeChat blocking hyperlinks to Taobao stores), a practice that finally came under fire in 2021]

  • Companies must
    • be oriented towards mainstream values, optimize mechanisms for algorithms, actively transmit positive energy, and promote the uplifting use of algorithms. [Requirements to comply with ‘mainstream values’ and ‘transmit positive energy’ have been present in regulations for online content for many years. Algorithms need to fit in too.]
    • set up an entity responsible for security of algorithms and management systems, technical measures and staffing for technological ethics reviews, data security, personal information protection, identification of illegal and negative content, etc.
    • draft and disclose the rules for algorithmic recommendation services [more on this later]
    • periodically check, assess, and verify algorithm mechanisms, models, data, and outcomes
    • immediately stop transmission of unlawful information when detected, eliminate/address it, store relevant records, and inform relevant authorities
    • improve the rules for recording interests in user models and the rules for managing user labels
    • optimize the transparency and explainability [more on this later] of rules for searches, sorting, selections, pushing, and displays, to avoid producing a negative impact on users, and to prevent and reduce contention and disputes
    • inform users in a conspicuous fashion of the circumstances of the algorithm provided, and display the algorithm’s basic principles, intended purposes, main operation mechanisms, and so forth
    • actively present information that conforms to mainstream values in key sections such as the homepage and home screen, hot searches, selections, top content lists, and pop-up windows
    • provide users with options not targeting their individual characteristics or provide users with convenient options to immediately stop the use of algorithms for them [in other words, users should be able to ‘opt-out’ of algorithms, which implies the option of an algorithm-less version of the service; see the sketch after this list]
    • provide users with functions for selecting or deleting user labels used in algorithms that target their personal traits [the draft version also mentioned ‘changing’ labels, but this has been removed, probably because it would be hard to practically implement].
    • give an explanation and bear corresponding legal responsibility when causing a major impact on users’ rights and interests
    • protect minors online
      • facilitate minors obtaining beneficial and healthy information through methods appropriate and suited to minors
      • not push information to minors that might impact their physical and psychological health such as possibly leading them to imitate unsafe behaviours and conduct contrary to social mores or inducing negative habits [for instance, no short videos with crazy stunts or unhealthy challenges to show how thin you are, like we’ve seen so many times in the past decade]
      • not use algorithms to induce minors’ addiction to the internet [this is in line with 2021’s legislation limiting time minors can spend in video games and short video apps]
    • safeguard the rights and interests lawfully enjoyed by seniors
      • consider their needs in travel, seeking medical care, spending, and handling affairs
      • provide smart services suited to them [in recent years the government has already asked internet companies to make their products more suitable for the elderly]
      • carry out monitoring, recognition, and handling of information involving telecommunication network fraud, and facilitate seniors’ safe use of services
    • protect the laborers’ lawful rights and interests when providing work coordination services [for instance gig workers for meal delivery or ride-hailing receiving instructions through algorithms]
      • such as the rights to receive their salary and to rest and take vacation
      • establish and improve algorithms related to assigning orders, salary composition and payment, work times, rewards, penalties, and so forth
    • protect the consumers’ rights to fair transactions when marketing goods or providing services
      • not use algorithms to carry out unreasonable differentiation in treatment in terms of transaction prices or other transaction conditions [this refers to algorithms being used for price discrimination in which one consumer pays more than another because his data predicts he is probably willing to pay a higher price]
      • not carry out other unlawful conduct based on consumers’ preferences, transaction habits, or other traits
    • set up convenient and effective portals for user appeals and public complaints or reports, clarifying the process for handling them and time limits for giving feedback, to promptly accept and address them and give feedback on the outcomes.
    • retain network logs, cooperate with relevant authorities conducting security assessments and oversight inspections, and provide necessary technical and data support and assistance.
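
The opt-out requirement above implies that a service must be able to run in an ‘algorithm-less’ mode. As a minimal sketch of what such a toggle could look like in practice (all names and the chronological fallback are my own assumptions; the provisions prescribe no particular design):

```python
# A minimal sketch of the opt-out requirement: when a user switches
# personalization off, fall back to a plain chronological feed.
# All names here are hypothetical; the provisions prescribe no design.
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    timestamp: float             # posting time, e.g. a Unix epoch
    predicted_engagement: float  # score from the recommendation model

def build_feed(posts: list[Post], personalized: bool) -> list[Post]:
    if personalized:
        # Default mode: ranked by the recommendation algorithm's score.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
    # Opt-out mode: the "algorithm-less" version, newest first.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [Post(1, 100.0, 0.9), Post(2, 200.0, 0.1)]
print([p.id for p in build_feed(posts, personalized=False)])  # [2, 1]
print([p.id for p in build_feed(posts, personalized=True)])   # [1, 2]
```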

  • The internet information department will create a tiered management system [a database, I assume] for companies using algorithms, based on:
    • public sentiment attributes and capacity to mobilize the public
    • the content types
    • the scale of users
    • the importance of the data handled by the algorithm
    • the degree of interference in user conduct, and so forth.
  • Algorithms that have public opinion properties or capacity for social mobilization should
    • be registered within 10 working days of becoming operational. The same applies to modifications. When terminating the use of an algorithm, it should be deregistered within 20 working days. [this requirement does not seem to exist for other algorithms, which raises the question of how those will be registered in the aforementioned database]
  • After completing a filing, companies will receive a registration number within 30 working days. This number must be clearly displayed on their website or app. [This is comparable to the website registration number you will often find in the footer of a Chinese website.]

The internet information department and other relevant authorities can:

  • give warnings and circulate criticism [read: ‘naming and shaming’, very common in China and often the first step before actually punishing companies], and order that corrections be made in a set period of time
  • order a temporary suspension of information updates [also common, often the second step to force companies to make the requested corrections more promptly] and give a fine of between 10,000 and 100,000 RMB [~$1,600 to $16,000] where corrections are refused or the circumstances are serious [compared to other regulations these penalties seem relatively mild]
  • give a public security administrative sanction where violations of public security are constituted
  • pursue criminal responsibility where a crime is constituted
  • revoke the filings and give warnings and circulate criticism where companies using algorithms with public opinion properties or capacity for social mobilization use improper tactics such as concealing relevant circumstances or providing false materials in filing
  • order a suspension of information updates and give a concurrent fine of between 10,000 and 100,000 RMB where these same companies refuse corrections or where the circumstances are serious
  • give administrative punishments, such as ordering the closure of websites or cancelling related operation permits or business licenses, when these same companies fail to promptly deregister [this is normally the last resort and basically means terminating a product (like what happened with Bytedance’s Neihan Duanzi) or even a whole business. It’s remarkable that this is only specifically mentioned for failure to deregister an algorithm.]

Social order

Many of the limitations on the use of algorithms seem very reasonable; they protect laborers, minors, seniors, consumers, and users in general. They also offer users ways to determine whether apps should use algorithms for them at all. But close examination of the regulations reveals a strong emphasis on, and tougher rules for, algorithms that can disturb social order, influence public opinion, and mobilize the masses. You could translate this as people getting upset by what they see and read and taking to the streets, or even rebelling, because of information the algorithms have fed them (regardless of whether that information is factual). This is a danger to the party’s desired social stability and harmony. Instead, it wants algorithms to behave in an uplifting manner, stick to mainstream values (as dictated by the state) and party ideology, and transmit positive energy. Happy happy happy!

As such, these rules are not just there to protect the various user groups; they also help the government keep a tight grip on what information is distributed, and in what way. The regulations are therefore, at least partially, another step in controlling internet content, just as the Great Firewall, censorship instructions and various ways of stimulating self-censorship have been.

Of course, as we have seen in the west, there is a substantial danger of algorithms disrupting social order, intentionally or not. This is often triggered by misinformation and fake news, which are also addressed in these new regulations. But who gets to determine what is fake and what is real? As with the 2013 legislation (link in Dutch) that made spreading ‘rumours’ punishable by up to three years in prison, and the 2016 legislation (link in Dutch) that forbade distribution of certain news categories unless sanctioned by the state, it will be that same authority that determines what is fake news and misinformation. That is problematic, to say the least.

Explaining the inexplicable

A large part of the regulations deals with describing, explaining, and disclosing the workings of algorithms to both users and authorities. It’s questionable how feasible these required changes are to implement. I’m no software engineer, but I have worked with several on predictive modelling projects. On one of these, we had to predict a customer’s most likely next travel destinations for a tour operator, and we could choose between an old-fashioned predictive model and a machine-learning-based algorithm. In the predictive model, the customer-data variables with predictive value and their individual weights were pre-determined by statistical analysis. As such, we would always be able to explain an outcome and trace it back to specific customer data. If necessary, under regulations like these, we could explain them, although I’m not sure the average customer would understand. With the machine learning option, according to the engineers, it would be impossible to explain the model’s outcomes, even to ourselves.
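
To make that contrast concrete, here is a minimal sketch using scikit-learn; the feature names and data are invented for illustration, not taken from the actual tour-operator project. The classic model exposes one fixed, disclosable weight per variable, while the machine-learning model spreads its logic over a hundred trees with no single weight to point at.

```python
# A sketch of the explainability gap, using scikit-learn; the feature
# names and data are invented, not the actual tour-operator project.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
features = ["past_beach_trips", "avg_budget", "booked_in_winter"]

# Classic model: one fixed, disclosable weight per customer variable,
# so any prediction can be traced back to specific customer data.
model = LogisticRegression().fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")

# Machine learning alternative: the same signal, spread over 100 trees.
# There is no single weight per variable left to disclose or explain.
black_box = GradientBoostingClassifier().fit(X, y)
print(f"decision logic spread over {black_box.n_estimators} trees")
```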

This might also be the case for many algorithms used by Chinese tech companies. How easily can they be explained (and understood by the CAC) and adapted to specific customer needs? What if their mechanics are fuzzy, or even constantly self-adapt through machine learning? And even when they can be explained, would every change require disclosing the updated information to users and authorities? In other words, how easily executable are these instructions? The CAC does not offer any guidelines in this text and will probably leave it to the relevant provincial organisations to figure out.

Is China showing us the way?

On one hand the new regulations will further tighten the government’s grip over what content is allowed. But on the other hand, many governments and consumer protection organisations outside China will be secretly jealous of this unprecedented development. The use of algorithms has been highly controversial for years, with problems like exploitation of gig workers and algorithmic bias frequently being discussed in documentaries (like this one).

Algorithms have become highly influential in steering our opinions and behaviour, just the thing the Chinese government is so worried about. Social media like Twitter, Facebook and LinkedIn decide for us what we see. Their algorithms are designed to keep us on their platforms as long as possible, so they can serve us more ads and make more money. As a wise man once said: ‘If the product is free, you are the product!’ As such, it’s not difficult to understand how Facebook’s algorithm prioritized posts with angry emojis.
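
To see how that prioritization can emerge, here is a toy illustration of engagement-optimized ranking. The weights are made up for the example (they are not Facebook’s actual numbers), but the mechanism is the point: if reactions that drive re-engagement score higher, outrage floats to the top.

```python
# A toy illustration, with invented weights, of engagement-optimized
# ranking: if reactions that drive re-engagement (like "angry") count
# more heavily, outrage-inducing posts naturally float to the top.
ENGAGEMENT_WEIGHTS = {"like": 1.0, "comment": 4.0, "angry": 5.0}  # hypothetical

def engagement_score(post: dict) -> float:
    return sum(w * post.get(k, 0) for k, w in ENGAGEMENT_WEIGHTS.items())

posts = [
    {"id": "calm news",    "like": 100, "comment": 5,  "angry": 0},
    {"id": "outrage bait", "like": 20,  "comment": 30, "angry": 25},
]
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
# "outrage bait" (score 265.0) ranks above "calm news" (score 120.0)
```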

Even though I’m active on Twitter, I rarely use the Twitter app to read my newsfeed. Instead, I use a third-party dashboard that shows me my feed in chronological order, without adverts and without Twitter’s sorting. I can assure you, the difference is astonishing: less aggravating and less stressful. I have also noticed that some posts don’t even show up in the Twitter app itself, while I did decide to follow those accounts for a reason.

Meanwhile, LinkedIn seems to pollute our feeds with 60% irrelevant content (link in Dutch), while feeling more and more like another angry Facebook clone. At the same time, it’s getting harder and less predictable to get relevant content through to your own followers. And, in those pre-Covid days when I would still fly to China frequently, I would use an extensive set of instructions to circumvent algorithms and ensure I would get the best price for a ticket. I would be more than happy to have the option to shut those algorithms off.

Of course, sometimes algorithms can be helpful. As Rui Ma of TechBuzz China points out, apps like Douyin/TikTok would probably be very boring without them; it’s exactly the algorithmic recommendation engine that makes the product so good (and yes, addictive).

Other algorithms, like the collaborative filtering models that suggest which other movies you might like on IMDB or which other books people also bought on Amazon, can be very helpful in a world where there’s too much choice.
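
For intuition, here is a minimal sketch of the ‘people who bought this also bought…’ idea, using item-item collaborative filtering on a made-up ratings matrix; real systems at Amazon or IMDB scale work differently, but the core idea is the same.

```python
# A minimal item-item collaborative filtering sketch ("people who bought
# this also bought...") on a made-up ratings matrix; this only shows the
# core idea, not how Amazon or IMDB actually implement it.
import numpy as np

# Rows = users, columns = items (say, books); 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Cosine similarity between item columns: items liked by the same
# users end up similar to each other.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0)  # an item shouldn't recommend itself

# Score unseen items for user 0 by similarity to what they already rated.
user = ratings[0]
scores = similarity @ user
scores[user > 0] = -np.inf  # only recommend items not yet rated
print("recommend item", int(np.argmax(scores)))  # item 2
```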

These algorithms have been designed to increase revenue and/or decrease costs and improve profitability. Limiting their use will definitely impact the bottom line of the Chinese tech companies concerned. As I’ve said before, user protection and profitability are, unfortunately, often opposing forces. What’s more, as with other content moderation requirements implemented in China over the last two decades, compliance costs will increase further under these new rules.

Also, disclosure of the workings of an algorithm might give away some of the ingredients to the ‘secret sauces’ of Chinese internet companies. If you need to tell users how your algorithms work, you’re also telling your competitors and might lose a competitive advantage.

Finally, the effectiveness of the new regulations will depend highly on how they are technically implemented. How many of us read the long terms and conditions of apps before scrolling down and clicking the ‘accept’ button? How many people actually descend into the jungle of privacy settings on LinkedIn and Facebook to change how these platforms use their data? If Chinese platforms can bury options to adjust algorithms just as deeply, the new rules might make little difference.

Nevertheless, these new regulations are highly interesting and worth watching in the coming years. Maybe there’s something we can learn, and maybe there will be an opportunity to pick and mix, copying parts back to our own societies while being careful to leave the less desirable aspects where they are.