By 2030, China plans to become the world’s leading country in artificial intelligence (AI). Beijing’s approach to AI development and implementation is fast-paced and pragmatic, focused on applications that solve real-world problems. Advances are being made in healthcare, such as “AI doctor” chatbots, machine learning for pharmaceutical research, and deep learning for medical image processing. Alongside this rapid development, however, China’s AI policies are deeply troubling and deserve condemnation. Even so, portraying China as a “villain” in this way may be overly simplistic and potentially costly.
The way China uses artificial intelligence raises serious concerns. AI advances that benefit citizens and society should not overshadow the fact that China’s authoritarian government abuses citizens’ data and violates their privacy. Through a massive police surveillance apparatus powered by big data and artificial intelligence, China enacts a dystopian scenario every day. The government, for instance, now requires national IDs to buy train tickets, making it easier to block human rights activists or anti-corruption journalists from travelling.
According to reports and leaked documents, the government in Xinjiang, home to China’s Uighur Muslim minority, uses AI-sifted big data to screen people entering mosques and even shopping malls. Through thousands of checkpoints that require national IDs, the government can collect data on everything from bank accounts to family planning. The New York Times confirmed Soros’s fears when it reported that the Chinese authorities are using facial recognition technology to track and target members of the persecuted Uighur minority. A recently released Human Rights Watch report, “China’s Algorithms of Oppression,” provides further evidence of Beijing’s use of new technologies to restrict the rights and liberties of Uighurs.
Beyond its use by repressive regimes, AI can also interfere with human rights in democratic and open societies. When AI systems collect personal data for micro-targeted advertising, they violate the right to privacy. AI-enabled monitoring of online content impedes freedom of speech, and access to and sharing of information are controlled in opaque, incomprehensible ways that restrict users’ freedom of expression and opinion.
Disinformation campaigns powered by AI – such as troll bots and deepfakes (altered video clips) – threaten societies’ access to accurate information, disrupt elections, and erode social cohesion.
Concerns also exist over the emergence of opaque social governance systems that lack accountability mechanisms.
In Shanghai’s smart court system, for instance, an AI-generated assessment is used to determine sentencing. Defendants, however, find it difficult to evaluate the tool’s potential biases, the quality of its data, and the soundness of its algorithm, making the results hard to contest.
China’s experience demonstrates the need for transparency and accountability when AI is used in public services. Inclusion and the protection of citizens’ digital rights must be the goals when such systems are designed and implemented. It is unhelpful to reduce China’s rapid AI development to a simplistic narrative of China as a threat or a villain. Observers outside China need to engage in the debate and take more steps to understand – and learn from – the nuances of what is really going on. As MIT researcher Jonathan Frankle warns, “this is an urgent crisis we are slowly sleepwalking our way into.”