China’s Incoming AI Ethics Leadership Push: Leveraging the Digital Silk Road to Promote Ethics Principles Worldwide
Emmie Hine
University of Bologna, Department of Legal Studies, Bologna, Italy
KU Leuven, Center for IT & IP Law, Leuven, Belgium
The AI shockwave of 2023 is resonating through governments, companies, and civil society and buoying new proposals for international AI governance to mitigate the potential harms of AI. Some would involve a centralized AI governance body, while others would strengthen the existing governance regime (Roberts et al. 2023), but all are premised on the necessity of international cooperation to avoid widespread harm to people and the environment and foster equitable outcomes globally. As the leading countries in AI development, the US and China would be major players in AI governance (Stanford HAI 2021). However, their continued geopolitical rivalry, combined with the strategic importance of AI to interstate competition, makes cooperation between the two a thorny issue. Technology governance inherently involves defining what uses of the technology are desirable, undesirable, and even verboten. This requires normative agreement, which is difficult when actors have different values. In this essay, I focus on AI ethics as a form of both AI governance and international competition. I argue that there is a 70% chance that China will tie AI ethics into its Digital Silk Road, releasing a highly influential global AI ethics framework targeted at developing countries in the next two years. If this comes to pass, it will deepen divides between China and the West and increase China’s influence over developing countries.
China is advocating for global cooperation in AI governance, at least in its rhetoric. In its 2022 Position Paper of the People’s Republic of China on Strengthening Ethical Governance of Artificial Intelligence (Position Paper), which provides advice on AI regulation, R&D, use, and cooperation to the international community with the goal of advancing international AI ethical governance, China calls for “the international community to reach international agreement on the issue of AI ethics on the basis of wide participation” and to create an international AI governance framework “while fully respecting the principles and practices of different countries’ AI governance” (Ministry of Foreign Affairs of the People’s Republic of China 2022). It issued a similar call in its 2023 announcement of the Global AI Governance Initiative, calling for countries to “build consensus through dialogue and cooperation, and develop open, fair, and efficient governing mechanisms” and saying that “[w]e should work together to prevent risks, and develop AI governance frameworks, norms and standards based on broad consensus, so as to make AI technologies more secure, reliable, controllable, and equitable” (Ministry of Foreign Affairs of the People’s Republic of China 2023). Though high-level, the Initiative makes a number of suggestions to achieve this, including to “encourage active involvement from multiple stakeholders to achieve broad consensus in the field of international AI governance” and to “increase the representation and voice of developing countries in global AI governance” (Ministry of Foreign Affairs of the People’s Republic of China 2023). It even explicitly endorses the UN discussions to establish an international AI governance institution. If taken at face value, these statements suggest that China wishes to pursue multilateral, global AI governance.
In contrast to China, the US’s rhetoric has been exclusionary and aimed at building a value-aligned coalition centered around “democratic values.” This coalition includes the EU, with a US-EU Trade and Technology Council (TTC) AI taxonomy saying that “trustworthy AI” should be “based on shared democratic values including the respect for the rule of law and fundamental rights” (Trade and Technology Council 2023). The US is also working with groups like the G7 and the OECD-hosted Global Partnership on AI to advance AI governance based on “shared democratic values,” which in a G7 Communiqué include “fairness, accountability, transparency, safety, protection from online harassment, hate and abuse and respect for privacy and human rights, fundamental freedoms and the protection of personal data” (“G7 Hiroshima Leaders’ Communiqué” 2023). These declarations of values have been accompanied by rhetoric explicitly painting China as a competitor to the US and EU (Hine and Floridi 2022; EURACTIV 2022), as well as measures aimed at hobbling China’s chip industry (Roberts, Hine, and Floridi 2023) and sanctions on technology companies involved in human rights abuses in Xinjiang (Fromer 2021), where Uighurs are subject to AI-enabled tracking and detention (United Nations Office of the High Commissioner 2022). International cooperation in AI can take many forms, but right now, the US is opposed to almost all of them. While human rights abuse is a reasonable red line, the US’s commitment to promoting “democratic values” implies that the US is fundamentally opposed to working with an authoritarian country even if it were not engaged in these abuses, precluding constructive engagement.
Because the US is committed to building its own values-aligned coalition, cooperation is highly unlikely. Thus, China has nothing to lose and much to gain by expanding its AI efforts and building its own coalition. There is precedent for China building influence in developing countries through the Belt and Road Initiative’s Digital Silk Road (DSR), which supports digital infrastructure and technology development. China’s purpose behind the DSR is contested, considered by some to be an attempt to increase China’s power and by others to be a counter to domestic instability (Roberts, Hine, and Floridi 2023). The DSR is often perceived as a front to foist a system of digital authoritarianism on “vassal states,” but China is instead responding to demand—including for surveillance technology—from the governments of developing countries, creating a “Beijing Effect” that influences transnational data governance (Erie and Streinz 2021; Roberts, Hine, and Floridi 2023; Kurlantzick 2020). Furthermore, it is not a centrally organized “grand strategy,” but more of a slogan binding a variety of initiatives (Cheng and Zeng 2023). Regardless of motive, the DSR is increasing China’s influence over digital infrastructure worldwide, particularly in the developing world (Erie and Streinz 2021). For instance, it is a key way China exports standards (He 2022), which brings tangible economic benefits as well as soft power.
Working to increase its influence in developing countries also helps China achieve its explicitly stated goals for global leadership in AI across development, policy, standards, and ethics, which have been enshrined in policy since the release of the New Generation AI Development Plan (AIDP) in 2017. The AIDP set the goal of having the “overall technology and applications of AI in step with globally advanced levels” by 2020; achieving “major breakthroughs” in basic AI theories to achieve “world-leading levels” in some technologies and applications by 2025; and achieving “world-leading” levels of AI theory, technology, and applications, becoming the “world’s primary AI innovation center,” by 2030 (State Council 2017; Webster et al. 2017). China believes itself to have achieved its 2020 goal in 2018 (Allen 2019) and has been funding projects, passing laws, and pushing standards to facilitate its 2025 and 2030 goals (Hine and Floridi 2022; Roberts et al. 2023). Accompanying concrete action to achieve its goals are measures to broadcast its success and portray China as an international leader. The Global AI Governance Initiative, though nascent and containing few specifics, is an example of this for development and governance. Developing an “ethical norms and policy system” is also included in China’s long-term goals (State Council 2017), and China has passed laws, including on recommendation algorithms, synthetic content, and generative AI, to support this. Although the AIDP does not explicitly mention exporting this ethics and policy system, the Position Paper portrays China as a leader in AI ethics, making recommendations “based on its own policies and practices and with reference to useful international experience” (Ministry of Foreign Affairs of the People’s Republic of China 2022) and displaying an interest in global moves on AI ethics. Development and standards tangibly benefit China’s economy. Ethics leadership contributes more to soft power than to economic or military power, but is no less important for international legitimacy, the perception of which is crucial for “shaping international politics” (Albert 2018). China has spent billions developing its soft power (Albert 2018); investing in AI ethics leadership would contribute to this.
Success in development and standards also contributes to soft power, but of its areas of global leadership interest, China’s ethics leadership is the most nascent and thus the most likely to be the subject of a new push. However, the US and EU are not likely to cooperate with China on AI ethics because of the aforementioned ideological and political conflicts. The one recent exception has been AI safety, with China, the US, the UK, EU states, and 26 other countries signing the Bletchley Declaration after the November 2023 AI Safety Summit hosted by the UK. This can be seen as a bare minimum form of cooperation and shows the differences between “thin” and “thick” values (Walzer 1994). China has signed on to several sets of international AI ethics principles, including the 2019 G20 AI Principles (G20 Trade Ministers and Digital Economy Ministers 2019) and the derivative 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence (“Recommendation on the Ethics of Artificial Intelligence” 2020; Roberts, Hine, and Floridi 2023), which are extremely high-level. It is easy to agree on generic “thin” concepts like “fairness,” “privacy,” and “safety” without agreeing on what they mean in practice—a difficult task considering how concepts can be translated differently in different cultural contexts and imbued with “thick” contextual meaning. The US would not want to be seen as endorsing, for example, AI that permits widespread government surveillance, which violates Western notions of privacy, even though more permissive use of data by the government accords with the Chinese conception of privacy (Pernot-Leplay 2020). However, it is easier to agree that AI should not pose an existential threat to humanity or that foundation models should not interfere with a country’s political stability, a concern both the US and China share.^1 Values often go from “thin” to “thick” when implementation is pursued, a fundamental issue for AI ethics principles. In its statements, China appears to be advocating for a stance of ethical pluralism (Hine 2023), which holds that there can be multiple ways to interpret different values (Ess 2006), while the US takes an absolutist stance that only “democratic values” are acceptable in AI, making cooperation on anything but the thinnest of statements difficult and limiting China’s options for global leadership.
Broadly, there are two feasible tactics for becoming a global leader in AI ethics: coming up with globally palatable principles or building a coalition around one’s own. The first is likely to have only limited success because of the difficulty of coming to a consensus under the current political dynamics. Thus, China’s best prospect for becoming a leader in AI ethics is to appeal to countries outside the US-aligned bloc, such as those it is already building influence with through the DSR. Officially, this includes 17 countries across Africa, Asia, Europe, and South America that have signed memoranda of understanding with China (Sim 2023; Kurlantzick 2020), but unofficially it may include up to a third of the Belt and Road Initiative’s 138 participating countries (Hao 2019). The DSR is already used as a primary method for “digital expansionism” (Roberts, Hine, and Floridi 2023), and while it may be more slogan than strategy (Cheng and Zeng 2023), this gives it the flexibility to be used for new initiatives such as an AI ethics push. While it would make sense for an international AI ethics movement to be put under the banner of the Global AI Governance Initiative, the Initiative’s inchoateness means the DSR would provide a more effective framework for promoting such principles, and both initiatives are flexible enough to be intertwined; the Global AI Governance Initiative may be used for branding, while the DSR would be used for execution. DSR countries have already proven willing to accept funding, technology, and standards from China, so it is highly plausible that ethics “standards” could follow.
Since China does not necessarily share a values system with those countries and “exporting” a values system for blanket adoption, even if that were a goal of the DSR, is not an easy task, China’s best option for achieving leadership in global AI ethics is swaying developing countries to endorse its own AI ethics principles, coupled with a stance of ethical pluralism. This would give China the prestige associated with authoring an influential set of principles without the Sisyphean task of swaying the US to its side or directly overseeing the implementation of its principles. It also offers participating countries the opportunity to deepen their engagement with China and, if the AI ethics principles are bundled with development support as an incentive, promote domestic AI development. Optimistically, it would provide China and other adoptees with guidance for how to use AI ethically, but cynically, it would offer moral cover for using—and abusing—AI as they see fit with the justification that others must be respectful of their cultural contexts. Indeed, in the Position Paper, China called for “fully respecting the principles and practices of different countries’ AI governance” (Ministry of Foreign Affairs of the People’s Republic of China 2022); China’s calls for ethical pluralism could be read as demanding tolerance for abuses of AI. Getting more countries to endorse this stance would only increase pressure on the West to back off its criticism of China’s abuses of AI.
What exactly these principles would look like is unclear, but they will likely come packaged as a new document. China’s most recent AI ethics principles are the “Ethical Norms for New Generation Artificial Intelligence,” published on September 26, 2021. These supplanted the Beijing AI Principles and the New Generation AI Principles, both published in 2019. The Beijing AI Principles and New Generation AI Principles were notable for having a distinctively Confucian character, referencing terms like “harmony” [和谐] that are core components of Confucianism (Hine 2023; Hine and Floridi 2022). The Ethical Norms, however, are much more similar to the OECD AI Principles, replacing references to “harmony and friendship” [和谐友好] with “enhance human well-being” [增进人类福祉] (Hine 2023). Despite their theoretically broad palatability, it is unlikely that these would be offered up wholesale to the international community. For one, they are targeted specifically at China’s “New Generation” AI initiative, creating the impression that they are a domestic document. For China to secure the endorsement of other countries, the document will need to have relevance to them, meaning that a new document reflecting the shift in focus towards international leadership indicated by the Position Paper is likely. Another nail in the coffin of the Ethical Norms is that they are over two years old at this point. While ethics principles theoretically should not become outdated, these were released prior to the boom in generative AI and its accompanying threats to social stability, meaning that a re-prioritization is likely, especially considering the flurry of concrete legislation that followed the release of ChatGPT (Hine 2023). The Ethical Norms could serve as a springboard for a new set of AI ethics principles, but regardless of how much inspiration the Ethical Norms provide, a new document is likely. Given the current global attention on AI policy and ethics, this is likely to happen relatively soon, with the next two years seeming like ample time to draft and release such an initiative.
The consequences if this were to occur are also murky. The US would likely see it as an effort to exert authoritarian influence, similar to how it perceives the DSR and BRI today (Geraci 2020). If the US were to perceive it as a provocation, it could encourage a further entrenchment of its commitment to “democratic values,” cementing a values-aligned bifurcation that is not yet complete. It could also incentivize the US to more actively pursue developing countries, although its proposed counter to the BRI, “Build Back Better World,” has morphed into the “Partnership for Global Infrastructure and Investment” and faces an uncertain funding future (Keith 2022). This would also likely cause relations to deteriorate; although Chinese rhetoric tends to be fairly restrained, there is no doubt that China is piqued by existing efforts to build a US-led coalition and hinder its semiconductor industry, illustrated by quotes like “We oppose drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI. We also oppose creating barriers and disrupting the global AI supply chain through technological monopolies and unilateral coercive measures” in the Global AI Governance Initiative announcement (Ministry of Foreign Affairs of the People’s Republic of China 2023). An escalation from rhetoric and criticism to explicit courting of other countries to counter China’s influence would likely cause a corresponding deterioration in relations, which are already at a low point. The implications of this are what moderate my prediction to 70%, as China may not want to provoke conflict through a global AI ethics leadership push, despite the possible benefits.
An alternative to the likely tension this would create would be a cooling of rhetoric from the US and an active, good-faith effort to engage China in AI governance discussions, as China claims to desire, but given that the US is grounding its AI policy in countering China, a total about-face seems unlikely. Such engagement would also rightly necessitate a reversal of the AI-enabled human rights abuses in Xinjiang, which is improbable given Beijing’s insistence that the region poses a national security risk, and engagement is not guaranteed to succeed regardless. However, not trying leaves the door open to further abuses and harm from AI. Historically, implementations of “constructive engagement” policies to bring about change in authoritarian regimes have not been successful (Oo 2000), but the current sanctions regime is also not bringing about change, and the potential harms of AI demand effort in governance. A middle ground of low-level engagement without endorsement may be the best way forward. Although most new international AI governance policies involve a centralized, international body, one benefit of the current fragmented AI “regime complex”—with no central institution guiding AI governance—is that it requires low-level cooperation at a variety of institutions (Roberts et al. 2023). While there will undoubtedly be value clashes, lower-profile initiatives focused on specific areas of engagement (rather than the entirety of AI) may even be a way for the US to engage with China without losing face by extending an olive branch at high-level fora. Unfortunately, given the current geopolitical context and international fixation on centralized AI governance, it seems unlikely that this will happen. Thus, I predict that there is a 70% chance that China will leverage the DSR, and potentially the Global AI Governance Initiative as branding, to release an AI governance framework aimed at developing countries, proving itself a leader in AI ethics—albeit the leader of one faction of a fragmented world.
^1 This is seen through initiatives like the US government’s agreements with Big Tech companies to voluntarily label AI-generated content (The White House 2023) and China’s laws requiring it (Zheng and Zhang 2023) in the hopes of countering AI-generated disinformation.
Works Cited
Albert, Eleanor. 2018. “China’s Big Bet on Soft Power.” Council on Foreign Relations. February 9, 2018. https://www.cfr.org/backgrounder/chinas-big-bet-soft-power.
Allen, Gregory C. 2019. “Understanding China’s AI Strategy.” Center for a New American Security. June 2, 2019. https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy.
Cheng, Jing, and Jinghan Zeng. 2023. “‘Digital Silk Road’ as a Slogan Instead of a Grand Strategy.” Journal of Contemporary China 0 (0): 1–16. https://doi.org/10.1080/10670564.2023.2222269.
Erie, M. S., and T. Streinz. 2021. “The Beijing Effect: China’s ‘Digital Silk Road’ as Transnational Data Governance.” New York University Journal of International Law and Politics 54 (1). https://ora.ox.ac.uk/objects/uuid:71efc786-4e5f-4006-8156-ea9cdfd3a433.
Ess, Charles. 2006. “Ethical Pluralism and Global Information Ethics.” Ethics and Information Technology 8 (4): 215–26. https://doi.org/10.1007/s10676-006-9113-3.
EURACTIV. n.d. “Europe’s Chips Challenge.” The Tech Brief. Accessed October 31, 2022. https://share.transistor.fm/s/c38f987d.
Fromer, Jacob. 2021. “US Sanctions China’s SenseTime, Xinjiang Officials over ‘Human Rights Abuses.’” South China Morning Post. December 11, 2021. https://www.scmp.com/news/china/article/3159297/biden-administration-sanctions-chinese-ai-company-sensetime-citing-human.
“G7 Hiroshima Leaders’ Communiqué.” 2023. The White House. May 20, 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.
G20 Trade Ministers and Digital Economy Ministers. 2019. “G20 AI Principles.” https://wp.oecd.ai/app/uploads/2021/06/G20-AI-Principles.pdf.
Geraci, Matt. 2020. “An Update on American Perspectives on the Belt and Road Initiative.” Institute for China-America Studies (blog). January 23, 2020. https://chinaus-icas.org/research/an-update-on-american-perspectives-on-the-belt-and-road-initiative/.
Hao, Chan Jia. 2019. “China’s Digital Silk Road: The Integration Of Myanmar – Analysis.” Eurasia Review (blog). April 30, 2019. https://www.eurasiareview.com/30042019-chinas-digital-silk-road-the-integration-of-myanmar-analysis/.
He, Alex. 2022. “The Digital Silk Road and China’s Influence on Standard Setting.” Centre for International Governance Innovation. April 4, 2022. https://www.cigionline.org/publications/the-digital-silk-road-and-chinas-influence-on-standard-setting/.
Hine, Emmie. 2023. “Governing Silicon Valley and Shenzhen: Assessing a New Era of Artificial Intelligence Governance in the US and China.” SSRN Scholarly Paper. Rochester, NY. https://doi.org/10.2139/ssrn.4553087.
Hine, Emmie, and Luciano Floridi. 2022. “Artificial Intelligence with American Values and Chinese Characteristics: A Comparative Analysis of American and Chinese Governmental AI Policies.” AI & SOCIETY, June. https://doi.org/10.1007/s00146-022-01499-8.
Keith, Tamara. 2022. “Biden Said the G-7 Would Counter Chinese Influence. This Year, He’ll Try Again.” NPR, June 24, 2022, sec. Politics. https://www.npr.org/2022/06/24/1106979380/g7-summit-2022-germany-global-infrastructure.
Kurlantzick, Joshua. 2020. “Assessing China’s Digital Silk Road Initiative.” Council on Foreign Relations. December 18, 2020. https://www.cfr.org/china-digital-silk-road.
Ministry of Foreign Affairs of the People’s Republic of China. 2022. “Position Paper of the People’s Republic of China on Strengthening Ethical Governance of Artificial Intelligence (AI).” November 17, 2022. https://www.fmprc.gov.cn/mfa_eng/wjdt_665385/wjzcs/202211/t20221117_10976730.html.
———. 2023. “Global AI Governance Initiative.” https://www.mfa.gov.cn/eng/zxxx_662805/202310/P020231020384763963543.pdf.
Oo, Minn Naing. 2000. “Constructive Engagement: A Critical Evaluation.” Legal Issues on Burma Journal, no. 7.
Pernot-Leplay, Emmanuel. 2020. “China’s Approach on Data Privacy Law: A Third Way Between the U.S. and the EU?” Penn State Journal of Law & International Affairs 8 (1). https://papers.ssrn.com/abstract=3542820.
“Recommendation on the Ethics of Artificial Intelligence.” 2020. UNESCO. February 27, 2020. https://en.unesco.org/artificial-intelligence/ethics.
Roberts, Huw, Emmie Hine, and Luciano Floridi. 2023. “Digital Sovereignty, Digital Expansionism, and the Prospects for Global AI Governance.” SSRN Scholarly Paper. Rochester, NY. https://papers.ssrn.com/abstract=4483271.
Roberts, Huw, Emmie Hine, Mariarosaria Taddeo, and Luciano Floridi. 2023. “Global AI Governance: Barriers and Pathways Forward.” SSRN Scholarly Paper. Rochester, NY. https://doi.org/10.2139/ssrn.4588040.
Sim, Dewey. 2023. “From Railways to 5G: Why China Is Plugging into the Digital Silk Road.” South China Morning Post. October 15, 2023. https://www.scmp.com/news/china/diplomacy/article/3237945/railways-5g-why-china-plugging-digital-silk-road.
Stanford HAI. 2021. “Global AI Vibrancy Tool.” Stanford Institute for Human-Centered Artificial Intelligence. 2021. https://aiindex.stanford.edu/vibrancy/.
State Council. 2017. “Xin Yidai Rengong Zhineng Fazhan Guihua [A New Generation Artificial Intelligence Development Plan].” http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.
The White House. 2023. “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI.” The White House. July 21, 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
Trade and Technology Council. 2023. “EU-U.S. Terminology and Taxonomy for Artificial Intelligence: First Edition.” https://www.nist.gov/system/files/documents/noindex/2023/05/31/WG1%20AI%20Taxonomy%20and%20Terminology%20Subgroup%20List%20of%20Terms.pdf.
United Nations Office of the High Commissioner. 2022. “OHCHR Assessment of Human Rights Concerns in the Xinjiang Uyghur Autonomous Region, People’s Republic of China.” https://www.ohchr.org/