AI Governance

AI, Global Governance, and Digital Sovereignty

By Swati Srivastava and Justin Bullock

October 23, 2024

Abstract

This essay examines how Artificial Intelligence (AI) systems are becoming more integral to international affairs by affecting how global governors exert power and pursue digital sovereignty. We first introduce a taxonomy of multifaceted AI payoffs for governments and corporations related to instrumental, structural, and discursive power in the domains of violence, markets, and rights. We next leverage different institutional and practice perspectives on sovereignty to assess how digital sovereignty is variously implicated in AI-empowered global governance. States seek sovereign control over AI infrastructures in the institutional approach, while establishing sovereign competence through AI infrastructures in the practice approach. Overall, we present the digital sovereignty stakes of AI as related to entanglements of public and private power. Rather than predicting that technology companies will replace states, we argue that AI systems will embed in global governance to create dueling dynamics of public/private cooperation and contestation. We conclude by sketching future directions for IR research on AI and global governance.

Introduction

This essay examines how Artificial Intelligence (AI) systems are becoming more integral to international affairs by affecting how global governors exert power and invoke digital sovereignty. AI implicates governing power in different ways. For states, AI is an “enabling force” (Horowitz, 2018, p. 39; Arsenault and Kreps, 2024, p. 960) with promises to improve capabilities and competitiveness (Bullock et al., 2024). In 2022, U.S. federal agencies “reported about 1,200 AI use cases—specific challenges or opportunities that AI may solve” (Government Accountability Office, 2023). In light of this perceived promise, states are racing to secure AI as a national asset before their rivals (Horowitz et al., 2024, p. 924) and seeking to set the agenda in AI governance debates (Bradford, 2023; Canfil and Kania, 2024). Yet states must also rely on globalized technology infrastructures of data, computing resources, and human talent that make zero-sum competition difficult (Ding, 2024, p. 882). Meanwhile, AI is core to the continued dominance of traditional Big Tech firms Alphabet, Meta, Amazon, and Microsoft (Zuboff, 2019; Srivastava, 2023). Newer players OpenAI and Anthropic have seen their foundation models integrated into everyday chatbots and AI assistants, while chip designer Nvidia and manufacturer TSMC have seen their fortunes soar. The centrality of private actors to AI advances has led to claims of an emerging “technopolar” order in which “technology companies wield the kind of power in the domains once reserved by nation-states” (Bremmer and Suleyman, 2023, p. 28). In response, states have pursued schemes for “digital sovereignty” (Bellanova et al., 2022; Broeders et al., 2023; Adler-Nissen and Eggeling, 2024).

But what is AI? Over the past 75 years, “artificial intelligence” has been used to describe various machine architectures. Work on AI began shortly after the invention of digital computers in the 1940s (Dyson, 2012). Early approaches to AI included architectures of symbolic manipulation and statistical inference. Throughout the second half of the 20th century and the first decade of the 21st, interest in and progress on AI systems waxed and waned through a series of “AI summers” followed by “AI winters.” During the same period, digitalization marched forward with the rise of personal computers and the world wide web (Dyson, 2012). These developments highlighted the governance challenges and opportunities arising from vast increases in computation and its global interconnectedness (DeNardis, 2020). Digital governance, for example, has become its own area of inquiry (Milakovich, 2012; Luna-Reyes, 2017; Manoharan et al., 2023). Modern machine learning and neural network forms of AI, however, became markedly more capable in the early 2010s. The creation of “frontier” large language models over the last decade has ushered in AI systems with enhanced capabilities in a wide array of areas, including reading comprehension, image and speech recognition, and predictive reasoning (Kiela et al., 2023). Human judgment in governance is by some accounts being both augmented and replaced by machine intelligence (Bullock, 2019; Young et al., 2019). At the same time, the ethical AI community warns against perpetuating “AI hype” centered on future risks (Bender et al., 2021) while ignoring current AI harms to marginalized groups (Eubanks, 2018; Noble, 2018; Benjamin, 2019). Ultimately, as a “general purpose technology,” frontier AI is unlike a specific tool or weapon and more like electricity in its transformative potential (Horowitz, 2018).

What does the proliferation of frontier AI systems mean for global governance? International Relations (IR) scholarship has examined how control of data and computing resources entrenches state power (Farrell and Newman, 2023), grows private power (Atal, 2021; Srivastava, 2023), and raises new dilemmas for human rights (Wong, 2023), regulatory regimes (Gorwa, 2024), and ethics (Erskine, 2024). State use of AI tools for digital repression, such as censorship and surveillance, is perhaps most evident in China’s treatment of the Uyghurs, but the same technologies are exported globally and are also prevalent in democracies (Deibert, 2020; Feldstein, 2021). AI riches accrue to large corporations that can invest billions in computing power, data, and human talent (Vipra and Korinek, 2023), all the while operating without public accountability (Lehdonvirta, 2022). As AI advancements create market concentration, the regulatory gap widens (Culpepper and Thelen, 2020; Seidl, 2022).

Our aim in this agenda-setting essay is (1) to map how frontier AI systems equip public and private global governors with new ways of exercising power and (2) to assess the resulting implications for the emergence of digital sovereignty. We argue that AI systems do not affect power or sovereignty in singular ways. Instead, we create a taxonomy of AI payoffs related to instrumental, structural, and discursive power (Lukes, 1974; Fuchs, 2013). The purpose of the taxonomy is to take stock of the variety of ways in which AI systems have been, or might be, used by public and private governors (on varieties of global governors, see Avant et al., 2010). For public global governors such as states, we explore the development of powerful weapons with decreased human oversight (instrumental power), increases in internal control through supercharged surveillance capacity (structural power), and the ability to tailor propaganda to individual susceptibilities (discursive power). For private global governors such as corporations, we explore the invasive control of employees (instrumental power), the concentration of computational resources (structural power), and the alteration of the trust landscape (discursive power). We also discuss how autonomous AI models may exert more agentic control over governance decision-making, including coming into conflict with the goals of the humans and organizations that create them. By including private governors and potential AI agents, our taxonomy moves beyond recent overviews of AI in international politics that are largely state-centric (Horowitz et al., 2024, pp. 929-930) or skeptical of AI’s unforeseen transformative potential (Arsenault and Kreps, 2024, p. 960).

We present the digital sovereignty stakes of AI as related to entanglements of public and private power. Policy experts lament that “big technology firms have effectively become independent, sovereign actors in the digital realms they have created” (Bremmer and Suleyman, 2023, p. 28), echoing claims by some IR scholars that online platforms now exhibit “virtual sovereignty” (Kelton et al., 2022). Within this context, some states are pursuing “sovereign AI” in their national strategies, as seen in India’s assertion: “We are determined that we must have our own sovereign AI” (Barik, 2023). But India’s ability to meet this objective is questionable (Panday and Samdub, 2024). Indeed, the American firm Nvidia regards its chips as integral to state pursuit of sovereign AI, which the company defines as “a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks” (Nvidia, 2024, emphasis added). The French government also explicitly connects AI and sovereignty, recently acknowledging that “our lag in the field of artificial intelligence undermines our sovereignty. Weak control of technology effectively implies a one-way dependence on other countries. In the privatized and ever-evolving field of AI, public power appears largely outmatched, limiting our collective ability to make choices aligned with our values and interests” (Artificial Intelligence Commission, 2024, p. 8). These efforts follow a broader resurgence of sovereignty talk in global discourse (Paris, 2020). In reality, digital sovereignty has multifaceted meanings that authorize a range of policy practices (Bellanova et al., 2022; Broeders et al., 2023; Adler-Nissen and Eggeling, 2024).

We build on this latter work to discuss AI’s implications for digital sovereignty in two ways. As an international institution, sovereignty is state-centric (Onuf, 1991) and relies on states keeping nonstate actors out of their exclusive club (Barkin, 2021). In the institutional perspective, the AI race creates opportunities for states to assert digital sovereignty over AI infrastructures and to be seen as taking on Big Tech firms as potential rivals. Recent European regulations such as the Digital Services Act, Digital Markets Act, and AI Act project Europe as an autonomous actor over private, particularly non-European, AI infrastructures. However, sovereignty is also an ongoing social practice (Wendt, 1992; Biersteker and Weber, 1996), where performing sovereign functions may require states to work with nonstate actors (Srivastava, 2022a). In the practice perspective, digital sovereignty is achieved through largely private AI infrastructures and depends on governments and companies co-developing capacity for AI innovation and regulation. Staying with the European context, the Data Act and the Data Governance Act aim to monetize public data for European companies, while the Digital Services Act adopts a “co-regulatory” model with tech platforms. Thus, AI’s diverse power payoffs in global governance are likely to produce dual dynamics of states pursuing sovereignty over AI and sovereignty through AI.

The rest of the essay proceeds as follows. The next section introduces the classic “three faces of power” approach in global governance. The third section maps how public and private governors may use AI to alter their instrumental, structural, and discursive power in the domains of violence, markets, and rights. We also consider how autonomous AI agents may come to be seen as decision-makers in global affairs. In the fourth section, we discuss AI’s implications for digital sovereignty from both the institutional and practice perspectives. The fifth section concludes with directions for future IR research.

Report Link

Download the full PDF of the report