Mapping global AI governance: a nascent regime in a fragmented landscape
This blog post is based on a GLOBE article:
Schmitt, L. Mapping global AI governance: a nascent regime in a fragmented landscape. AI and Ethics (2021).
The rise of artificial intelligence (AI) technology and its transformative impact across a wide range of issues pose new challenges to policymakers and other stakeholders around the globe. Whether one looks at the near, medium, or long term, societies face a myriad of legal and ethical challenges, and even existential risks, that they need to address. These risks are exacerbated by the lack of effective global governance mechanisms that could provide, at a minimum, guardrails steering AI in beneficial directions. In a recent paper, GLOBE fellow Lewin Schmitt mapped the key actors and governance initiatives at various levels of government and in the private sector. This blog post summarizes the main findings.
The current global AI governance landscape displays a multitude of governance initiatives by various actors, some dealing with the regulation of very specific AI applications and others with more general, abstract principles of AI ethics and policy. By now, many countries have put forward their own AI strategies, often with direct reference to the international level and to questions of global AI governance. While these alone are valuable objects of study, the article focuses on those actors and initiatives that are by nature transnational or multilateral, i.e., that involve stakeholders from more than two countries. At this stage, almost none of them entail binding legislation; instead, they take the form of political declarations, ethical principles, or partnerships.
There are many ways to organize these actors and initiatives: by their regional or topical scope, by the actors’ nature (e.g., governmental, business, civil society), or by the kind of instrument involved (e.g., international treaty or organization, alliance or partnership, political declaration).
The article employs a two-by-two matrix (see the table below) that distinguishes (a) between action that is embedded in the existing governance architecture and action that establishes new instruments; and (b) between state-led and non-state-led initiatives. Note that the latter dimension considers the origin or agency of the action, not necessarily the organizational nature through which it is ultimately carried out.
|  | State-led | Non-state-led |
| --- | --- | --- |
| Embedded in existing governance architecture | G20; CCW Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (GGE on LAWS) | Council of Europe (CoE); European Commission; Organisation for Economic Co-operation and Development (OECD); IEEE; ISO/IEC |
| New instruments | AI Partnership for Defense |  |
The paper further summarizes and contextualizes the different actors and initiatives, highlighting their trajectories and connections. This exercise is mostly agnostic to the content of these global governance initiatives and arrangements: focusing the analysis on actors and instruments was a deliberate choice to avoid conflating structure with content.
The fragmented landscape that emerges from this exercise is congruent with other authors’ characterizations of the nascent global AI governance architecture as an “unorganised” and “immature” field. Alongside the different actors, we find epistemic communities that are well-connected and often overlap. Governance actors differ in their agenda-setting and norm-setting powers. The analysis also shows rapid progress and first signs of consolidation and convergence. Furthermore, the observed dynamics shed some light on the type of entities involved in the early design of AI governance, which is marked more by the utilization of existing governance instruments than by institutional innovation. In addition, it demonstrates a surprisingly high level of agency on the part of international organizations.
The mapping of the global AI governance landscape allows for several relevant observations:
- First, there is a clear tendency, among both state and non-state actors, to accommodate governance initiatives within the existing architecture. This could have several explanations. States and other global governance actors might be wary of foundational innovation and of starting from scratch, preferring instead to build on existing, proven governance arrangements. Alternatively, more attempts at new instruments might have been made but proved less fruitful and thus do not feature in this overview. In any event, the example of the Global Partnership on Artificial Intelligence (GPAI) suggests a gravitational pull towards established governance mechanisms.
- Second, there is a fairly equitable distribution of labor between national governments (state-led) and international organizations (non-state-led). The community of international organizations moved early to occupy an open policy space, thus carving out considerable competence vis-à-vis their member states. These, in turn, offloaded some of the AI policy work to international organizations (the CoE, and the OECD via the GPAI). This would suggest that states accept international organizations as useful fora for international cooperation and for steering AI development in globally beneficial directions. However, global coordination in this realm has so far not touched upon legally binding treaties. It may well be that governments decided to transfer some authority to IOs only as long as they deal with rather abstract principles or soft governance, but would withdraw or stall as soon as work proceeds towards more regulatory, hard governance. Whether the CoE produces any meaningful conclusions by the end of the year may be a good indication of the potential for such binding international rules.
- Third, international standards organizations play a role in the development of AI governance, as is the case for most emerging technologies. More worrying is the shift towards geopolitics: in recent years, the development of international AI standards has increasingly received attention from key actors such as China, the EU, and the US. Their renewed interest and subsequent strategic engagement risk contention and the encroachment of geopolitical considerations into domains that ought to remain technical [62, 63]. This may not only affect the quality of standards but also obstruct debates around AI ethics. As standards cannot be completely detached from the policy world, scholars of global AI governance need a sound understanding of the proceedings in the international standard-setting arena. Future research should explore the interactions and the means by which governments aim to steer the development of standards to further their own perceived interests.
- Lastly, sub-state actors from the public sector are largely absent from discussions around global AI governance. This stands in stark contrast to other policy domains, such as global climate change governance, where city networks play an important role. It is also somewhat surprising, given that cities are among the focal points of AI rollout and several cities have taken notable action with regard to AI policy. To date, however, these actions remain isolated and do not engage at the supranational or global level.
In light of the fuzzy nature of AI, it is hardly surprising that the current landscape is somewhat fragmented. Promising moves towards some degree of centralization and coordination are found in the prominent role of the OECD. With its epistemic authority and its norm- and agenda-setting power, it has managed to act as a reference point for the G7 and G20. Through its close collaboration with other multilateral actors, such as the European Commission, the UN, and the CoE, and by using the GPAI as a dedicated tool for advancing global AI governance, it may continue to play a leading role.
With all this in mind, the article argues that we are witnessing the first signs of consolidation in this fragmented landscape. The nascent AI regime that emerges is polycentric and fragmented but gravitates around the OECD, which holds considerable epistemic authority and norm-setting power. It is polycentric because it features different epistemic communities and multiple centers of decision-making, each operating with some degree of autonomy. It is fragmented because there is substantial overlap in the membership of different actors and in the topics addressed by their initiatives; the well-connected epistemic communities overlap in similar ways. As with other polycentric governance architectures, global AI governance will likely continue to struggle with the challenge of coordination. While epistemic and membership overlap may benefit consolidation and convergence, topic overlap tends to foster fragmentation and adds complexity to the regime.
As noted above, the analysis is mostly agnostic to the content of these governance initiatives and arrangements. Nevertheless, a quick look at the main developments suggests convergence on a certain set of AI values and principles, as put forward by the European Commission and the OECD, centered on trustworthy, human-centric AI.
Such terms are, of course, abstract and somewhat vague, leaving room for interpretation. The interpretation, contextualization, and operationalization of AI values will no doubt face major contestation by different actors. While China is sidelined from most of the above initiatives, its role in AI governance cannot be overstated. The government has signalled willingness to engage in global governance as a responsible actor and, specifically on AI ethics, has taken some steps towards conciliation. Yet it will want to interpret AI ethics in accordance with its own cultural context and to promote these views globally. Hence, how China engages with the GPAI and other governance initiatives (and vice versa) will be an interesting space to watch and leaves ample room for future research.
Lewin Schmitt is a Pre-Doctoral Fellow at Institut Barcelona d’Estudis Internacionals (IBEI).