AI governance: five takeaways from Paris Peace Forum 2023

By Nick Newsom | 17 Nov 2023 | Blog

Paris Peace Forum 2023

As artificial intelligence (AI) systems grow more advanced and ubiquitous, questions around how we can mitigate potential risks whilst still promoting innovation have come to the forefront.

Held annually since 2018, the Paris Peace Forum has become one of the highest-profile stops on the global governance circuit. How to manage advances in AI so that they serve humanity and developmental needs, whilst respecting human rights, is one of the primary themes technology, government, and civil society leaders discuss at the two-day event.

Siren was at this year's event showcasing our AI-powered counter-disinformation platform, DALIL MENA. Here are five key takeaways on the complex challenges of AI governance that emerged from the discussions this year.

The need for global cooperation

While nations may differ in their specific priorities and approaches, speakers emphasised the importance of developing a common foundation and ethical framework for AI governance globally.

Zeynep Tufekci, Co-Chair of Global Tech Thinkers, cautioned that this should happen before the technology becomes cheap and available at scale if we are to avoid potentially wide-reaching destabilising effects from AI.

While innovation is likely to always outpace governance, Microsoft's Brad Smith advocated for the two to move forward “within a reasonable distance” of each other, through technical and ethical standards, national rules, and global coordination.

Some examples include the ongoing development of the European Union’s AI Act, the Charter on AI in the media, and UNESCO’s ‘Recommendation on the Ethics of Artificial Intelligence,’ which was adopted by all 193 Member States in November 2021. These efforts are united in their prioritisation of outcomes and rights.

Many panellists argued that AI policy should focus on serving social goods over commercial interests. To nurture this shift, UNESCO's Gabriela Ramos advocated for building government capacities so those in power better understand AI technologies and can put in place market incentives that encourage investment in AI with positive social outcomes.

The need for robust monitoring

Several new AI monitoring tools were highlighted that aim to build an evidence base for policymaking.

Gabriela Ramos pointed to the Recommendation’s supporting Readiness Assessment Methodology, which helps countries understand how ready they are to implement AI ethically and responsibly, and identify needed institutional and regulatory changes.

Karine Perset, who heads the AI unit at the OECD, showcased the organisation’s AI Incident Monitor, which tracks where AI risks are materialising into hazards or actual harms, and collates reports that flag potential future threats.

Giacomo Persi Paoli from the UN Institute for Disarmament Research highlighted UNIDIR’s AI Policy Portal tool for monitoring national AI policies, strategies and structures, and identifying good practices.

The tool provides insights on how different governments are regulating the deployment, development and acquisition of AI, and integrating it into their military capabilities.

The Global Index on Responsible AI was also showcased as a measurement framework that benchmarks what responsible AI means in practice. Research ICT Africa's Rachel Adams explained that the index contains 87 indicators across 29 thematic areas, and that country experts in 135 countries around the world are gathering data against this framework.

The need for multistakeholder participation

The accumulation of decision-making power in the hands of a few tech companies is a governance flaw that panellists spent much time discussing. This power asymmetry has resulted in inadequate risk management, “a disparity between what AI is used for and what it is needed for,” and algorithmic bias due to a lack of representation, Rachel Adams said.

To address this, panellists called for the removal of barriers preventing the inclusion of diverse voices in AI policymaking. Linda Bonyo of Lawyers Hub Kenya commented that these include restrictive visa policies, which frequently prevent people from the Global South from accessing policymaking fora. Jamila Venturini of Derechos Digitales added that opaque high-level AI governance processes also hinder Global South participation.

Part of addressing this must involve “building a new public digital literacy about the harms, opportunities and mechanisms of governance that actually lead to a robust public conversation,” Vilas Dhar of the Patrick J. McGovern Foundation said.

Denmark's tech ambassador Anne Marie Engtoft Meldgaard also advocated assertive cooperation and collaboration among Global South countries in defining what kind of AI is needed, stating that they could pool their AI purchasing power to demand greater representation in AI decision-making.

The need for transparency and accountability

Many AI systems are currently proprietary. Speakers explained that these black-box algorithms make it hard to audit data sets for bias, understand how the systems work, and assess what outcomes they seek to achieve.

Open-source development was promoted as necessary for advancing AI and training it on wider datasets. But many speakers advocated a broader interpretation of openness, viewing transparency and explainable algorithms as fundamental enablers of inclusive AI governance. They pointed to the role of governments in mandating civil society access to information, which would enable greater participation in discussions about what communities want from AI.

Accountability was also raised as a guiding principle to facilitate a common approach. Jamila Venturini stressed that international organisations must push countries in the Global North to “take responsibility for the commercial actors that are selling human rights abusive technologies to Global South countries … [and for] the environmental impact of mining in the Global South that is used to sustain the AI industry.”

Anne Marie Engtoft Meldgaard also advocated for more effective and inclusive fora where large online platforms that have outsized influence on geopolitics can be held accountable for the decisions and promises they make.

The need to address digital divides

Inclusive AI governance and development cannot be achieved without attention to digital divides. Speakers cited a lack of data centres, computing power, internet access and AI skills across the Global South.

Emphasising that inequality is a “deeply rooted socioeconomic problem,” Anne Marie Engtoft Meldgaard highlighted the need for “massive public infrastructure and investment” to build representative data and deploy AI meaningfully worldwide.

Linda Bonyo argued that fixing basic digitisation issues should come before advanced AI policy. “Without data, we actually can't get to what is aspirational, which is artificial intelligence … so it would be useful to fix that problem first, and then go to ancillary laws,” she said.

Speakers also highlighted how the high cost of entering the industry contributes to digital divides. Collaboration between the public and private sector was promoted as key for developing digital public infrastructure. Hitachi's Kazuo Noguchi noted that knowledge exchange from the private sector in the Global North toward the Global South could help lower entry and maintenance costs.

In sum, the discussions made clear that inclusive and equitable AI governance requires a continuous commitment to multistakeholder dialogue and concrete policy measures that centre human rights and social good.

Though the path forward is complex, we can take heart that the world has come together many times before to successfully address difficult global governance issues. As Zeynep Tufekci put it, “we've done this before, and when it works, we no longer see it.”