Team Dalil's reflections from the Paris AI Summit

By Dalil team

21 Feb 2025 | Blog

Siren Chair Carole Alsharabati (left) and Head of AI Amer Mouawad (right) in a selfie taken during the Paris AI Action Summit in February 2025

We’re fresh back from Paris and the AI Action Summit, and wow, what an experience. It was great to present our counter-disinformation project, Dalil, to such a diverse group of people – activists, innovators, policymakers, you name it. Here are some of our reflections on the whole thing.

First off, the summit felt particularly relevant given the current global context. As Justin Vaïsse, founder and director general of the Paris Peace Forum, noted, there is a palpable sense of dysfunction in the global order, marked by security challenges and significant disparities in development. In that kind of environment, grappling with something as transformative as AI needs a platform for discussion and, hopefully, some action. Plus, let's be real, there's a global race happening in AI, with countries vying for leadership and pouring energy into innovation and investment. That emphasis ran through the leaders' announcements and speeches, from President Macron's commitment to a substantial €109 billion investment in AI to the economic opportunities highlighted by U.S. Vice President Vance. It's definitely a competitive landscape.

Within this dynamic, Dalil addresses a critical need: combating disinformation. As an application designed to identify and analyse disinformation in text, audio, and video, Dalil's role is particularly salient at a time when AI technologies are also being exploited to produce and accelerate the spread of false information. We gather data from social and traditional media, cluster it by theme, rank it by threat level, and present it to users so they can fight back against the bad stuff. Being able to showcase Dalil at the summit, alongside other public-interest AI projects, felt like a real opportunity to highlight the positive ways AI can be used to address societal challenges.
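To make that pipeline a bit more concrete, here is a minimal, hypothetical sketch of a cluster-then-rank flow in Python. Dalil's actual ingestion, models, and threat scoring aren't described here, so the `Item` fields, the TF-IDF/k-means clustering, and the reach-based threat heuristic below are illustrative assumptions only, not the real implementation.

```python
# Hypothetical sketch of a "cluster by theme, rank by threat" pipeline.
# The field names, clustering choice, and threat heuristic are assumptions.
from dataclasses import dataclass
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans


@dataclass
class Item:
    text: str   # claim text collected from social or traditional media
    reach: int  # e.g. shares/views, used here as a crude threat proxy


def cluster_and_rank(items: list[Item], n_themes: int = 3) -> list[list[Item]]:
    """Group items into themes and rank themes by a simple threat score."""
    vectors = TfidfVectorizer().fit_transform(item.text for item in items)
    labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(vectors)

    themes: dict[int, list[Item]] = {}
    for item, label in zip(items, labels):
        themes.setdefault(label, []).append(item)

    # Placeholder threat score: total reach per theme. A real system would
    # combine virality, potential harm, and verification status.
    return sorted(themes.values(), key=lambda grp: sum(i.reach for i in grp), reverse=True)


if __name__ == "__main__":
    sample = [
        Item("Vaccine X causes illness Y, officials say", 12000),
        Item("New study: vaccine X linked to illness Y", 8000),
        Item("Election results were altered overnight", 30000),
        Item("Ballot counts changed after polls closed", 25000),
        Item("Celebrity Z secretly funded a think tank", 500),
    ]
    for rank, theme in enumerate(cluster_and_rank(sample), start=1):
        print(f"Theme {rank}: {[i.text[:40] for i in theme]}")
```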

Thinking about the future of AI, it’s pretty clear we’re heading toward what’s being called Agentic AI. It’s not just about AI understanding language anymore; it’s about AI that can reason, interact, and eventually take action. Imagine AI that can go beyond just observing to actually solving problems in the real world – that’s the direction we’re heading. This evolution presents both significant opportunities and challenges. While AI can automate routine tasks and enable humans to focus on more complex and creative pursuits, it also necessitates a focus on human-centred AI principles, emphasising dignity, agency, and community.

When it comes to AI safety and regulation, the vibe in Paris felt a bit different from previous summits. Branded as an "Action" summit, the event put far more weight on investment and innovation. There were still calls for binding rules on safety, cybersecurity, and disinformation, but safety clearly took a backseat. Some groups felt the summit declaration wasn't strong enough on safety commitments, and that more action is needed to prevent power concentrating in the small number of entities that control key AI technologies and infrastructure, which could hinder broader access and innovation.

Overall, it’s clear there is strong global eagerness to embrace the potential of AI, particularly from an economic standpoint. Against this backdrop, we need to keep a critical eye out for snake oil and misleading narratives that equate all AI innovation with societal advancement. Presenting Dalil reinforced for us the urgent need for tools that ensure AI is used responsibly and ethically, especially in the fight against disinformation. As AI continues its rapid evolution towards more agentic capabilities, it’s crucial that we keep these considerations front and centre, working across disciplines to build a future where AI truly benefits humanity.