Characterizing the operation of 𝕏's crowd-sourced moderation system in 13 countries.
Overall, we observe that by relying on consensus rather than factuality, 𝕏's Community Notes struggle to moderate polarizing content. As YouTube, TikTok, and Meta move to replace expert fact-checking with crowd-sourced systems, our results call for risk assessments comparing moderation outcomes for polarizing content widely disseminated during elections.
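Community Notes scores each note with a bridging-based matrix factorization: a note is deemed helpful only when raters who usually disagree both rate it highly. Below is a minimal numpy sketch of that idea; the toy ratings, training loop, hyperparameters, and threshold are my simplifications, not 𝕏's production code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: rows = raters, cols = notes, 1 = helpful, 0 = not, NaN = unrated.
R = np.array([
    [1.0, 1.0, np.nan, 0.0],
    [1.0, np.nan, 0.0, 0.0],
    [np.nan, 1.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
])
n_users, n_notes = R.shape
mask = ~np.isnan(R)

# Model: rating ~ mu + user_intercept + note_intercept + user_factor * note_factor.
mu = 0.0
bu, bn = np.zeros(n_users), np.zeros(n_notes)
fu, fn = rng.normal(0, 0.1, n_users), rng.normal(0, 0.1, n_notes)

lr, reg = 0.05, 0.03  # assumed hyperparameters for this toy fit
for _ in range(2000):
    pred = mu + bu[:, None] + bn[None, :] + np.outer(fu, fn)
    err = np.where(mask, R - pred, 0.0)
    mu += lr * err.mean()
    bu += lr * (err.sum(1) - reg * bu)
    bn += lr * (err.sum(0) - reg * bn)
    fu += lr * (err @ fn - reg * fu)
    fn += lr * (err.T @ fu - reg * fn)

# The 1-D factor absorbs the partisan axis; a note is published only if its
# intercept (cross-divide agreement) clears a threshold (~0.40 in X's docs).
for j, b in enumerate(bn):
    print(f"note {j}: intercept {b:+.2f} ->", "helpful" if b > 0.40 else "not helpful")
```

Because the factor term soaks up one-sided enthusiasm, a note praised only by one camp earns a low intercept and never ships; that is exactly how polarizing claims can go unmoderated.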
Exposing how 𝕏 allows brands to target users based on sensitive personal data.
We inspected the targeting options of ads disclosed in 𝕏's Ads Repository, and what we found is concerning.
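As an illustration of the kind of check we ran, the sketch below flags targeting criteria that map onto special categories of personal data under GDPR Article 9. The record layout and keyword lists are hypothetical stand-ins, not 𝕏's actual repository schema.

```python
# Hypothetical targeting records; 𝕏's actual Ads Repository schema differs.
ads = [
    {"ad_id": "a1", "targeting_keywords": ["mortgage", "fitness"]},
    {"ad_id": "a2", "targeting_keywords": ["depression", "islam", "crypto"]},
]

# Special categories of personal data (GDPR Art. 9): health, religion, politics...
SENSITIVE = {
    "health": {"depression", "diabetes", "hiv", "pregnancy"},
    "religion": {"islam", "christianity", "judaism"},
    "politics": {"socialist", "conservative", "far-right"},
}

for ad in ads:
    hits = {
        category
        for kw in ad["targeting_keywords"]
        for category, terms in SENSITIVE.items()
        if kw.lower() in terms
    }
    if hits:
        print(f"{ad['ad_id']} targets sensitive categories: {sorted(hits)}")
```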
Exposing Large-Scale Health Scams Targeting EU Users.
Our investigation reveals Meta's ongoing failure to moderate its advertisement ecosystem, particularly for health-related content.
We identified over 46k advertisements promoting unapproved drugs and deceptive health claims, shown by Meta to European users over 292 million times.
These ads violated at least 15 of Meta's own Advertising and Community Standards, featuring celebrity deepfakes, impersonation of medical professionals and news outlets, and misleading health claims.
This activity continues unabated into 2025, suggesting systemic failures in Meta's risk mitigation approaches required under DSA Article 34.
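Ads like these are publicly discoverable through Meta's Ad Library API. A minimal query sketch follows, assuming a valid access token; the search terms, fields, and crude keyword screen are illustrative, not our actual detection pipeline.

```python
import requests

TOKEN = "YOUR_ACCESS_TOKEN"  # requires Ad Library API access

# Query the ads_archive endpoint for EU-delivered ads matching a search term.
resp = requests.get(
    "https://graph.facebook.com/v19.0/ads_archive",
    params={
        "access_token": TOKEN,
        "search_terms": "miracle cure",  # illustrative
        "ad_type": "ALL",
        "ad_reached_countries": '["FR","DE"]',
        "fields": "id,page_name,ad_creative_bodies,ad_delivery_start_time",
        "limit": 100,
    },
    timeout=30,
)
resp.raise_for_status()

for ad in resp.json().get("data", []):
    body = " ".join(ad.get("ad_creative_bodies", [])).lower()
    # Crude keyword screen standing in for the study's trained classifiers.
    if any(t in body for t in ("cure", "miracle", "no side effects")):
        print(ad["id"], ad.get("page_name"))
```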
Exposing Pornographic Ads on Meta Despite Content Moderation Claims.
I detected thousands of pornographic advertisements featuring explicit adult content that were reviewed, approved, and distributed by Meta in clear violation of their Community and Advertising Standards.
To highlight the double standard, I uploaded the exact same pornographic visuals as organic user posts, which were automatically taken down by Meta.
Meta has the technology to detect pornographic content; my observations suggest they simply chose not to use it for advertisements, their core source of revenue.
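One way to verify that two uploads carry the exact same visual is perceptual hashing, which survives re-encoding and mild resizing. A short sketch using the `imagehash` library; the file paths and distance threshold are assumptions.

```python
from PIL import Image
import imagehash

# Hypothetical paths: one creative approved as an ad, the same visual
# removed when posted organically.
ad_hash = imagehash.phash(Image.open("approved_ad_creative.jpg"))
post_hash = imagehash.phash(Image.open("removed_organic_post.jpg"))

distance = ad_hash - post_hash  # Hamming distance between 64-bit pHashes
print(f"pHash distance: {distance}")
if distance <= 4:  # assumed "same visual" threshold
    print("Identical creative: approved as an ad, removed as an organic post.")
```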
From ads on Meta to spoofed news outlet articles relayed on 𝕏 and leaked SDA documents.
In collaboration with CheckFirst and Reset.Tech, I cross-referenced thousands of Doppelgänger ads with leaked documents from the Social Design Agency (SDA), the Kremlin-linked company behind the propaganda operation. Through this analysis, we documented SDA's sustained access to Meta's advertising ecosystem even after the agency was sanctioned by the US and EU.
Leveraging the methodology used to detect coordinated inauthentic behavior, I monitor the Doppelgänger campaign.
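A core signal in that methodology is near-simultaneous co-sharing: accounts that repeatedly post the same URLs within seconds of one another. A minimal pandas sketch of the signal on toy data; the time window and thresholds are assumptions, not the exact pipeline.

```python
from collections import Counter
from itertools import combinations

import pandas as pd

# Toy share log; the real data covers posts relaying the spoofed articles.
shares = pd.DataFrame({
    "account": ["a", "b", "a", "b", "c", "a", "b"],
    "url": ["u1", "u1", "u2", "u2", "u2", "u3", "u3"],
    "ts": pd.to_datetime([
        "2024-05-01 10:00:00", "2024-05-01 10:00:20",
        "2024-05-02 09:00:00", "2024-05-02 09:00:15", "2024-05-02 18:00:00",
        "2024-05-03 11:00:00", "2024-05-03 11:00:30",
    ]),
})

WINDOW = pd.Timedelta(seconds=60)  # assumed co-sharing window
pairs = Counter()
for _, grp in shares.groupby("url"):
    rows = grp.sort_values("ts")
    for (_, r1), (_, r2) in combinations(rows.iterrows(), 2):
        if r1["account"] != r2["account"] and r2["ts"] - r1["ts"] <= WINDOW:
            pairs[tuple(sorted((r1["account"], r2["account"])))] += 1

# Account pairs co-sharing many URLs near-simultaneously are candidates
# for coordinated inauthentic behavior.
for pair, n in pairs.most_common():
    if n >= 2:  # assumed threshold
        print(f"{pair}: {n} near-simultaneous co-shares")
```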
Ahead of the European and French elections, let's take a look at political exchanges on 𝕏.
By collecting over a million retweets of French political figures, I mapped the landscape of political exchanges on 𝕏. Filtering by topic reveals markedly different landscapes: Emmanuel Macron's party aligns closely with the Ecologists on the Russian war in Ukraine but is diametrically opposed to them on Israel's war on Gaza, where it sits closer to the far right.
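Concretely, such a landscape is a retweet graph laid out with a force-directed algorithm, with communities detected on top. A minimal networkx sketch with toy edges standing in for the million-plus retweets; the study's actual layout and community-detection choices may differ.

```python
import networkx as nx
from networkx.algorithms import community

# Toy retweet edges: (retweeter, original author).
edges = [
    ("user1", "macron"), ("user1", "ecologist_mp"),
    ("user2", "macron"), ("user2", "ecologist_mp"),
    ("user3", "farright_mp"), ("user4", "farright_mp"),
]
G = nx.DiGraph(edges)

# Force-directed layout: accounts retweeted by the same users land close
# together, which is what makes political communities visible as clusters.
pos = nx.spring_layout(G, seed=42)
print({n: pos[n].round(2) for n in ("macron", "farright_mp")})

# Community detection on the undirected projection.
for i, com in enumerate(community.greedy_modularity_communities(G.to_undirected())):
    print(f"community {i}: {sorted(com)}")
```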
Analyzing over 200 million ads approved by Meta reveals major shortcomings in its moderation.
Thanks to Article 39 of the Digital Services Act, with AI Forensics we analyzed tens of millions of advertisements run in the EU. By training language models to detect political advertisements, we assessed Meta's moderation and documented major shortcomings.
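The actual work trained language models; as a simplified stand-in for the classification idea, here is a TF-IDF plus logistic-regression sketch on invented ad texts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy ads: 1 = political, 0 = not.
texts = [
    "Vote for a stronger Europe this June",
    "Support our candidate against immigration",
    "Our party will cut your taxes",
    "50% off running shoes this weekend",
    "New pizzeria opening near you",
    "Best mortgage rates, apply today",
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

for ad in ("Join the rally for climate justice", "Flash sale on laptops"):
    p = clf.predict_proba([ad])[0, 1]
    print(f"{ad!r}: P(political) = {p:.2f}")
```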
Examination of algorithmic recommendations on Amazon's Belgian and French bookstores.
Through a collaboration between AI Forensics and CheckFirst, we revealed that Amazon's recommender systems surface and amplify disinformation titles on its Belgian and French bookstores.
We urged regulators to ensure that Amazon comprehensively addresses the impact of its systems on the dissemination and amplification of disinformation content.
By collecting what platforms recommend to users, I characterize their inner workings.
To characterize the recommender systems of Facebook, Google Search, YouTube, and Twitter, I developed a browser add-on that collects algorithmically curated content as well as user interactions. This longitudinal data collection, running since Fall 2022, has enabled numerous studies, such as an audit of Twitter's 'For You' feed and a characterization of the differences between YouTube's API and what real users are exposed to.
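At its simplest, the API-versus-exposure comparison is a set overlap between what YouTube's Data API lists for a seed video and what the add-on observed in real users' feeds. A minimal sketch on toy video IDs; the data and the Jaccard framing are illustrative.

```python
# Toy data: video IDs recommended alongside the same seed video, once via
# YouTube's Data API and once as logged by the add-on for real users.
api_recs = {"vidA", "vidB", "vidC", "vidD"}
user_recs = {"vidB", "vidC", "vidE", "vidF", "vidG"}

overlap = api_recs & user_recs
jaccard = len(overlap) / len(api_recs | user_recs)
print(f"shared: {sorted(overlap)}")
print(f"API-only: {sorted(api_recs - user_recs)}")
print(f"user-only (personalization, ads, ...): {sorted(user_recs - api_recs)}")
print(f"Jaccard overlap: {jaccard:.2f}")
```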
Social macroscope for a better understanding of the circulation of climate change narratives.
In March 2023, prior to the release of the IPCC AR6, we leveraged extensive databases containing over 500 million tweets, curated by the amazing Maziyar Panahi, to analyze the global discussion on climate change.
I am a researcher conducting adversarial audits of online platforms to uphold platform accountability and safeguard informational ecosystems.
Through data donation, large-scale web scraping, and reverse-engineering of internal APIs, I collect and analyze original datasets to characterize platform and actor behaviors.
Based in Paris, I hold a PhD in algorithmic auditing and am currently a resident researcher at the Complex Systems Institute and an associated researcher at the Medialab of Sciences Po.
Additionally, I serve as an expert and freelance researcher for NGOs and for national and European institutions. My research receives extensive media coverage, contributing to informed public discourse.