Platform Accountability
Information Manipulation
Computational Social Science

A Pill Hard to Swallow - Meta's Failing Moderation

Exposing Large Scale Health Scams Targeting EU Users.

Our investigation reveals Meta's ongoing failure to moderate its advertisement ecosystem, particularly for health-related content.

We identified over 46k advertisements containing unapproved drugs and deceptive health claims that were shown by Meta to European users over 292 million times.

These ads violated at least 15 of Meta's own Advertising and Community Standards, featuring celebrity deepfakes, impersonation of medical professionals and news outlets, and misleading health claims.

This activity continues unabated into 2025, suggesting systemic failures in Meta's risk mitigation approaches required under DSA Article 34.

Meta's Double Standards on Pornography

Exposing Pornographic Ads on Meta Despite Content Moderation Claims.

I detected thousands of pornographic advertisements featuring explicit adult content that were reviewed, approved, and distributed by Meta in clear violation of their Community and Advertising Standards.

To highlight the double standard, I uploaded the exact same pornographic visuals as organic user posts, which were automatically taken down by Meta.

Meta has the technology to detect pornographic content; our observations suggest they simply chose not to use it for advertisements, their core source of revenue.

Monitoring of the Doppelgänger Operation

From ads on Meta, to spoofed news outlet articles relayed on X and leaked SDA documents.

In collaboration with CheckFirst and Reset.Tech, I cross-referenced thousands of Doppelgänger ads with leaked documents from the Social Design Agency (SDA), the Kremlin-linked company behind the propaganda operation. Through this analysis, we documented SDA's sustained access to Meta's advertising ecosystem even after it was sanctioned by the US and EU.

Leveraging methodologies for detecting coordinated inauthentic behavior, I continuously monitor the Doppelgänger campaign:

  • Ahead of the 2024 EU Parliamentary Election, we identified Doppelgänger ads targeting Italy and Poland, in addition to the historic targets, France and Germany.
  • Doppelgänger ads disseminated propaganda and fake articles supporting far-right candidates in France.
  • Among the ads uploaded by the threat actors, I detected a screenshot of their internal VK Teams, providing valuable insights into their operations.
  • Monitoring and detecting new spoofed outlets, including historia.fyi, spektrum.cfd, closermag.eu, and psychologies.top.

Politoscope: Observatory of French Political Twittersphere

Ahead of the European and French elections, let's take a look at political exchanges on X.

By collecting over a million retweets of French political figures, I mapped the landscape of political exchanges on X. Filtering by topic reveals markedly different landscapes: Emmanuel Macron's party aligns closely with the Ecologists on the Russian war in Ukraine, but is diametrically opposed on Israel's war on Gaza, where it sits closer to the far-right.
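Mapping such a landscape typically means building a weighted retweet graph and extracting communities from it. As a minimal sketch (not the Politoscope pipeline itself; function names and the toy data are illustrative), using networkx's Louvain implementation:

```python
import networkx as nx  # third-party: pip install networkx (>= 2.8 for Louvain)

def build_retweet_graph(retweets):
    """Build an undirected graph from (retweeter, author) pairs;
    edge weights count how often one account retweeted the other."""
    g = nx.Graph()
    for src, dst in retweets:
        if g.has_edge(src, dst):
            g[src][dst]["weight"] += 1
        else:
            g.add_edge(src, dst, weight=1)
    return g

def detect_communities(g, seed=0):
    """Partition accounts into communities by modularity maximization."""
    return nx.community.louvain_communities(g, weight="weight", seed=seed)
```

Filtering the input retweets by topic before building the graph is what produces the topic-dependent landscapes described above.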

Meta's Ads: Scams & Pro-Russian Propaganda

Analyzing over 200 million ads approved by Meta reveals major shortcomings in their moderation.

Thanks to Article 39 of the Digital Services Act, with AI Forensics, we analyzed tens of millions of advertisements run in the EU. By training language models to detect political advertisements, we assessed Meta's moderation and documented:

  • Widespread Non-compliance: Only a small fraction of undeclared political ads are caught by Meta's moderation system.
  • Ineffective Moderation: 60% of ads moderated by Meta do not adhere to their own guidelines concerning political advertising.
  • Significant Reach: A specific pro-Russian propaganda campaign reached over 38 million users in France and Germany, with most ads not being identified as political in a timely manner.
  • Rapid Adaptation: The influence operation has adeptly adjusted its messaging to major geopolitical events to further its narratives.
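The study itself relied on trained language models; purely as a toy illustration of the underlying classification step (the class name, labels, and training examples below are invented for the example), a bag-of-words naive Bayes baseline looks like this:

```python
import math
from collections import Counter, defaultdict

class ToyAdClassifier:
    """Illustrative naive Bayes text classifier ('political' vs 'other').
    The actual audit used trained language models, not this baseline."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()
        self.vocab = set()

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.label_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        best, best_lp = None, -math.inf
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            lp = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                # Laplace smoothing so unseen words don't zero out the score
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

At audit scale, the interesting quantity is then the gap between ads the model flags as political and ads Meta itself declared as such.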

Browsing Amazon’s Book Bubbles

Examination of algorithmic recommendations on Amazon's Belgian and French bookstores.

In a collaboration between AI Forensics and CheckFirst, we reveal that:

  • Amazon's search results fail to provide a pluralism of views and convey misleading information, specifically on public health, immigration, gender, and climate change.
  • Amazon's recommender systems trap users into tight book communities, which are hard to exit, and some of these communities contain books endorsing climate denialism, conspiracy theories, and conservative views.
  • Amazon fails to enforce its policies, resulting in the widespread availability of sexually explicit content without age restrictions.

We urged regulators to ensure that Amazon comprehensively addresses the impact of its systems on the dissemination and amplification of disinformation content.

Horus: Crowdsourced Audit of Twitter & YouTube

By collecting what platforms recommend to users, I characterize their inner workings.

To characterize the recommender systems of Facebook, Google Search, YouTube, and Twitter, I developed a browser add-on that collects algorithmically curated content as well as user interactions. This longitudinal study, running since fall 2022, has enabled numerous analyses, such as an audit of Twitter's 'For You' feed and a characterization of the differences between what YouTube's API returns and what real users are actually exposed to.
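One way to quantify the API-versus-user gap is to compare the set of items donors actually saw on a surface with the set an API-based audit returns. A minimal sketch (the record fields, surface labels, and function names are assumptions for illustration, not the Horus schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DonatedRecord:
    user: str      # pseudonymous donor id
    surface: str   # e.g. "youtube_home", "twitter_for_you" (assumed labels)
    item_id: str   # video or tweet identifier
    rank: int      # position in the feed when observed

def jaccard(a, b):
    """Set overlap between two collections of item ids."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def api_vs_user_overlap(records, api_items, surface):
    """Compare items donors saw on one surface with items an API audit returned."""
    seen = [r.item_id for r in records if r.surface == surface]
    return jaccard(seen, api_items)
```

A low overlap score on a surface is exactly the kind of divergence between YouTube's API and real users' feeds that the study documents.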

Climatoscope: Observatory of Climate Change Discussions on Twitter

Social macroscope for a better understanding of the circulation of climate change narratives.

In March 2023, prior to the release of the IPCC AR6, we leveraged extensive databases containing over 500 million tweets, curated by the amazing Maziyar Panahi, to analyze the global discussion on climate change.

  • The global climate change debate on Twitter is highly polarized, with about 30% climate denialists among Twitter accounts addressing climate issues from 2019 to 2022, rising to over 50% after Elon Musk's takeover of Twitter.
  • The denialist community has 71% more inauthentic accounts compared to pro-climate communities.

Welcome.

I'm Paul Bouchaud

My pronouns are they/them.

I am a researcher conducting adversarial audits of online platforms to uphold platform accountability and safeguard informational ecosystems.

Through crowdsourced data donation, large-scale web scraping, and reverse-engineering of internal APIs, I collect and analyze original datasets to characterize platform and actor behaviors.

Based in Paris, I hold a PhD in algorithmic auditing from EHESS. My work with the non-profit AI Forensics, as well as with national and European regulators, is extensively covered in the media to inform public discussion.