Welcome.

I'm Paul Bouchaud
My pronouns are They/Them

Feel free to email me
Find me on BlueSky

My research focuses on auditing online platforms' algorithms and investigating their impact on social dynamics.

I am pursuing my doctoral studies at EHESS/CAMS, supported by the Jean-Pierre Aguilar fellowship of the CFM Foundation for Research, under the direction of David Chavalarias. I am in residency at the Complex Systems Institute of Paris (CNRS/ISCPIF). I collaborate with AI Forensics, a European non-profit that investigates influential and opaque algorithms such as those of YouTube, Google, TikTok or Amazon.

In 2022, I developed and launched the Horus project, a crowdsourced audit of Twitter, YouTube, Google & Facebook. In a first study, I showed that Twitter's recommender amplifies toxic tweets (insults, threats, etc.) and distorts the political landscape perceived by users. More recently, leveraging the behavioral data collected through this initiative, I trained engagement-prediction models, enabling me to explore audit methodologies and examine the consequences of algorithms that maximize user engagement.

As part of my residency at the Complex Systems Institute of Paris, I am fortunate to work with extensive historical Twitter databases. These databases, curated by the amazing Maziyar Panahi, have amassed more than 700 million political tweets and 500 million climate-related tweets since 2016. Prior to the release of the IPCC AR6 report in March 2023, we conducted an in-depth analysis of the online climate discussion, shedding light on the dynamics of the climate-denialist community.

Before that, leveraging the extensive Politoscope database, I fully calibrated an agent-based model of Twitter accounts to examine how recommender systems can toxify social networking sites.

Media Coverage

Productions:

On Meta's Political Ad Policy Enforcement:
An analysis of Coordinated Campaigns
& Pro-Russian Propaganda
Bouchaud P
This study evaluates Meta's enforcement of its political advertising policies across 16 EU countries, finding imprecise moderation with a 60% false-positive rate and a 95.2% false-negative rate. Coordinated advertising campaigns relaying pro-Russian propaganda have reached 38 million accounts since August 2023. Despite documented activities, fewer than 20% of such pages were moderated by Meta as political. With elections upcoming, these findings highlight significant shortcomings in Meta's political ad moderation.
Auditing the Audits: Evaluating Methodologies for Social Media Recommender System Audits
Bouchaud P, Ramaciotti P
April 2024 (Under Review)
Through a simulated Twitter-like platform designed to optimize user engagement and grounded in authentic behavioral data, we evaluate methodologies for auditing social media recommender systems. In particular, we assess the impact of key sock-puppet audit parameters, the number of friends and the session length, on audit outcomes. Additionally, we investigate the algorithmic amplification of political content across different levels of granularity and political dimensions. Amid increasing regulatory scrutiny, this research contributes to enhancing methodologies for auditing social media platforms.
A Crowdsourced Exploration of Algorithmic Personalization on YouTube
Bouchaud P, Shabayek S
January 2024 (Under Review)
Combining users’ data donations from the Horus project with API results, we explore how personalization operates on YouTube. Overall, we find that YouTube favors popular and authoritative news channels on users' Homepages and displays a higher diversity of content, in terms of video categories and political leaning, than users' Watch History. In particular, the results returned to users contained a higher fraction of channels they had subscribed to or recently watched than the API results did. We observe that search results do not align with users' Watch History in terms of video categories and political leaning, unlike users' Watch Next recommendations.
Auditing Amazon Recommender Systems: Insights Into Risk Assessment Under The Digital Services Act
Bouchaud P, Çetin R.B
January 2024 [Tech Policy Press]
We consider repercussions of a lack of diversity in book recommendations as a possible source of risk to information integrity.
The Amazing Library: An Analysis of Amazon's Bookstore Algorithms within the DSA Framework
AI Forensics & CheckFirst
Based on extensive data collection across Amazon's Belgian and French bookstores, we reveal that:

  • Amazon's search results fail to provide a pluralism of views and convey misleading information, specifically on public health, immigration, gender, and climate change.
  • Amazon's recommender systems trap users into tight, hard-to-exit book communities, some of which contain books endorsing climate denialism, conspiracy theories, and conservative views.
  • Amazon fails to enforce its own policies, resulting in the widespread availability of sexually explicit content without age restrictions.

  • We urge regulators to ensure that Amazon comprehensively addresses the impact of its systems on the dissemination and amplification of disinformation content.
Browsing Amazon’s Book Bubbles
Bouchaud P
December 2023 (Under Review) [preprint]
We investigate Amazon’s book recommendation system, uncovering cohesive communities of semantically similar books. We identify a large community of recommended books endorsing climate denialism, COVID-19 conspiracy theories, and conservative views. This study underscores how even non-personalized recommender systems can have foreseeable negative effects on public health and civic discourse.
Algorithmic Amplification of Politics and Engagement Maximization on Social Media
Bouchaud P
We examine how engagement-maximizing recommender systems influence the visibility of Members of Parliament's tweets in timelines. We showcase the need for audits that account for user characteristics when assessing the distortions introduced by personalization algorithms, and advocate addressing online platform regulation by evaluating the metrics platforms aim to optimize.
Skewed Perspectives: Examining the Influence of Engagement Maximization on Content Diversity in Social Media Feeds
Bouchaud P
This article investigates the information landscape shaped by curation algorithms that seek to maximize user engagement. Leveraging unique behavioral data, we trained machine learning models to predict user engagement with tweets. Our study reveals how the pursuit of engagement maximization skews content visibility, favoring posts similar to previously engaged content while downplaying alternative perspectives. The empirical grounding of our work contributes to the understanding of human-machine interactions and provides a basis for evidence-based policies aimed at promoting responsible social media platforms.
Crowdsourced Audit of Twitter’s Recommender Systems
Bouchaud P, Chavalarias D, Panahi M
Combining crowdsourced data donation and large-scale server-side data collection, we provide quantitative experimental evidence of Twitter's recommender distorting users' subscription choices. In particular, we reveal:

  • toxic tweets are amplified by 50%
  • tweets from friends in the same community are highly amplified
  • uneven amplification across friends’ political leanings

The new fronts of denialism and climate skepticism
Chavalarias D, Bouchaud P, Chomel V, Panahi M
February 2023 [General Report]
Analyzing two years of Twitter exchanges, we observe that the denialist community presents inauthentic forms of expertise, relays more toxic tweets, and embeds 71% more inauthentic accounts than the pro-climate community. With pro-climate accounts fleeing Elon Musk's Twitter, climate-skeptic accounts made up 50% of the online discussion by March 2023.
Can a Single Line of Code Change Society? Optimizing Engagement in Recommender Systems Necessarily Entails Systemic Risks for Global Information Flows, Opinion Dynamics and Social Structures
Chavalarias D*, Bouchaud P*, Panahi M
(*co-first authors)
After calibrating an agent-based model on a large-scale longitudinal database of tweets from political activists, we compare the consequences of various recommendation algorithms on the social fabric and quantify their interaction with certain cognitive biases.