Monitoring of the Doppelgänger Operation
From propaganda ads on Meta to spoofed news outlet articles relayed on X.
Leveraging the methodology used to detect coordinated inauthentic behaviors, I monitor the Doppelgänger campaign.
- Ahead of the 2024 EU Parliamentary Election, we identified Doppelgänger ads targeting Italy and Poland, in addition to the historical targets, France and Germany.
- Doppelgänger ads disseminated propaganda and fake articles supporting far-right candidates in France.
- Among the ads uploaded by the threat actors, I detected a screenshot of their internal VK Teams messenger, providing valuable insights into their operations.
- Monitoring and detecting new spoofed outlets:
historia.fyi
spektrum.cfd
closermag.eu
psychologies.top
Leaked Screenshot
Read Follow-Up Report
Politoscope: Observatory of the French Political Twittersphere
Ahead of the European and French elections, let's take a look at political exchanges on X.
By collecting over a million retweets of French political figures, I mapped the landscape of political exchanges on X. Filtering by topic reveals markedly different landscapes: Emmanuel Macron's party aligns closely with the Ecologists in the discussion of the Russian war in Ukraine but is completely opposed on the issue of Israel's war on Gaza, where it sits closer to the far right.
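The sketch below illustrates, under hypothetical file and column names, the kind of pipeline behind such a landscape: aggregate retweets into a weighted graph, detect communities, and compute a force-directed layout. It is a minimal illustration, not the actual Politoscope code.

```python
# Minimal illustration (not the Politoscope pipeline): build a retweet network from a
# CSV of (retweeter, original_author, topic) rows, keep one topic, detect communities,
# and compute a force-directed layout. File and column names are hypothetical.
import pandas as pd
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

df = pd.read_csv("retweets.csv")                      # hypothetical export
df = df[df["topic"] == "ukraine"]                     # filter to one discussion topic

G = nx.DiGraph()
for (src, dst), weight in df.groupby(["retweeter", "original_author"]).size().items():
    G.add_edge(src, dst, weight=weight)               # edge weight = number of retweets

# Community detection on the undirected projection (a stand-in for the clustering
# actually used by the observatory).
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
membership = {node: i for i, com in enumerate(communities) for node in com}  # for coloring

pos = nx.spring_layout(G, weight="weight", seed=42)   # force-directed "landscape"
print(f"{G.number_of_nodes()} accounts, {len(communities)} communities")
```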
See the landscape
Meta's Ads: Scams & Pro-Russian Propaganda
Analyzing over 200 million ads approved by Meta reveals major shortcomings in its moderation.
Thanks to Article 39 of the Digital Services Act, with AI Forensics we analyzed tens of millions of advertisements run in the EU. By training language models to detect political advertisements, we assessed Meta's moderation and documented (an illustrative classifier sketch follows this list):
- Widespread Non-compliance: Only a small fraction of undeclared political ads are caught by Meta's moderation system.
- Ineffective Moderation: 60% of ads moderated by Meta do not adhere to its own guidelines concerning political advertising.
- Significant Reach: A specific pro-Russian propaganda campaign reached over 38 million users in France and Germany, with most ads not being identified as political in a timely manner.
- Rapid Adaptation: The influence operation has adeptly adjusted its messaging to major geopolitical events to further its narratives.
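As a rough illustration of the approach, the sketch below trains a simple text classifier (TF-IDF plus logistic regression as a stand-in for the language models used in the study) and compares its predictions with Meta's own "political" flag. All file names, columns, and labels are hypothetical placeholders, not the study's data.

```python
# Minimal sketch (not the study's actual models): train a text classifier to flag
# political ads, then estimate how many flagged ads were not declared as political.
# The CSV files, their columns, and the labels are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

ads = pd.read_csv("ads_sample.csv")              # columns: text, is_political (hand-labeled booleans)
X_train, X_test, y_train, y_test = train_test_split(
    ads["text"], ads["is_political"], test_size=0.2, random_state=0, stratify=ads["is_political"])

vec = TfidfVectorizer(min_df=5, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(vec.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vec.transform(X_test))))

# Apply to a larger ad dump and compare with the platform's own "declared political" flag.
full = pd.read_csv("ad_library_dump.csv")        # hypothetical dump with text + declared_political
full["predicted_political"] = clf.predict(vec.transform(full["text"])).astype(bool)
undeclared = full[full["predicted_political"] & ~full["declared_political"].astype(bool)]
print(f"{len(undeclared)} ads look political but were not declared as such")
```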
Read Full Paper
Read the Report
Browsing Amazon’s Book Bubbles
Examination of algorithmic recommendations on Amazon's Belgian and French bookstores.
As a result of a collaboration between AI Forensics and CheckFirst, we reveal that:
- Amazon's search results fail to provide a pluralism of views and convey misleading information, specifically on public health, immigration, gender, and climate change.
- Amazon's recommender systems trap users into tight book communities that are hard to exit; some of these communities contain books endorsing climate denialism, conspiracy theories, and conservative views (see the illustrative sketch below).
- Amazon fails to enforce its policies, resulting in the widespread availability of sexually explicit content without age restrictions.
We urged regulators to ensure that Amazon comprehensively addresses the impact of its systems on the dissemination and amplification of disinformation content.
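The sketch below gives a minimal, hypothetical illustration of the community analysis described above: scraped recommendation links are treated as a graph, communities are detected, and the share of links that leave a community serves as a rough "hard to exit" indicator. It is not the audit's actual pipeline, and the data layout is assumed.

```python
# Minimal illustration (not the audit's pipeline): treat scraped "recommended with this
# book" links as a directed graph and look for tightly knit book communities.
# The JSON file and its structure are hypothetical placeholders.
import json
import networkx as nx
from networkx.algorithms.community import louvain_communities

with open("recommendations.json") as f:       # {"ASIN": ["recommended ASIN", ...], ...}
    recs = json.load(f)

G = nx.DiGraph((src, dst) for src, dsts in recs.items() for dst in dsts)

communities = louvain_communities(G.to_undirected(), seed=0)
largest = max(communities, key=len)
print(f"{len(communities)} communities; the largest holds {len(largest)} books")

# "Hard to exit" can be approximated by how few recommendation links leave a community.
member = {n: i for i, com in enumerate(communities) for n in com}
crossing = sum(1 for u, v in G.edges if member[u] != member[v])
print(f"{crossing / G.number_of_edges():.1%} of recommendation links cross communities")
```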
Read Full Paper
Read the Report
Network Visualizer
Horus: Crowdsourced Audit of Twitter & YouTube
By collecting what platforms recommend to users, I characterize their inner workings.
To characterize the recommender systems of Facebook, Google Search, YouTube, and Twitter, I developed a browser add-on that collects algorithmically curated content as well as user interactions. This longitudinal data collection, running since Fall 2022, has enabled numerous studies, such as an audit of Twitter's 'For You' feed and a characterization of the differences between what YouTube's API returns and what real users are exposed to.
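As a minimal illustration of that API-versus-exposure comparison, the sketch below measures the overlap between videos participants actually saw (as donated through the add-on) and videos returned by API calls made at the same time. File names and data layout are hypothetical.

```python
# Minimal sketch (not the Horus pipeline): compare videos actually shown to participants
# with videos returned by the YouTube API for the same period. Files are hypothetical.
import json

with open("donated_homepages.json") as f:     # per-user lists of video IDs seen on the homepage
    donated = json.load(f)
with open("api_results.json") as f:           # video IDs returned by API calls made at the same time
    api = json.load(f)

seen = {vid for user_videos in donated.values() for vid in user_videos}
returned = set(api)

overlap = len(seen & returned) / len(seen) if seen else 0.0
print(f"{overlap:.1%} of the videos users actually saw also appear in the API results")
```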
Read the first paper
See the project page
Climatoscope: Observatory of Climate Change Discussions on Twitter
Social macroscope for a better understanding of the circulation of climate change narratives.
In March 2023, prior to the release of the IPCC AR6, we leveraged extensive databases containing over 500 million tweets, curated by the amazing Maziyar Panahi, to analyze the global discussion on climate change.
- The global climate change debate on Twitter is highly polarized, with about 30% climate denialists among Twitter accounts addressing climate issues from 2019 to 2022, rising to over 50% on Elon Musk's Twitter.
- The denialist community has 71% more inauthentic accounts compared to pro-climate communities.
Read the Report
Auditing the Audits: Evaluating Methodologies for Social Media Recommender System Audits
Bouchaud P, Ramaciotti P. September 2024 - Applied Network Science
Through a simulated Twitter-like platform designed to optimize user engagement and grounded in authentic behavioral data, we evaluate methodologies for auditing social media recommender systems.
Read the Paper
Beyond the Guidelines: Assessing Meta’s Political Ad Moderation in the EU
Bouchaud P, Liénard J. April 2024 - Accepted at ACM IMC 2024
This study evaluates Meta's enforcement of political advertising policies across 16 EU countries, finding imprecise moderation with a 60% false-positive rate and a 95.2% false-negative rate. Coordinated advertising campaigns relaying pro-Russian propaganda have reached 38 million accounts since August 2023. Despite documented activities, less than 20% of such pages were moderated by Meta as political. With upcoming elections, these findings highlight significant shortcomings in Meta's political ad moderation.
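The snippet below shows one plausible reading of the reported rates, computed from two boolean columns (Meta's moderation decision versus whether an ad is actually political); the values are toy data, not the study's, and the exact estimator used in the paper may differ.

```python
# Toy illustration of the rates reported above, under one plausible reading:
# false-positive rate = share of moderated ads that are not actually political,
# false-negative rate = share of political ads that escaped moderation.
import pandas as pd

ads = pd.DataFrame({
    "moderated_as_political": [True, True, False, False, False],   # toy values
    "is_political":           [True, False, True, True, False],
})

moderated = ads["moderated_as_political"]
political = ads["is_political"]

false_positive_rate = (moderated & ~political).sum() / moderated.sum()
false_negative_rate = (political & ~moderated).sum() / political.sum()
print(f"FPR={false_positive_rate:.0%}  FNR={false_negative_rate:.0%}")
```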
Read the Preprint
A Dataset to Assess Microsoft Copilot Answers in the Context of Swiss, Bavarian and Hesse Elections
Romano S, Angius R, Kerby N, Bouchaud P, Amidei J, Kaltenbrunner A. April 2024 - ICWSM 2024
This study describes a dataset for assessing the emerging challenges posed by Generative Artificial Intelligence when performing Active Retrieval Augmented Generation (RAG), especially when summarizing trustworthy sources on the Internet.
Read the Paper
A Crowdsourced Exploration of Algorithmic Personalization on YouTube
Bouchaud P, Shabayek S. January 2024 - Under Review
Combining users’ data donations from the Horus project with API results, we explore how personalization operates on YouTube.
Auditing Amazon Recommender Systems: Insights Into Risk Assessment Under The Digital Services Act
Bouchaud P, Çetin R.B. January 2024 - Tech Policy Press
We consider repercussions of a lack of diversity in book recommendations as a possible source of risk to information integrity.
Read the article
Browsing Amazon’s Book Bubbles
Bouchaud P. December 2023 - Accepted at ASONAM 2024
We investigate Amazon’s book recommendation system, uncovering cohesive communities of semantically similar books. We identify a large community of recommended books endorsing climate denialism, COVID-19 conspiracy theories, and conservative views. This study underscores how even non-personalized recommender systems can have foreseeable negative effects on public health and civic discourse.
Read the Preprint
Algorithmic Amplification of Politics and Engagement Maximization on Social Media
Bouchaud P. August 2023 - Complex Networks and their Applications
We examine how engagement-maximizing recommender systems influence the visibility of Members of Parliament's tweets in timelines. We showcase the need for audits accounting for user characteristics when assessing the distortions introduced by personalization algorithms and advocate addressing online platform regulations by evaluating the metrics platforms aim to optimize.
Read the Paper
Read the Preprint
Skewed Perspectives: Examining the Influence of Engagement Maximization on Content Diversity in Social Media Feeds
Bouchaud P. June 2023 - Journal of Computational Social Science
This article investigates the information landscape shaped by curation algorithms that seek to maximize user engagement. Leveraging unique behavioral data, we trained machine learning models to predict user engagement with tweets. Our study reveals how the pursuit of engagement maximization skews content visibility, favoring posts similar to previously engaged content while downplaying alternative perspectives. The empirical grounding of our work contributes to the understanding of human-machine interactions and provides a basis for evidence-based policies aimed at promoting responsible social media platforms.
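A toy sketch of the curation logic studied here: candidate tweets are scored by a stand-in for a learned engagement model and the feed is ordered by that score, so content similar to past engagement rises to the top. The features and scores are illustrative assumptions, not the paper's trained models.

```python
# Toy illustration of engagement-maximizing curation (not the paper's trained models):
# rank candidate tweets by a stand-in predicted-engagement score.
from dataclasses import dataclass

@dataclass
class Tweet:
    tweet_id: str
    similarity_to_past_engagement: float   # 0..1, closeness to content the user engaged with before

def predicted_engagement(tweet: Tweet) -> float:
    """Toy stand-in for a learned model: similar content gets a higher predicted score."""
    return 0.1 + 0.8 * tweet.similarity_to_past_engagement

candidates = [Tweet("a", 0.9), Tweet("b", 0.2), Tweet("c", 0.6)]
feed = sorted(candidates, key=predicted_engagement, reverse=True)
print([t.tweet_id for t in feed])   # content similar to past engagement rises to the top,
                                    # crowding out alternative perspectives
```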
Read the Paper
Read the Preprint
Crowdsourced Audit of Twitter’s Recommender Systems
Bouchaud P, Chavalarias D, Panahi M. March 2023 - Scientific Reports
Combining crowd-sourced data donations and large-scale server-side data collection, we provide quantitative experimental evidence of how Twitter's recommender distorts users' subscription choices. In particular, we reveal (an amplification-ratio sketch follows this list):
- Toxic tweets are amplified by 50%
- Tweets from friends from the same community are highly amplified
- Uneven amplification across friends’ political leaning
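Amplification figures such as the +50% above can be read as ratios of impression shares between the algorithmic timeline and a chronological baseline; the sketch below computes such a ratio on toy counts. It is one common estimator, not necessarily the paper's exact one.

```python
# Toy amplification ratio: compare a group's share of impressions in the algorithmic
# timeline with its share in a chronological baseline. Counts are illustrative.
def amplification(shown_algorithmic: int, total_algorithmic: int,
                  shown_chronological: int, total_chronological: int) -> float:
    """Ratio of the group's impression share in the algorithmic vs chronological feed."""
    algo_share = shown_algorithmic / total_algorithmic
    chrono_share = shown_chronological / total_chronological
    return algo_share / chrono_share

# Toy example: toxic tweets make up 6% of algorithmic impressions but 4% of the baseline.
ratio = amplification(shown_algorithmic=60, total_algorithmic=1000,
                      shown_chronological=40, total_chronological=1000)
print(f"amplification factor: {ratio:.2f}  ({ratio - 1:+.0%})")   # 1.50 (+50%)
```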
Read the Paper
The new fronts of denialism and climate skepticism
Chavalarias D, Bouchaud P, Chomel V, Panahi M. February 2023
Analyzing two years of Twitter exchanges, we observe that the denialist community presents inauthentic forms of expertise, relays more toxic tweets, and embeds 71% more inauthentic accounts than the Pro-Climate community. With pro-climate accounts fleeing Elon Musk's Twitter, climate-skeptic accounts represent 50% of the online discussion by March 2023.
Read the Paper
Can a Single Line of Code Change Society? Optimizing Engagement in Recommender Systems Necessarily Entails Systemic Risks for Global Information Flows, Opinion Dynamics and Social Structures
Chavalarias D*, Bouchaud P*, Panahi M. February 2023 - Journal of Artificial Societies and Social Simulation
After calibrating an agent-based model on a large-scale longitudinal database of tweets from political activists, we compare the consequences of various recommendation algorithms on the social fabric and quantify their interaction with cognitive biases.
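For intuition, the toy agent-based sketch below contrasts a random feed with an engagement-ranked one and tracks the resulting opinion variance. It is far simpler than the calibrated model in the paper, and every parameter and update rule is an illustrative assumption.

```python
# Toy agent-based sketch (much simpler than the paper's calibrated model): agents hold an
# opinion in [-1, 1], see a small feed each step, and move slightly toward what they see.
# An "engagement-ranked" feed shows posts closest to the agent's current opinion.
import random

N_AGENTS, N_STEPS, FEED_SIZE = 200, 50, 5

def run(engagement_ranked: bool, seed: int = 0) -> float:
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(N_AGENTS)]
    for _ in range(N_STEPS):
        posts = list(opinions)                     # each agent posts its current opinion
        for i in range(N_AGENTS):
            candidates = rng.sample(posts, 20)
            if engagement_ranked:                  # show posts closest to the agent's opinion
                feed = sorted(candidates, key=lambda o: abs(o - opinions[i]))[:FEED_SIZE]
            else:                                  # stand-in for a chronological/random feed
                feed = candidates[:FEED_SIZE]
            opinions[i] += 0.05 * (sum(feed) / FEED_SIZE - opinions[i])   # mild social influence
    mean = sum(opinions) / N_AGENTS
    return sum((o - mean) ** 2 for o in opinions) / N_AGENTS             # opinion variance

print("opinion variance, random feed:            ", round(run(False), 3))
print("opinion variance, engagement-ranked feed: ", round(run(True), 3))
```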
Read the Paper