My name is Paul Bouchaud

Feel free to email me

I am currently pursuing my doctoral studies at the Center for Analysis and Social Mathematics (EHESS/CAMS), in residency at the Complex Systems Institute of Paris (CNRS/ISCPIF). Under the supervision of David Chavalarias, I am conducting research on the influence of large-scale digital infrastructures on social dynamics. I am supported by the Jean-Pierre Aguilar fellowship of the CFM Foundation for Research.

I launched the Horus project, a crowdsourced audit of Twitter, YouTube, Google & Facebook. In a first study, I showed that Twitter amplifies toxic tweets (insults, threats, etc.) and distorts the perceived political landscape. More recently, by harnessing the interaction data collected through this initiative, I trained engagement predictive models, enabling me to investigate how algorithms that maximize user engagement reshape the information landscape.

My current research explores socially desirable attention-allocation mechanisms, such as bridging systems. Additionally, I investigate the long-term implications of maximizing user engagement, including how platforms are incentivized to manipulate their users.

As part of my residency at the Complex Systems Institute of Paris, I am fortunate to work with extensive historical Twitter databases. These databases, curated by Maziyar Panahi, have amassed, since 2016, more than 700 million political tweets and 500 million tweets related to climate change. Prior to the release of the IPCC AR6 in March 2023, we conducted an in-depth analysis of the online discussion about climate, shedding light on the dynamics of the climate denialist community.

Before that, leveraging the extensive Politoscope database, I fully calibrated an agent-based model of Twitter accounts to examine the power of recommender systems and how they can toxify social networking sites.


Algorithmic Amplification of Politics and Engagement Maximization on Social Media
Bouchaud P
We examine how engagement-maximizing recommender systems influence the visibility of Members of Parliament's tweets in timelines. We showcase the need for audits accounting for user characteristics when assessing the distortions introduced by personalization algorithms and advocate addressing online platform regulations by evaluating the metrics platforms aim to optimize.
August 2023 (Under Review), [preprint]
Skewed Perspectives: Examining the Influence of Engagement Maximization on Content Diversity in Social Media Feeds
Bouchaud P
By training engagement predictive models, we explore the impact of curation algorithms seeking to maximize engagement on the information landscape.
June 2023 (Under Review), [preprint]
Crowdsourced Audit of Twitter's Recommender Systems Impact on Information Landscapes
Bouchaud P, Chavalarias D, Panahi M
Combining crowdsourced data donation and large-scale server-side data collection, we provide quantitative experimental evidence of how Twitter's recommender distorts users' informational environment.
March 2023 (Under Review), [preprint]
The new fronts of denialism and climate skepticism
Chavalarias D, Bouchaud P, Chomel V, Panahi M
Analyzing two years of Twitter exchanges, we observe that the denialist community presents inauthentic forms of expertise, relays more toxic tweets, and contains 71% more inauthentic accounts than the pro-climate community. As pro-climate accounts fled Elon Musk's Twitter, climate-skeptic accounts came to represent 50% of the online discussion by March 2023.
Feb 2023, [preprint] (Press: France Inter, L'Obs, etc.)
Can Few Lines of Code Change Society? Beyond fact-checking and moderation: how recommender systems toxify social networking sites
Chavalarias D*, Bouchaud P*, Panahi M
(*co-first authors)
After calibrating an agent-based model on a large-scale longitudinal database of tweets from political activists, we compare the consequences of various recommendation algorithms on the social fabric and quantify their interaction with cognitive biases.
Feb 2023 (Under Review), [preprint]