Research into AI tells us it is social media’s structure, not its users, that is to blame for its toxicity
Lily Dykstra (she/her) // Contributor
Rachel Lu (she/her) // Illustrator
It’s no secret that over the years, social media has garnered a reputation for being problematic. On platforms such as Instagram and TikTok, we sometimes blame this negative atmosphere on users, but, as it turns out, social media’s toxicity may be more a product of its fundamental structure. While human interaction is a crucial component of social media, a recent study suggests that the basic structure of these platforms incentivizes bad behaviour, and that users, or certain algorithms, are not necessarily to blame.
In 2025, two researchers at the University of Amsterdam used large language models (LLMs) to create bots that simulated social media users. The study had the bots interact with one another: posting, reposting and following each other. Each bot’s news feed showed 10 posts, only five of which came from accounts the bot already followed, to simulate the way users find new accounts and the way popular content tends to be shown more frequently. The platform was built without any “complex recommendation algorithms,” the goal being to “construct a minimal environment capable of reproducing well-documented macro-level patterns.” Each round of simulation, in which the bots operated within the artificial platform just as humans would in real life, consisted of a randomly chosen user reposting, sharing or doing nothing with the posts shown on their feed. Whom each bot followed was determined by what it reposted, and bots often interacted only with users who shared the same “beliefs” or views as their own. In the end, the researchers found the same problems that many real-life users have long been experiencing on social media, including political echo chambers, negative influences and followings concentrated among a small percentage of users.
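To make the setup concrete, the loop below is a minimal sketch of that kind of simulation in Python, based only on the details reported above. The class names, the popularity weighting and the random action choice are assumptions filled in for illustration; in the actual study, each bot’s decisions came from an LLM persona rather than a coin flip.

```python
import random
from dataclasses import dataclass, field

# Minimal sketch of the simulation described in the study.
# The selection rules and probabilities here are illustrative assumptions,
# not the researchers' implementation.

@dataclass
class Post:
    author: int
    likes: int = 0

@dataclass
class Bot:
    uid: int
    following: set = field(default_factory=set)

def build_feed(bot, posts, size=10):
    """A feed of `size` posts: half from followed accounts,
    the rest drawn platform-wide, weighted by popularity."""
    followed = [p for p in posts if p.author in bot.following]
    others = [p for p in posts if p.author not in bot.following]
    feed = random.sample(followed, min(size // 2, len(followed)))
    if others:
        weights = [p.likes + 1 for p in others]  # popular posts surface more often
        feed += random.choices(others, weights=weights, k=size - len(feed))
    return feed

def simulation_round(bots, posts):
    """One round: a random bot reads its feed and reposts, likes or ignores each post."""
    bot = random.choice(bots)
    for post in build_feed(bot, posts):
        action = random.choice(["repost", "like", "ignore"])  # stand-in for an LLM decision
        if action == "repost":
            posts.append(Post(author=bot.uid))
            bot.following.add(post.author)  # follows emerge from what gets reposted
        elif action == "like":
            post.likes += 1

# Seed a tiny network and run a few hundred rounds.
bots = [Bot(uid=i) for i in range(20)]
posts = [Post(author=random.randrange(20)) for _ in range(50)]
for _ in range(200):
    simulation_round(bots, posts)
```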
On social media platforms, it’s common for extreme political opinions to garner more attention than less inflammatory content, which, in turn, creates a kind of feedback loop where users are primarily exposed to certain kinds of views. Users are more likely to interact with content that already has high numbers of likes and comments, whether they are aware of it or not, furthering its reach and ensuring that only what is already popular gets seen. Finally, over time, followers become concentrated among a small number of users, minimizing the visibility of everyone else and perpetuating whatever ideas are popular.
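That concentration of followers is essentially a rich-get-richer process, and a toy loop makes the point: when exposure is weighted by existing popularity, follows pile up on a handful of accounts. The numbers and the proportional-weighting rule below are illustrative assumptions, not figures from the study.

```python
import random
from collections import Counter

# Hypothetical rich-get-richer loop: each round, one new follow goes to an
# account with probability proportional to its current follower count plus one.

followers = Counter({uid: 0 for uid in range(100)})
for _ in range(5000):
    accounts = list(followers)
    weights = [followers[a] + 1 for a in accounts]  # visibility tracks existing popularity
    chosen = random.choices(accounts, weights=weights, k=1)[0]
    followers[chosen] += 1

top10 = sum(count for _, count in followers.most_common(10))
print(f"Top 10 accounts hold {top10 / 5000:.0%} of all follows")
```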
Within the study, after the bots began exhibiting the kinds of negative behaviours now stereotypical of social media platforms, the researchers implemented a number of changes to the platform, primarily ones that come up in popular discourse about how to make social media a more positive place. The changes included structuring the feed chronologically, hiding like counts and hiding bios to help limit the echo-chamber effect. Overall, these interventions had little effect, and in some circumstances exacerbated the issues. In short, the researchers found that tweaking the algorithm does little to change the way social media operates, particularly when it comes to remedying its harmful outcomes.
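One way to picture those interventions is as switches layered on top of however the feed gets built. The sketch below is hypothetical; the flag names and the way they rewrite the feed are assumptions made for illustration, not the researchers’ implementation.

```python
from dataclasses import dataclass, replace

# Hypothetical switches for the kinds of interventions the study tested.

@dataclass(frozen=True)
class Post:
    author: int
    created_at: int
    likes: int = 0

@dataclass
class Interventions:
    chronological_feed: bool = False  # order by recency instead of engagement
    hide_like_counts: bool = False    # agents cannot see how popular a post is
    hide_bios: bool = False           # agents cannot see author profiles (not modelled in this sketch)

def apply_interventions(feed: list[Post], opts: Interventions) -> list[Post]:
    if opts.chronological_feed:
        feed = sorted(feed, key=lambda p: p.created_at, reverse=True)
    if opts.hide_like_counts:
        # Strip the engagement signal the agents would otherwise react to.
        feed = [replace(p, likes=0) for p in feed]
    return feed

# Example: a chronological feed with like counts hidden.
feed = [Post(author=i, created_at=i, likes=i * 10) for i in range(10)]
for post in apply_interventions(feed, Interventions(chronological_feed=True, hide_like_counts=True)):
    print(post)
```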
The study’s “findings challenge the common view that social media’s dysfunctions are primarily the result of algorithmic curation,” suggesting instead that “the problems may be rooted in the very architecture of social media platforms.” These findings bring into question not only how social media can be fixed, but also the ethics of using platforms that are, evidently, so inherently flawed.

