A stream of recent research has examined people’s theories about how algorithms work, since such theories appear to guide human action. Our recent article highlights how these theories also contain assumptions about human behavior and agency: they are not just theories about algorithms, but also theories about people.
According to YouTube, around 500 hours of new video are uploaded to the platform every minute. Content on digital platforms is abundant, and working out how to attract viewers is an integral part of professional content creation – think of a YouTuber uploading a small sliver of all that video, or an influencer on Instagram. Part of this is judging what potential audiences are likely to click on or view by considering their behavior, reactions, and preferences. But visibility on digital platforms also depends on algorithmic systems that organize content feeds and recommend, target, and curate content. These algorithmic systems in turn depend on analyzing human behavior. This feedback loop between what technology does and what humans do plays a big role in determining which content gains visibility and which creators thrive. Being serious about content creation – say, trying to make a living from it – then involves acting in a way that gets rewarded by this complex system of people and technology.
How to please an algorithm
We examine the relationship between content production and algorithmic systems in our research recently published in New Media & Society. The work started from a simple observation: people speak of certain kinds of content creation on digital platforms as ‘pleasing the algorithm’. We began wondering what pleasing an algorithm means, and what its consequences are. This ultimately led to a research project and, as part of it, a one-year sample of Reddit discussions on content creation in which Redditors mention the phrase ‘pleasing the algorithm’ or its variants. Here’s an example, paraphrased to protect the identity of the Redditor:
A former colleague admitted to creating clickbait videos. He said he just takes someone’s reaction video, slices it and adds a couple of meme clips to make it over 10 minutes long to please the algorithm and posts it. It’s apparently quite profitable, but not enough to restore the respect I lost for him.
Two observations can be made based on the quote. For one, talk about pleasing the algorithm suggests having an actionable understanding of digital platforms and their algorithmic systems – of what algorithms, in some sense, want. Whether this understanding is accurate matters less than whether it affects behavior. In the quote, the content creator thinks the YouTube algorithm wants 10-minute videos, and therefore he makes sure his videos run over 10 minutes. The material we gathered was rife with theories like this. A stream of research has explored just such understandings, unearthing the kinds of folk theories people form to make sense of the digital environment in order to navigate it.
The other observation is that talk about pleasing the algorithm can be evaluative. It tends to include notions of what is acceptable, that is, of the moral worth of people and their actions. In the quote above, the Redditor suggests that even if creating clickbait videos and formatting them to please the algorithm is very profitable, it is also condemnable. Our research kicked off with this observation. We began tracing how ‘pleasing the algorithm’ is invoked in an evaluative sense, examining not just how people make sense of algorithmic systems, but how they discursively make sense of how others engage with them.
Moral orders of pleasing the algorithm
In the article, we highlight three different moral evaluations of the act of pleasing. One was to consider pleasing the algorithm as a morally acceptable part of the craft of content creation. This evaluation relied on a theory that digital platforms’ algorithms are essentially intermediaries of the human audience’s tastes. When content creators please algorithms, they ultimately please humans and their preferences and whims. If someone is able to please the algorithm, they are essentially good at their job and deserve all the success, visibility and fame they get.
The second moral evaluation, by contrast, depicted acts of pleasing the algorithm as something to be condemned. Here, the theory was that humans are susceptible to algorithmic manipulation because they are guided by, for example, short-term interests, instincts, or bursts of dopamine. Content creators who pleased the algorithm had figured out how to benefit from algorithmic amplification, and thus exploited the audience’s vulnerabilities. While this was bad in itself, there’s more: pleasing the algorithm was also argued to give an unfair advantage that hurts other content creators. Without algorithm-pleasers, visibility could only be gained by fair means, for example by producing content that is worthy in itself.
The third moral evaluation was more ambiguous. Somewhat like the first one, Redditors depicted pleasing the algorithm as a normal part of content creation. But they also argued it was detrimental to both creators and audiences, for much the same reasons as in the second one. Here, however, pleasing the algorithm was a necessary evil: the role of content creator demands it. Acts of pleasing the algorithm were justifiable even if they had negative effects, because content creators, too, were victims of circumstance. This absolved them of responsibility, which was dispersed across the whole structure in which content creators operate: the digital platforms, their algorithms, and their human users.
Theories about people
At first sight, our analysis reveals that the ways people make sense of algorithmic systems are full of morally laden judgements. But beyond outlining how notions of acceptable conduct and responsibility play out, perhaps the more interesting outcome is that it highlights how theories about algorithms are simultaneously theories about people.
What’s crucial here is how human agency is presented. In the evaluations above, different constructions of human agency govern whether pleasing the algorithm merits respect or contempt. Theories about algorithms and human connections with them don’t simply make claims about human-technology relationships. These theories are themselves invoked to locate who has the capacity to act and who is thus responsible, and to provide justification. The notion of pleasing the algorithm illustrates how, in making sense of how algorithmic systems work, technical features become layered with features of humans.
The anthropologist Nick Seaver has famously defined algorithmic systems as dynamic arrangements of people and code, formed by feedback loops between technology and human action. That is, when considering the social and societal effects of algorithms, what matters is not just the algorithms themselves, but the overall system defined by algorithms, people, and the context in which both operate. We would add that these feedback loops also connect theories about algorithms to theories about humans and their behavior and agency.
Tuukka Lehtiniemi
Based on the publication Haapoja, J., Savolainen, L., Reinikainen, H., & Lehtiniemi, T. (2024). Moral orders of pleasing the algorithm. New Media & Society, 1-18. https://doi.org/10.1177/14614448241278674
The blog post is cross-published in Finnish at the Rajapinta blog.
The research project Pleasing the algorithm is funded by the Helsingin Sanomat Foundation.