1 Introduction

As social media becomes a primary source of scientific information for the public [Mede et al., 2024], the visibility of scientific knowledge is increasingly determined by algorithmic systems [Hoang, 2020]. YouTube’s recommendation algorithms are a prime example of how AI-driven systems govern content visibility [Gillespie, 2010; Hoang, 2020; van Dijck & Poell, 2013]. Given that science content can only be impactful if it is visible [Hoang, 2020] and accessible [Medvecky, 2017], science communicators face the challenge of producing content that recommendation algorithms will deem relevant to a specific target audience [Gillespie, 2014]. By exerting immense power over content visibility, algorithmic systems influence the practice of content creation [Nieborg & Poell, 2018; Poell et al., 2021]. In the context of science communication, this influence is evident in the persistent challenge of producing high-quality content that garners significant visibility [Hoang, 2020]. To gain visibility, content creators need to tailor science content to the preferences of AI-powered recommendation algorithms, which use deep learning models to predict what users will watch next. The fundamental challenge, however, lies in understanding how these algorithms work, since the criteria and rules underlying them are largely opaque [R. Taylor, 2020]. Science communicators therefore require algorithmic expertise to compete successfully in the “visibility game” [Cotter, 2019], even as platform developers may change the rules of this game at any time [Gillespie, 2014]. Aligning content with algorithmic preferences has significant implications for science communication. To reveal these implications, we investigate how recommendation algorithms shape science content by exploring the production of science videos for YouTube.
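To make the prediction logic referenced above concrete, the following minimal sketch illustrates, in Python, how a ranking step of this kind can order candidate videos by a watch-probability-like score. It is purely illustrative: the embeddings, the dot-product scoring, and the sigmoid squashing are generic textbook ingredients and our own assumptions, not a description of YouTube’s proprietary system.

```python
# Illustrative only: a generic ranking step in the spirit of
# "deep learning models that predict what users will watch next".
# All vectors and the scoring rule are hypothetical assumptions,
# not YouTube's actual (opaque, proprietary) system.
import numpy as np

def score_candidates(user_embedding: np.ndarray,
                     video_embeddings: np.ndarray) -> np.ndarray:
    """Return a watch-probability-like score for each candidate video."""
    logits = video_embeddings @ user_embedding  # similarity per video
    return 1.0 / (1.0 + np.exp(-logits))        # squash to (0, 1)

rng = np.random.default_rng(0)
user = rng.normal(size=8)             # stand-in for a learned user vector
videos = rng.normal(size=(5, 8))      # stand-ins for learned video vectors
order = np.argsort(-score_candidates(user, videos))
print("recommended order of candidate videos:", order)
```

The point of the sketch is only that such systems rank content by predicted engagement; the features and weights that drive the ranking remain hidden from content creators.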

Despite widespread concern about the power recommendation algorithms wield in shaping science content on social media [Weingart, 2017], little is known about the mechanisms through which they exert this influence. While some studies have examined the role algorithms play in the dissemination of science content and audience engagement [Allgaier, 2019; Anderson et al., 2021; Hargittai et al., 2018], little attention has been paid to their influence on content production. Our study addresses this gap by drawing on a two-year ethnographic study to provide detailed insights into the development and production process of science communication videos, thereby unveiling how algorithms, and practitioners’ perceptions of them, shape content creation practices. Specifically, we focus on how algorithmic experts [Bishop, 2020] — a new professional group that advises content creators on visibility strategies [Stoldt et al., 2019] — make sense of opaque recommendation algorithms and affect science communication practices.

Our research focuses on a collaborative project involving a German public broadcaster and a team of social science scholars. Here, we explore the joint content creation process between the broadcaster’s social media consultants, who claim expertise in recommendation algorithms, and the social science scholars responsible for editorial tasks and video moderation. We examine how consultants interpret opaque recommendation algorithms and how these interpretations, in turn, affect science video production. To analyze this sense-making process, we draw on Bucher’s [2017] concept of algorithmic imaginaries and demonstrate how social media consultants infer content strategies from their algorithmic imaginaries. Finally, we argue that practitioners’ individual and context-specific interpretations of opaque recommendation algorithms shape content creation practices and ultimately science content on social media.

2 Theoretical background

2.1 Science communication on social media

Social media has become a primary source of science-related information [Mede et al., 2024], and recommendation algorithms — rather than legacy media organizations or experts — now increasingly determine the visibility of science content [Hoang, 2020]. Consequently, Hoang [2020] argues that “the bottleneck of science communication is arguably no longer the production of quality contents […]. More often than not, the bottleneck has become the large-scale promotion of top-quality content” [2020, p. 3]. Some scholars have expressed hopes that social media platforms can amplify the visibility of science [Metag, 2021], foster a more direct dialog between scientists and the public, increase public participation [Weingart, 2017], and engage more diverse audiences [Dawson, 2018]; however, as Allgaier [2019] notes, when YouTube users search for specific scientific terms (e.g., “climate change”), they are more likely to encounter content promoting conspiracy theories or views opposing scientific consensus than high-quality scientific information. Consequently, science communicators on platforms such as YouTube are adopting visibility-enhancing strategies based on a nuanced understanding of algorithmic systems, but adapting science content to perform well in social media infrastructures has raised concerns about potential declines in content quality [Fisher, 2022]. These concerns include adapting science content to platform logics in ways that potentially jeopardize the science communication objective of delivering “trustworthy content to their audiences” [Fisher, 2022, p. 273]. In particular, the adoption of attention-seeking marketing strategies that prioritize persuasion over quality has been critically scrutinized [Väliverronen, 2022; Weingart, 2022].

Despite these concerns, empirical studies to date have focused primarily on how to enhance audience engagement with science videos on YouTube. In doing so, they have highlighted various factors that enhance engagement: parasocial relationships, identification with regular hosts [Boy et al., 2020], community-building [Welbourne & Grant, 2016], compelling storytelling techniques [e.g., posing dramatic questions, depicting moments of change, and eliciting emotional responses; Huang & Grant, 2020], and humor [Yeo et al., 2021]. According to Pavelle and Wilkinson [2020], effective science videos balance entertainment with accuracy. Nevertheless, these studies focus predominantly on analyses of social media content and its reception, often neglecting the role of algorithmic systems in determining what becomes visible and popular [Burgess & Green, 2018].

The few empirical studies exploring the role algorithms play in science communication focus primarily on technological and ethical concerns, often calling for “expert-driven recommendation algorithms in science communication” [Hoang, 2020, p. 2] to counteract the opaque decision-making processes of social media algorithms. These critical perspectives align with research advocating greater algorithmic transparency and accountability [Diakopoulos & Koliska, 2017]. Nevertheless, the mechanisms through which recommendation algorithms shape science content — particularly the production of science content — remain largely unexplored.

2.2 Algorithmic imaginaries

While critical perspectives on algorithms in science communication remain limited, previous studies have highlighted the relevance of sense-making practices surrounding algorithmic technologies [Klein-Avraham et al., 2024]. By describing how social media users perceive and interpret algorithmic technologies, the algorithmic imaginary provides a conceptual framework for exploring how individuals make sense of algorithms and how this sense-making shapes users’ behavior on platforms [Bucher, 2017]. Unlike the terminologically related concept of sociotechnical imaginaries [Jasanoff, 2015; Jasanoff & Kim, 2009], which concerns collective visions of technological futures, algorithmic imaginaries focus on individual interpretations of algorithms [Bucher, 2017]. The algorithmic imaginary is defined as “ways of thinking about what algorithms are, what they should be, how they function and what these imaginations in turn make possible” [Bucher, 2017, p. 40]. As such, the algorithmic imaginary is particularly relevant to examinations of the interactions between people and algorithms [Bucher, 2017], which are shaped by prior experiences with technology [Schellewald, 2022]. In this sense, algorithms are conceptualized not as static computational code but as “‘multiples’ — unstable objects that are enacted through the varied practices that people use to engage with them” [Seaver, 2017, p. 1]. Though Bucher’s [2017] definition of the concept remains abstract, algorithmic imaginaries offer a valuable framework for analyzing how people interpret recommendation algorithms in everyday life and how these interpretations manifest in language, emotions, and behavior. This focus on the social power of algorithms is relevant since “we may begin to understand the performance of algorithms through the ways in which they are being articulated, experienced and contested in the public domain” [Bucher, 2017, p. 40].

2.3 The productive power of algorithmic imaginaries

Initially focused on user perceptions [Bucher, 2017; Schellewald, 2022], algorithmic imaginaries also offer a framework for analyzing content production. Beyond shaping mental models, algorithmic imaginaries exert productive power by actively influencing social actions and professional routines, such as editorial practices in journalism [Bucher, 2017; Christin, 2017]. For example, previous studies exploring the role of algorithms in news production have shown how the sense-making surrounding algorithms shapes journalists’ decision-making with regard to optimizing visibility [Christin, 2017]. Furthermore, dependence on algorithmic visibility reconfigures journalistic values, such as relevance and newsworthiness, which are now redefined by algorithmic systems [Cotter, 2024]. In particular, the framing of news is adapted to imagined audiences [Litt & Hargittai, 2016] and their preferences [Mitova et al., 2022; Peterson-Salahuddin & Diakopoulos, 2020], since “giving the audience what they want” [Ferrucci et al., 2020, p. 1588] is commonly believed to enhance visibility. In general, adapting editorial work to algorithmic preferences transforms journalistic norms and practices regarding content design and rhetorical strategies [Hermida & Mellado, 2020] and ultimately “leads to negotiations over loss of control, as editors realize that their publicist and democratic mission is at stake” [Schjøtt Hansen & Hartley, 2021, p. 924]. This tension is particularly evident in the balancing act between maintaining journalistic integrity and autonomy [Peterson-Salahuddin & Diakopoulos, 2020] and gaining visibility on platform infrastructures that prioritize consumer demands over citizen values [van Dijck et al., 2018].

2.4 Professional strategies for enhancing content visibility

Although the rules and mechanisms of recommendation algorithms are opaque, platform users still perceive the effects of their inner workings. These embodied experiences of algorithms, in turn, shape the professional practices of content creators whose success relies on visibility [Bucher, 2017; Schellewald, 2022]. This influence is particularly evident in influencer cultures, since the professional existence and income of content creators depend on whether their content is visible [Bishop, 2023; Glatt, 2022]. Bishop describes how beauty vloggers’ communal sense-making of recommendation algorithms — and their exchange of algorithmic knowledge, conceptualized as algorithmic gossip — affect their content production practices [Bishop, 2019]. This shared understanding of how to gain visibility often relies on platform users’ folk theories — assumptions about algorithmic mechanisms that guide behavioral adaptations to opaque, ever-changing systems [DeVito, 2021; Glatt, 2022]. From these speculations and lay theories, influencers infer and implement content strategies to increase visibility, commonly referred to as “gaming the system” [Gillespie, 2014; Cotter, 2019]. Although platform providers publicly condemn “‘system gamers’ as morally bankrupt” [Petre et al., 2019, p. 1], content creators routinely apply diverse visibility enhancement strategies according to their professional contexts and overarching goals. However, the effectiveness of a strategy is hard to verify, since developers can change the algorithmic rules “easily, instantly, radically, and invisibly” [Gillespie, 2014, p. 179]. Some of the strategies shared across domains include performing authenticity [Cotter, 2019; A. S. Taylor, 2022], presenting a unique identity [van Dijck, 2013], focusing on niche topics, and building a community around those topics [Glatt, 2022; Villegas-Simón et al., 2023]. Often aimed at achieving attention and higher advertising revenues, additional content creation practices include designing attention-grabbing thumbnails and crafting sensational titles [Ma & Kou, 2021].

While most influencers monetize content primarily through advertising partnerships [Ørmen & Gregersen, 2023], public broadcasters and science communicators are guided by democratic values (e.g., disseminating knowledge for the public good) and are often restricted from monetizing their content. Since algorithms do not differentiate between content domains, whether science, journalism, or beauty marketing, some of these strategies might also prove valuable for science communicators; however, the uncritical adoption of influencer strategies requires careful consideration, as influencers’ fundamental values and professional goals may differ considerably from those of science communicators. Taken together, the effectiveness of content strategies is difficult to verify, and strategies vary across professional contexts, but the mechanisms through which algorithmic imaginaries affect content production remain consistent.

To unveil these mechanisms by which algorithms influence content production, Christin [2017] explicitly calls for more ethnographic research on algorithms in practice. This practice-oriented approach focuses on “actual rather than aspirational practices connected to algorithms” [Christin, 2017, p. 11]. Despite the extent to which algorithms are embedded in practitioners’ everyday digital content production practices, science communication research has neglected algorithmic influences on content production [Schäfer, 2023; Tatalovic, 2018]. Accordingly, we explore how recommendation algorithms shape science content through the lens of algorithmic imaginaries [Bucher, 2017]. Specifically, we examine the sense-making process surrounding recommendation algorithms and its influence on the production of science videos for YouTube.

3 Methodology

3.1 Empirical setting

The data for this paper were collected from January 2022 to January 2024 during a two-year ethnographic study [Madden, 2023; Neyland, 2008] of the development and production of YouTube science videos, a collaboration between a German public broadcaster and a team of social science scholars. The ethnographic approach provided an “emic” perspective on how science communicators engage with and adapt to opaque algorithmic systems in their daily work [Broer, 2020; Davies et al., 2024; Hine, 2020]. As such, ethnographic methods complement broader structural critiques of algorithmic systems [Glatt, 2022]. As Seaver [2017] suggests, “ethnography roots these concerns in empirical soil, resisting arguments that threaten to wash away ordinary experience in a flood of abstraction” [2017, p. 2]. As is typical in ethnographic immersion, the authors engaged with the field site, participated in practices, and used participant observation as a method [Spradley, 1980]. Such an ethnographic approach aims for a “deeper immersion in others’ worlds to grasp what they perceive as meaningful” [Emerson et al., 2011, p. 73], which is especially suited for studying work practices that involve algorithmic technologies [Christin, 2020; Seaver, 2017]. The ethnographic method enabled the authors to reveal how recommendation algorithms shape science content by exposing behind-the-scenes mechanisms that are currently absent in social media content analyses and reception studies.

This collaboration provided an ideal context to examine the production of science content for social media, as it brings together social sciences, science communication, and algorithmic expertise. Social science scholars took charge of editorial work and moderation while the broadcaster oversaw funding, visualization, and distribution. Among the three social science scholars who assumed the roles of science communicators in this collaborative video production, two are the authors of this study. Consequently, both authors contributed in dual capacities, serving as science communication practitioners and researchers. To address potential subjective bias in field observations, the researchers held ongoing discussions about field notes with on-site participants and independent external researchers [Neyland, 2008]. Although the team’s diverse expertise fostered innovation [Buschow et al., 2024], it also generated tensions. This paper focuses on the compromises made to maintain productivity and meet the public broadcaster’s visibility benchmarks.1 Such explicit compromises may remain hidden in non-collaborative contexts or content creation practices of individuals.

A key challenge in the examined collaboration stemmed from the differing motivations of the stakeholders involved [Buschow et al., 2024]. While the broadcaster’s primary focus was on optimizing visibility and maximizing platform engagement — driven by its inherent media logics [Olesk, 2021] — the science communicators were primarily motivated by a desire to share high-quality academic knowledge with audiences beyond their traditional academic reach. Although both parties shared overlapping goals, such as effectively disseminating relevant knowledge, each stakeholder also brought distinct objectives to the project that did not always align with those of the other party. This divergence in motivations is a common challenge in collaborative contexts [Buschow et al., 2024]. Nonetheless, the stakeholders consistently worked to establish a shared understanding of each other’s values and motivations [Enzingmüller & Marzavan, 2024] in order to improve the quality and visibility of science videos on social media.

3.2 Data collection

The two-year period of fieldwork generated a diverse dataset that includes field notes from around 1,200 hours of participant observation, draft scripts, script comments, final video scripts, and workshop materials (e.g., presentation slides and a co-created Miro board). Data were collected during weekly editorial work, communication activities (including shared online documents), and thirty-six production days. Key data originated from a three-day workshop in which social media experts introduced content strategies. The workshop aimed to enhance the visibility of the science videos posted on YouTube. Such workshops are a standardized procedure for developing social media content at the broadcaster; regardless of content type (e.g., culture, science, news), they are held by digital marketing professionals from the broadcaster’s in-house digital department with the support of external social media consultants.

During the workshop, five social media consultants (four internal employees and one external consultant), to whom we refer as C1–C5 when discussing our findings, specifically verbalized their understanding of recommendation algorithms and advised science communicators, to whom we refer as SC1–SC3, on YouTube content strategies. Because the sense-making surrounding algorithmic technologies is rarely articulated explicitly, it is challenging to study among science communicators; in our study setup, however, algorithmic imaginaries were made observable, since the social media consultants’ role in the workshop was to verbalize their understanding of the algorithmic system and to open the black box.

3.3 Data analysis

The data analysis followed a constructivist grounded theory approach [Charmaz, 2014], which facilitated the identification of recurring patterns in the data without a predefined research question, while incorporating insights from science communication and Science and Technology Studies (STS) literature. The analysis was iterative, shifting back and forth between the data and the relevant theoretical frameworks [Charmaz, 2014]. Initially, the research focused on how social media platforms influence science communication, but it later narrowed to examining the impact of recommendation algorithms on science content production. Consistent with constructivist grounded theory, the coding process involved initial and focused coding phases [Charmaz, 2014].

Initial open coding via MAXQDA focused on moments of change in the content production process and revealed key themes, including “digital expertise”, “platform broker”, “platform narratives”, “frictions in cooperative settings”, and “imagined publics”. Subsequent coding refined these preliminary patterns. The exploratory approach permitted ongoing reflexivity with the literature and flexibility in refining the research focus. In the third phase of the analysis, attention shifted to the concept of the algorithmic imaginary [Bucher, 2017], which emerged through the iterative exploration of the data and literature. This focus produced the final codes: “algorithmic imaginaries”, “content strategies”, and “content creation practices”. In the final analytical stage, the research question, which reflects the recurring patterns identified in the data, was formulated: “How do recommendation algorithms shape science content?”

4 Findings

Recommendation algorithms shape science content indirectly: they exert significant power over content visibility, prompting content producers to consider how best to adapt their content to appeal to the algorithm. This indirect influence unfolds in three steps (see Figure 1). First, opaque recommendation algorithms give rise to algorithmic imaginaries based on experiences and assumptions. These imaginaries reflect only a vague understanding of how a specific algorithmic system functions. Second, these abstract algorithmic imaginaries inform content strategies — actionable translations of algorithmic imaginaries aimed at enhancing content visibility. Third, content strategies affect content creation practices, which, in turn, change the content of science communication. Making these three intermediate steps explicit is crucial to understanding how recommendation algorithms shape science content on social media.


Figure 1: Process overview of how YouTube algorithms indirectly shape science content.

In the next section, we illustrate this process based on our observations using three examples from the field that follow a consistent structure. First, we illustrate how the social media consultants’ understandings of YouTube’s recommendation algorithms were reflected in explicitly verbalized algorithmic imaginaries. Second, we present three content strategies informed by the social media consultants’ algorithmic imaginaries: (1) Keep your target group small and avoid being academic, (2) Construct one need from everyday life and satisfy it, and (3) Persuade first, explain later. Third, we showcase how the introduced content strategies affected the science communicators’ content creation practices, which ultimately led to changes in the science videos. Thus, we reveal the often invisible mechanism of how recommendation algorithms shape the content of science communication on social media.

4.1 Keep your target group small and avoid being academic

From the beginning, the social media consultants framed the relationship between science communicators and algorithms as difficult and suggested better aligning practices with the rules of the algorithm: “If you play by the rules, the algorithm will reward you” (C1). They explained that success on the platform depended on “gaining the algorithm’s favor” (C1). Consequently, perceiving “algorithms as friends” (C1) would be advisable. However, they also explained that achieving this friendship would not be easy: “Social science and the YouTube algorithm don’t go so well together” (C1). Thus, they recognized a general problem: communicating social science theory is far removed from the inner logic of the video platform, which typically promotes less abstract or complex content to “make people happy” (C1). The consultants asserted that to “make friends” with the algorithm, algorithmic systems would have to be “demystified” (C1). One consultant explained that understanding the algorithm was less about mastering computer science and more about understanding people:

It’s total nonsense to talk about how the algorithm works because it’s about understanding target groups. […] It’s about understanding how we tell stories and how we generate comprehensible, good content for the platform and therefore for people. (C1)

According to the consultant, what is good for the target group is also good for the algorithmic system and vice versa. He further explained that the key to visibility lies not in mastering the algorithmic system itself but in understanding potential target groups and tailoring the content to these individuals to make them “happy” (C1), because that is what the algorithm ultimately rewards.

Informed by this algorithmic imaginary, the consultants inferred their first content strategy: keep your target group small and avoid being academic. They emphasized the need to address a narrowly defined target group with a preexisting interest in science videos: “The YouTube algorithm […] has now become such an authentic platform that we just have to think about what the best content for the right viewer is” (C2). According to the consultant, the algorithm assesses the probability of an individual being curious about a specific video based on certain algorithmically determined characteristics, such as age, gender, and interests. This assessment, in turn, dictates the visibility of a video for individual platform users. To address these interests, the consultants recommended developing a detailed persona: a hypothetical individual described in terms of age, gender, education, occupation and interests. This approach allows the content to signal the imagined target group to the algorithm, which increases the likelihood of appearing in video recommendations for users from this target group. From the consultants’ perspectives, designing a prototypical viewer and then developing videos based on their preferences reflects the algorithm’s match-making logic. However, in a different context, another consultant acknowledged that “the probability of reaching a specific target group is always very low” (C3). This apparent contradiction was not further elaborated upon, but the consultant recommended embedding science more subtly in the videos to avoid deterring those who lacked a preexisting interest in science videos. They advised hiding materials that looked academic (e.g., books, theory names, or scientific papers) to avoid driving away potential viewers, even though the imagined viewer was explicitly interested in social science.
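The match-making logic the consultants described, in which a video signals a target persona so that the platform can match it to users with overlapping attributes, can be paraphrased in a short, hypothetical sketch. The attribute names and the simple overlap heuristic below are our own illustrative assumptions; they render the consultants’ imaginary, not any documented platform mechanism.

```python
# Hypothetical paraphrase of the consultants' "match-making" imaginary:
# a video signals a target persona, and users whose (algorithmically
# inferred) attributes overlap with it are assumed to be likelier
# recommendation targets. Attribute names and the heuristic are invented.
def match_score(user_profile: dict, video_persona: dict) -> float:
    """Fraction of persona attributes shared by the user's inferred profile."""
    shared = sum(1 for key, value in video_persona.items()
                 if user_profile.get(key) == value)
    return shared / len(video_persona)

persona = {"age_bracket": "18-29", "education": "university",
           "interest": "social science"}
user = {"age_bracket": "18-29", "education": "vocational",
        "interest": "social science"}
print(round(match_score(user, persona), 2))  # 0.67: two of three attributes match
```

On this imagined logic, the apparent contradiction noted above becomes visible: the narrower the persona, the higher the per-user match score, but the smaller the pool of users for whom any match occurs at all.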

This content strategy led to confusion within the team of science communicators. Addressing a clearly defined target group specifically interested in social sciences while leaving scientific artifacts out of the science video was perceived as contradictory advice. Despite dedicating an entire day to specifying ideal viewers, the development of these personas did not result in substantial changes in content creation because addressing this possibly interested target group was at odds with the advice to appeal simultaneously to what the consultants called the “simple world of platform users” (C1). Shaped by their algorithmic lens, the consultants generally portrayed platform users as simple people incapable of dealing with complexity. To avoid excluding these potential viewers, the science communicators made several changes, including simplifying the language and guiding the audience more explicitly toward an understanding of longer quotes. Theoretical nuances and linguistic subtleties were omitted (e.g., the phrase “cooperation without consent” was deleted because it contained too many technical terms, according to the consultants). Significant changes were observed in the choice of titles and thumbnails (small images that promote videos on YouTube). Discipline-specific aspects, such as schools of thought or scientists’ academic backgrounds, were pushed into the background. At the same time, phenomena from everyday life with which the audience was expected to identify more easily were emphasized. Following these changes, the science communicators realized that some social science theories simply could not be explained in the revised video format. Consequently, the topic selection process shifted to a new focus on “low-hanging fruits” (SC3). The team initially responded to these changes in content creation with frustration, as they were concerned that oversimplifying the video scripts would prevent the proper explanation of specific theories. Given the contradiction between their goal of distributing high-quality knowledge and the demand to entertain the audience, they eventually compromised some of their standards for scientific accuracy in exchange for the promise of greater visibility.

4.2 Construct one need from everyday life and satisfy it

The consultants explained how the algorithm has evolved over time: “First it was all about clicks, then viewing time, and now the algorithm is all about audience loyalty” (C1). This understanding, sourced directly from YouTube’s official website, reflects how the platform promotes its algorithmic system. “Audience loyalty” (C1), also referred to as “viewer satisfaction” (C1), has become the most crucial factor in gaining visibility, according to one consultant: “The key that ties everything together and makes it understandable is viewer satisfaction” (C1). The consultants explained that viewer satisfaction is assessed by the algorithm through metrics such as new views, returning viewers, and viewing time. Conversely, a decline in visibility often occurs if viewers drop off or do not return, which signals to the algorithm that the content has not satisfied them. The consultants concluded that the algorithm’s interpretation of these key metrics is pivotal in shaping its overall understanding of what constitutes user satisfaction with science videos, and they advised science communicators to adopt this algorithmic definition. As one consultant summarized, achieving viewer satisfaction and, thus, greater visibility was contingent upon the ability to outsmart the algorithmic system: “You simply have to ‘game’ the platform properly” (C3). One consultant specified that the algorithm could serve as a valuable guide for understanding how to satisfy viewers, although this requires attentiveness and openness to the signals the algorithm provides. If the videos fail to gain visibility, another consultant explained, the algorithm will subtly indicate that the science video is not meeting the expectations of potential viewers, which suggests the need for adjustments. In this algorithmic imaginary, the consultants equated what the developers of the algorithmic system supposedly defined as viewer satisfaction with the satisfaction of science communication audiences. They did not discuss an alternative understanding of satisfaction that was more specific to science videos, such as learning outcomes or inspiration.
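The consultants’ notion of algorithmically assessed “viewer satisfaction” can likewise be rendered as a toy metric. The sketch below combines the three signals they named (watch time, returning viewers, new views) into a single weighted score; the weights are invented for illustration. The point is precisely the one made above: “satisfaction” here is whatever the metrics operationalize, not learning outcomes or inspiration.

```python
# A toy "viewer satisfaction" proxy built from the metrics the consultants
# named. The weights are invented assumptions; the actual ranking signals
# and their weighting are opaque to content creators.
def satisfaction_proxy(avg_watch_fraction: float,
                       return_rate: float,
                       new_view_growth: float) -> float:
    weights = (0.5, 0.3, 0.2)  # hypothetical weighting of the three signals
    signals = (avg_watch_fraction, return_rate, new_view_growth)
    return sum(w * s for w, s in zip(weights, signals))

# A video watched 60% through on average, with 25% of viewers returning
# and 10% growth in first-time views:
print(round(satisfaction_proxy(0.60, 0.25, 0.10), 3))  # 0.395
```

Whatever the real weighting, a proxy of this kind can only reward what it measures, which is why adopting the algorithmic definition of satisfaction displaces alternative, genre-specific definitions.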

Informed by this algorithmic imaginary, the consultants inferred their second content strategy: construct one need from everyday life and satisfy it. They suggested a relatively rigid algorithmic process that connects a specific “user need” (C1) to content that could potentially fulfill that need. Consequently, their proposed strategy aimed to construct a hypothetical need for platform users and tailor content to address it:

You have to understand that your only job as content creators is to satisfy the needs of the user — then the algorithm will reward you. […] And the best way for us to do that is to fulfill practical needs. (C1)

The consultants conceptualized practical needs as the tangible, everyday concerns of YouTube viewers and argued that platform users typically do not seek to understand social science theories or gain insights into the inner workings of academia. Instead, their needs would arise from daily life, such as seeking explanations of human (mis)behavior or advice on self-improvement. A concrete user need for a video explaining social science theories on YouTube was defined by the team as follows: “I want to better understand everyday social phenomena with the help of real scientific knowledge.” The consultants situated this strategy around platform user needs as part of a broader cultural shift necessary for science communicators on algorithmically curated social media, including “an absolute change of perspective from discourse leader to service provider” (C1). This would require content creation practices to always put “the need of the viewer at the center and act accordingly” (C2).

This content strategy affected content creation as follows: rather than explaining social science theories directly, the storytelling shifted toward answering everyday questions from social science perspectives. After choosing a topic like Bourdieu’s [1979] habitus theory, the science communicators started specifying questions that the target group might encounter in their daily lives and that could be answered with the respective theory. For instance, instead of explaining how societal norms, values, and power relations become internalized in individuals and influence their dispositions and actions [Bourdieu, 1979], they came up with user questions such as: “Why do we love what we have?” In a nutshell, instead of explaining scientific theories with the help of examples and metaphors from everyday life, the science communicators switched to answering questions from everyday life with the help of academic research. One of the science communicators described the new norm as follows: “It should have a headline that draws people in with an everyday problem and then explain to them: This is the theory that helps you make sense of this. But that’s not how we’ve written it so far” (SC1). In the end, the science communicators were required to adopt a considerably different approach to content creation by prioritizing what was relevant to the everyday lives of the audience over what academic knowledge was relevant to the public. Implementing this change in content creation was challenging for the science communicators. As one of them put it, “I am somewhat disturbed by the idea of thinking about science communication as a service” (SC3). Nevertheless, the team’s content creation practices were adjusted to promote more visibility and thus ensure continued funding for video production, which was tied to reaching visibility benchmarks.

4.3 Persuade first, explain later

One consultant described recommendation algorithms as tools typically designed to promote individualized “services” (C1) on platforms:

And what’s interesting is that all the mechanisms behind all these platforms — we can add other services, Amazon, Spotify, or whatever — are the same everywhere. The algorithm searches for what we want, or on the basis of what we’ve watched, what we’ve bought, what we’ve saved somewhere in a list. (C1)

Accordingly, based on previous behavior on the platform, the algorithmic system matches a service provider with a platform user who might be interested in the provided service. Furthermore, the consultants understood the recommendation algorithm as a technology that tries to maximize profits, which means promoting extensive platform use. They specified that the algorithmic system of the video platform YouTube, in contrast to platforms like Amazon, was designed primarily to optimize advertising revenue:

YouTube wants viewers to see what they want to see and that viewers like to use the platform often. And, of course, it is related to money because the longer someone is on the platform, the more adverts one can place and the more revenue the platform generates. (C4)

The consultants described a direct correlation between viewing time and platform revenue. Accordingly, viewing time was highlighted as one of the most critical factors in visibility. With this in mind, one of the consultants saw a disadvantage in the “catalog of institutional rules” (C1), by which he referred to the educational mandate of public broadcasters and the associated “restrictions in the choice of topics” (C1). However, both the fact that public broadcasters are prohibited from showing advertising and the possibility that the distinct quality standards of science communication may conflict with the concept of offering a service to consumers were deemed irrelevant.

Informed by this algorithmic imaginary, the consultants inferred their third content strategy: Persuade first, explain later. In their view, audiences must be convinced of their need for the particular service science communicators offer. The team defined one service that social science videos can provide for YouTube users: “help them better understand their everyday lives.” This means, for instance, highlighting how watching a science video will help viewers deal with a particular problem in everyday life. One consultant argued:

How do I know at the beginning that it’s important? […] You have to show viewers the need first — they don’t realize yet that they find it exciting. Actually, algorithms work like drug dealers, “Hey, you’ve had a taste. Do you want some more? Come on!” (C1)

In alignment with this content strategy, science communicators were encouraged to explicitly outline the benefits viewers can gain from watching each video:

This is for people who want to better understand their everyday life, their togetherness. And you can get that; you just don’t know it. So, in that sense, there is a totally sensible explanation that expands my field of knowledge, but at this point, if this is my only picture, I still have no idea what’s in it and that it can probably help me. (C2)

The consultants advised lowering the threshold to explore academic knowledge by explicitly demonstrating the benefits of acquiring the prospective service right at the beginning of the videos (e.g., “After watching this video about the theory of habitus, you will understand why you love what you have”). Furthermore, if viewers were satisfied, they would return to the service, which the algorithm rewarded with more visibility. The consultants concluded that a strategy involving a clear promise at the beginning of a video and fulfilling expectations toward the video’s conclusion would encourage longer viewing times and ultimately lead to (algorithmically defined) viewer satisfaction.

The introduction of this content strategy significantly affected content creation practices by shifting the focus from communicating academic knowledge to persuading the audience of individual benefits gained through academic knowledge. Before the workshop, the science communicators focused on concepts relevant to the scientific community. Videos began with a short explanation of the concept, followed by examples from everyday life. The goal of each video was to shed light on (overlooked) aspects of everyday life and enhance viewers’ understanding of them. The videos were not initially intended to offer action-oriented recommendations; however, following the content strategy recommended by the algorithmic experts, the videos focused on general interest topics, such as fear of missing out (FOMO), and were restructured so viewers could understand from the start what they would learn by the video’s end. The new approach emphasized framing the relevance of the scientific information at the start of each video by persuading the viewer that the knowledge presented could be practically beneficial to their everyday lives (e.g., “If you understand the concept of technological fix, you will know why the mere presence of an iPad won’t improve your children’s education”). As a result, the videos were edited to conclude with a takeaway offering practical advice on applying the previously introduced concept in everyday life (e.g., teaching children how to use new technologies in educational contexts responsibly). By adapting content creation practices to the content strategies presented by the consultants, the science communicators thus, to a certain degree, took on the proposed role of an audience-oriented service provider. Despite significant concerns about this approach (many social science concepts are not developed to guide behavior but rather to understand complex social phenomena), the science communicators adapted their practices with the aim of making it easier for platform users to access the science videos and, therefore, the academic knowledge on the platform.

5 Discussion

This study demonstrates a three-step mechanism by which recommendation algorithms indirectly shape science content on social media. Using empirical data from a two-year ethnographic study of the production of social science YouTube videos, we show how recommendation algorithms indirectly shape the content of science communication through the power they exert over content visibility. Changes in content are particularly driven by practitioners’ algorithmic imaginaries, inferred content strategies, and the adaptation of content creation practices to these strategies. Hence, algorithms’ impact depends less on what they actually do and more on the qualities that people assign to them. In our case, science communicators integrated algorithmic experts’ algorithmic imaginaries and implemented content strategies in their content creation practices, with the assumption that these approaches would enhance the visibility of science content. In the following section, we first situate our observations of three shifts in science communication practices on social media within the relevant literature and then discuss the broader implications of our findings for both research and practice.

(1) Reinforcing the deficit model for algorithmic mass appeal. Informed by their algorithmic imaginaries, the team categorized the audience into a niche group interested in social science and “simple” platform users. To make content visible to both, the science communicators adapted their language [Hermida & Mellado, 2020] by avoiding technical jargon whenever possible. This approach led to a dilemma between increased visibility [Hoang, 2020; Metag, 2021] and the dilution of authentic scientific knowledge [A. S. Taylor, 2022]. Informed by algorithmic imaginaries, the team came to define successful science content as content that addresses knowledge gaps and corrects public misconceptions through academic insights. This approach aligns with advertisers’ monetization strategies [Ma & Kou, 2021], in which a problem is created before a solution is offered. While constructing a deficit in the viewer’s knowledge may boost visibility, it also unintentionally reinforces the deficit model in social media science communication — a model that science communicators have long tried to move beyond [Bucchi, 2008]. A further strategy arising from the team’s algorithmic imaginaries involved sacrificing accuracy for visibility, based on the assumption that algorithms do not handle complexity well. While adjusting content creation practices to suit this assumption might increase visibility, the practice conflicts with key science communication quality standards, such as accuracy, objectivity, and truthfulness [Fähnrich et al., 2023]. The team’s algorithmic imaginary implied that achieving visibility through simplification while staying true to the intricate details of the source material was not feasible. The ongoing challenge of balancing scientific rigor and sustaining public trust in science while simplifying content has become even more pressing in the context of algorithmic visibility [Weingart, 2022], which some argue determines the fate of science communication [Hoang, 2020].

(2) Redefining the (algorithmic) relevance of science content. The algorithmic imaginary introduced in the production process prioritized viewer satisfaction — quantified by metrics such as views, returning viewers, and viewing time — as the primary determinant of the social media visibility of science content. This algorithmic imaginary led to specific storytelling strategies aimed at extending viewing time. While storytelling techniques, such as posing dramatic questions, are generally known to increase engagement with science content [Huang & Grant, 2020], they are now employed strategically to keep audiences on the platform longer (e.g., by answering the main question only at the very end of the video), thereby boosting the content’s visibility. From the algorithmic imaginary of what constitutes viewer satisfaction, it was also inferred that the individual usability of scientific knowledge plays a key role in the algorithms’ decision on visibility. While this approach might enhance the accessibility of scientific knowledge, it frames audiences primarily as consumers with individual “user needs” (C1) [van Dijck et al., 2018]. By adapting content to perceived algorithmic preferences [Mitova et al., 2022], such as fulfilling practical user needs, the societal role of science communication is diminished, thereby sidelining its cultural and democratic functions, which include enabling participation in the “sense-making of the world” [Davies, 2021, p. 124], fostering critical thinking skills, and legitimizing science in society [Davies, 2021]. The team therefore had to continually balance meeting practical user needs to gain visibility with the democratic values that motivate their work — values assumed to be deemed irrelevant by the algorithmic system.

In general, dependence on algorithmic visibility requires science communicators — much like journalists — to balance giving “the audience what they want” [Ferrucci et al., 2020, p. 1588] with preserving scientific autonomy [Peterson-Salahuddin & Diakopoulos, 2020] in defining what is considered relevant knowledge for the public. Ultimately, the public relevance of scientific knowledge should not be measured only by its algorithmically determined visibility; however, assessing science formats through social media metrics — such as views and watch time — reinforces this logic [Christin, 2022].

(3) Reframing science communication as a service. To increase visibility, the practice of science communication was framed as a commercial service comparable to selling pharmaceuticals or beauty products. Consequently, persuasive strategies, such as crafting catchy titles and attention-grabbing thumbnails, were integrated into content creation practices. The science communicators perceived these practices as conflicting with the credibility of science content [Weingart, 2022]. Highlighting the immediate, practical benefits of academic knowledge — and thereby framing science communication as a service — was introduced as a strategy for enhancing visibility. This shift, as observed in this case, has the potential to reshape the public’s understanding of science [Bucchi, 2008] by reducing its perceived value to a problem-solving service for individuals; however, such adaptation to algorithmic preferences may also boost the visibility of credible science content and increase its accessibility. Thus, science communicators must weigh the societal benefits in terms of knowledge accessibility [Medvecky, 2017] against potential harm to the public image of and, ultimately, trust in science [Weingart, 2022]. Since public broadcasters and science communicators (in this case) are not allowed to monetize science content, the effectiveness of framing science communication as a service remains questionable — and ultimately unverifiable. While this strategy may increase visibility, it might also risk diluting academic identity [van Dijck, 2013] and authenticity [R. Taylor, 2020], thereby counteracting audience identification [Boy et al., 2020; Welbourne & Grant, 2016] with an institution characterized by organized skepticism [Weingart, 2022]. Scientific markers (e.g., theory names, titles, researcher credentials, and academic references) might also be interpreted as increasing authenticity and, thus, visibility. Hence, the main challenge for science communicators lies in navigating the tension between producing trustworthy, authentic content and ensuring its visibility through strategies that may not align with their core values.

6 Conclusions

Taken together, algorithms can be interpreted in various ways, and algorithmic imaginaries can be mobilized for multiple purposes [Christin, 2017]. The productive power of these context-dependent imaginaries becomes evident in the content creation process of science content for social media. This indirect influence of AI-powered recommendation algorithms underscores the need to look beyond technological features and traditional sender-receiver models, as socio-technical systems profoundly reshape the daily professional routines of science communicators. We must acknowledge how algorithms and AI shape our social world through the power they exert over content visibility — whether on social media or in generative AI tools like ChatGPT [Hoang, 2020; Gillespie, 2024].

A key challenge lies in the opaque nature of algorithms, which fosters a wide range of interpretations [Bucher, 2017] that influence how we interact with the technology. This opacity enables new professional roles — such as social media consultants, algorithmic experts, or prompt engineers, who are often detached from the field of science communication — to mobilize and legitimize individual interpretations. However, experiences with socio-technical systems and expertise remain highly context- and person-dependent. Consequently, navigating socio-technical systems requires more than a basic understanding of their computational rules; it demands a careful balance of technological expertise with the core principles of science communication. If we adopt generalized content production strategies without scrutiny, we risk undermining decades of research and experience in science communication. We therefore stress the urgent need for algorithmic expertise that aligns with the field’s core objectives.

We propose that the fate of science communication is determined not solely by recommendation algorithms [Hoang, 2020] or the “Notorious GPT” [Schäfer, 2023, p. 1] but also by the actors who influence the discourse surrounding these technologies. Our study demonstrates that investigating algorithmic technologies in isolation is insufficient; we must also analyze how experiences with — and discourses around — algorithms and AI influence the field’s evolving practices as these technologies become embedded in everyday professional routines. Following Davies [2022], we emphasize the need for new, critical perspectives from STS, particularly practice-oriented approaches [Davies et al., 2024], to better understand how algorithms and AI shape science communication.

Since this study focused on the production of social science YouTube videos, we call for further research investigating how different science communicators on various platforms interpret algorithmic systems and how this sense-making influences content creation practices — and ultimately, science content. Because algorithms form the basis of AI, this perspective may also be relevant for interactions with generative AI platforms. As perceptions of the inner workings of algorithmic systems play a fundamental role in the creation of science content, it is essential to engage critically with how these imaginaries are developed, mobilized, and institutionalized. We propose that the future of science communication does not rest solely in the hands of algorithms and AI [Hoang, 2020] but also relies on practitioners’ expertise in responsibly navigating opaque algorithmic systems.

Acknowledgments

We would like to thank the editors of this Special Issue as well as the reviewers of our paper for their dedicated efforts and insightful feedback. We extend our sincere gratitude to Anne K. Krüger, lead of the research group Reorganizing Knowledge Practices at the Weizenbaum Institute, for her valuable expertise and encouragement throughout the paper development. We also appreciate the critical contributions of our research fellows Birte Fähnrich and Mohammad Rezazade Mehrizi. A special thanks goes to Katharina Berr and Kira Lehmann for their critical input and dedicated support.

References

Allgaier, J. (2019). Science and environmental communication on YouTube: strategically distorted communications in online videos on climate change and climate engineering. Frontiers in Communication, 4. https://doi.org/10.3389/fcomm.2019.00036

Anderson, J. T. L., Howell, E. L., Xenos, M. A., Scheufele, D. A., & Brossard, D. (2021). Learning without seeking? Incidental exposure to science news on social media & knowledge of gene editing. JCOM, 20, A01. https://doi.org/10.22323/2.20040201

Bishop, S. (2019). Managing visibility on YouTube through algorithmic gossip. New Media & Society, 21, 2589–2606. https://doi.org/10.1177/1461444819854731

Bishop, S. (2020). Algorithmic experts: selling algorithmic lore on YouTube. Social Media + Society, 6. https://doi.org/10.1177/2056305119897323

Bishop, S. (2023). Influencer creep: how artists strategically navigate the platformisation of art worlds. New Media & Society. https://doi.org/10.1177/14614448231206090

Bourdieu, P. (1979). Die feinen Unterschiede: Kritik der gesellschaftlichen Urteilskraft [The subtle differences: Critique of social judgement]. Suhrkamp.

Boy, B., Bucher, H.-J., & Christ, K. (2020). Audiovisual science communication on TV and YouTube. How recipients understand and evaluate science videos. Frontiers in Communication, 5. https://doi.org/10.3389/fcomm.2020.608620

Broer, I. (2020). Rapid reaction: ethnographic insights into the science media center and its response to the COVID-19 outbreak. JCOM, 19, A08. https://doi.org/10.22323/2.19050208

Bucchi, M. (2008). Of deficits, deviations and dialogues: theories of public communication of science. In M. Bucchi & B. Trench (Eds.), Handbook of public communication of science and technology. Routledge. https://doi.org/10.4324/9780203928240

Bucher, T. (2017). The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20, 30–44. https://doi.org/10.1080/1369118x.2016.1154086

Burgess, J., & Green, J. (2018). YouTube: online video and participatory culture (2nd ed.). Polity.

Buschow, C., Noster, A., Hettwer, H., Lich-Knight, L., & Zotta, F. (2024). Transforming science journalism through collaborative research: a case study of the German “WPK Innovation Fund for Science Journalism”. JCOM, 23, N02. https://doi.org/10.22323/2.23020802

Charmaz, K. (2014). Constructing grounded theory (2nd ed.). SAGE Publications.

Christin, A. (2017). Algorithms in practice: comparing web journalism and criminal justice. Big Data & Society, 4. https://doi.org/10.1177/2053951717718855

Christin, A. (2020). The ethnographer and the algorithm: beyond the black box. Theory and Society, 49, 897–918. https://doi.org/10.1007/s11186-020-09411-3

Christin, A. (2022). Metrics at work: journalism and the contested meaning of algorithms. Princeton University Press.

Cotter, K. (2019). Playing the visibility game: how digital influencers and algorithms negotiate influence on Instagram. New Media & Society, 21, 895–913. https://doi.org/10.1177/1461444818815684

Cotter, K. (2024). Practical knowledge of algorithms: the case of BreadTube. New Media & Society, 26, 2131–2150. https://doi.org/10.1177/14614448221081802

Davies, S. R. (2021). An empirical and conceptual note on science communication’s role in society. Science Communication, 43, 116–133. https://doi.org/10.1177/1075547020971642

Davies, S. R. (2022). STS and science communication: reflecting on a relationship. Public Understanding of Science, 31, 305–313. https://doi.org/10.1177/09636625221075953

Davies, S. R., Wells, R., Zollo, F., & Roche, J. (2024). Unpacking social media ‘engagement’: a practice theory approach to science on social media. JCOM, 23, Y02. https://doi.org/10.22323/2.23060402

Dawson, E. (2018). Reimagining publics and (non) participation: exploring exclusion from science communication through the experiences of low-income, minority ethnic groups. Public Understanding of Science, 27, 772–786. https://doi.org/10.1177/0963662517750072

DeVito, M. A. (2021). Adaptive folk theorization as a path to algorithmic literacy on changing platforms. Proceedings of the ACM on Human-Computer Interaction, 5, 1–38. https://doi.org/10.1145/3476080

Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5, 809–828. https://doi.org/10.1080/21670811.2016.1208053

Emerson, R. M., Fretz, R. I., & Shaw, L. L. (2011). Writing ethnographic fieldnotes. University of Chicago Press.

Enzingmüller, C., & Marzavan, D. (2024). Collaborative design to bridge theory and practice in science communication. JCOM, 23, Y01. https://doi.org/10.22323/2.23020401

Fähnrich, B., Weitkamp, E., & Kupper, J. F. (2023). Exploring ‘quality’ in science communication online: expert thoughts on how to assess and promote science communication quality in digital media contexts. Public Understanding of Science, 32, 605–621. https://doi.org/10.1177/09636625221148054

Ferrucci, P., Nelson, J. L., & Davis, M. P. (2020). From “public journalism” to “engaged journalism”: imagined audiences and denigrated discourses. International Journal of Communication, 14, 1586–1604. https://ijoc.org/index.php/ijoc/article/view/11955

Fisher, R. (2022). The translator versus the critic: a flawed dichotomy in the age of misinformation. Public Understanding of Science, 31, 273–281. https://doi.org/10.1177/09636625221087316

Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12, 347–364. https://doi.org/10.1177/1461444809342738

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski & K. A. Foot (Eds.), Media technologies: essays on communication, materiality and society (pp. 167–193). The MIT Press.

Gillespie, T. (2024). Generative AI and the politics of visibility. Big Data & Society, 11, 1–14. https://doi.org/10.1177/20539517241252131

Glatt, Z. (2022). Precarity, discrimination and (in)visibility: an ethnography of “The Algorithm” in the YouTube influencer industry. In E. Costa, P. G. Lange, N. Haynes & J. Sinanan (Eds.), The Routledge companion to media anthropology (pp. 546–559). Routledge.

Hargittai, E., Füchslin, T., & Schäfer, M. S. (2018). How do young adults engage with science and research on social media? Some preliminary findings and an agenda for future research. Social Media + Society, 4, 1–10. https://doi.org/10.1177/2056305118797720

Hermida, A., & Mellado, C. (2020). Dimensions of social media logics: mapping forms of journalistic norms and practices on Twitter and Instagram. Digital Journalism, 8, 864–884. https://doi.org/10.1080/21670811.2020.1805779

Hine, C. (2020). Ethnography for the internet: embedded, embodied and everyday. Routledge. https://doi.org/10.4324/9781003085348

Hoang, L. N. (2020). Science communication desperately needs more aligned recommendation algorithms. Frontiers in Communication, 5, 598454. https://doi.org/10.3389/fcomm.2020.598454

Huang, T., & Grant, W. J. (2020). A good story well told: storytelling components that impact science video popularity on YouTube. Frontiers in Communication, 5, 581349. https://doi.org/10.3389/fcomm.2020.581349

Jasanoff, S. (2015). Future imperfect: science, technology and the imaginations of modernity. In S. Jasanoff & S.-H. Kim (Eds.), Dreamscapes of modernity: sociotechnical imaginaries and the fabrication of power (pp. 1–33). University of Chicago Press. https://doi.org/10.7208/chicago/9780226276663.003.0001

Jasanoff, S., & Kim, S.-H. (2009). Containing the atom: sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva, 47, 119–146. https://doi.org/10.1007/s11024-009-9124-4

Klein-Avraham, I., Greussing, E., Taddicken, M., Dabran-Zivan, S., Jonas, E., & Baram-Tsabari, A. (2024). How to make sense of generative AI as a science communication researcher? A conceptual framework in the context of critical engagement with scientific information. JCOM, 23, A05. https://doi.org/10.22323/2.23060205

Litt, E., & Hargittai, E. (2016). The imagined audience on social network sites. Social Media + Society, 2, 1–12. https://doi.org/10.1177/2056305116633482

Ma, R., & Kou, Y. (2021). “How advertiser-friendly is my video?”: YouTuber’s socioeconomic interactions with algorithmic content moderation. Proceedings of the ACM on Human-Computer Interaction, 5, Article 429, 1–25. https://doi.org/10.1145/3479573

Madden, R. (2023). Being ethnographic: a guide to the theory and practice of ethnography (3rd ed.). SAGE Publications.

Mede, N. G., Cologna, V., Berger, S., Besley, J. C., Brick, C., Joubert, M., Maibach, E., Mihelj, S., Oreskes, N., & Schäfer, M. S. (2024). Public communication about science across 68 countries: global evidence on how people get information and communicate about science-related matters. OSF Preprints. https://doi.org/10.31219/osf.io/xb3ha

Medvecky, F. (2017). Fairness in knowing: science communication and epistemic justice. Science and Engineering Ethics, 24, 1393–1408. https://doi.org/10.1007/s11948-017-9977-0

Metag, J. (2021). Tension between visibility and invisibility: science communication in new information environments. Studies in Communication Sciences, 21, 129–144. https://doi.org/10.24434/j.scoms.2021.01.009

Mitova, E., Blassnig, S., Strikovic, E., Urman, A., Hannak, A., de Vreese, C. H., & Esser, F. (2022). News recommender systems: a programmatic research review. Annals of the International Communication Association, 47, 84–113. https://doi.org/10.1080/23808985.2022.2142149

Neyland, D. (2008). Organizational ethnography. SAGE Publications. https://doi.org/10.4135/9781849209526

Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: theorizing the contingent cultural commodity. New Media & Society, 20, 4275–4292. https://doi.org/10.1177/1461444818769694

Olesk, A. (2021). The types of visible scientists. JCOM, 20, A06. https://doi.org/10.22323/2.20020206

Ørmen, J., & Gregersen, A. (2023). Institutional polymorphism: diversification of content and monetization strategies on YouTube. Television & New Media, 24, 432–451. https://doi.org/10.1177/15274764221110198

Pavelle, S., & Wilkinson, C. (2020). Into the digital wild: utilizing Twitter, Instagram, YouTube and Facebook for effective science and environmental communication. Frontiers in Communication, 5, 575122. https://doi.org/10.3389/fcomm.2020.575122

Peterson-Salahuddin, C., & Diakopoulos, N. (2020). Negotiated autonomy: the role of social media algorithms in editorial decision making. Media and Communication, 8, 27–38. https://doi.org/10.17645/mac.v8i3.3001

Petre, C., Duffy, B. E., & Hund, E. (2019). “Gaming the system”: platform paternalism and the politics of algorithmic visibility. Social Media + Society, 5, 1–12. https://doi.org/10.1177/2056305119879995

Poell, T., Nieborg, D. B., & Duffy, B. E. (2021). Platforms and cultural production. Polity Press.

Schäfer, M. S. (2023). The notorious GPT: science communication in the age of artificial intelligence. JCOM, 22, Y02. https://doi.org/10.22323/2.22020402

Schellewald, A. (2022). Theorizing “stories about algorithms” as a mechanism in the formation and maintenance of algorithmic imaginaries. Social Media + Society, 8. https://doi.org/10.1177/20563051221077025

Schjøtt Hansen, A., & Hartley, J. M. (2021). Designing what’s news: an ethnography of a personalization algorithm and the data-driven (re)assembling of the news. Digital Journalism, 11, 924–942. https://doi.org/10.1080/21670811.2021.1988861

Seaver, N. (2017). Algorithms as culture: some tactics for the ethnography of algorithmic systems. Big Data & Society, 4, 1–12. https://doi.org/10.1177/2053951717738104

Spradley, J. P. (1980). Participant observation. Holt, Rinehart and Winston.

Stoldt, R., Wellman, M., Ekdale, B., & Tully, M. (2019). Professionalizing and profiting: the rise of intermediaries in the social media influencer industry. Social Media + Society, 5, 1–11. https://doi.org/10.1177/2056305119832587

Tatalovic, M. (2018). AI writing bots are about to revolutionise science journalism: we must shape how this is done. JCOM, 17, E. https://doi.org/10.22323/2.17010501

Taylor, A. S. (2022). Authenticity as performativity on social media. Springer International Publishing. https://doi.org/10.1007/978-3-031-12148-7

Taylor, R. (2020). It’s time to crack open the black box of social media algorithms. The Telegraph. https://www.telegraph.co.uk/news/2020/02/04/time-crack-open-black-box-social-media-algorithms/

Väliverronen, E. (2022). Massimiano Bucchi: ‘We have all witnessed a spectacular, unprecedented experiment of science communication’. Public Understanding of Science, 31, 367–369. https://doi.org/10.1177/09636625221087318

van Dijck, J. (2013). ‘You have one identity’: performing the self on Facebook and LinkedIn. Media, Culture & Society, 35, 199–215. https://doi.org/10.1177/0163443712468605

van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1, 2–14. https://doi.org/10.17645/mac.v1i1.70

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Villegas-Simón, I., Anglada-Pujol, O., Castellví Lloveras, M., & Oliva, M. (2023). “I’m not just a content creator”: digital cultural communicators dealing with celebrity capital and online communities. International Journal of Communication, 17, 6447–6465. https://ijoc.org/index.php/ijoc/article/view/21026

Weingart, P. (2017). Wissenschaftskommunikation unter digitalen Bedingungen. Funktionen, Akteure und Probleme des Vertrauens [Science communication under digital conditions: Functions, actors and problems of trust]. In P. Weingart, H. Wormer, A. Wenninger & R. F. Hüttl (Eds.), Perspektiven der Wissenschaftskommunikation im digitalen Zeitalter [Perspectives on science communication in the digital age] (pp. 29–59). Velbrück Wissenschaft.

Weingart, P. (2022). Trust or attention? Medialization of science revisited. Public Understanding of Science, 31, 288–296. https://doi.org/10.1177/09636625211070888

Welbourne, D. J., & Grant, W. J. (2016). Science communication on YouTube: factors that affect channel and video popularity. Public Understanding of Science, 25, 706–718. https://doi.org/10.1177/0963662515572068

Yeo, S. K., Cacciatore, M. A., Su, L. Y.-F., McKasy, M., & O’Neill, L. (2021). Following science on social media: the effects of humor and source likability. Public Understanding of Science, 30, 552–569. https://doi.org/10.1177/0963662520986942

Notes

1. The educational mandate of public broadcasters in Germany includes disseminating content that serves the public interest. For digital formats on social media, this mandate is increasingly evaluated through platform metrics such as views and watch time, on which decisions to fund specific formats are based. One of the project’s benchmarks was producing at least seven videos that achieved 10,000 views each.

About the authors

Clarissa Elisa Walter is a doctoral researcher at the Weizenbaum Institute and the Berlin University of the Arts. Her ethnographic research draws on Science and Technology Studies to explore the implications of algorithmic systems and AI for science communication practice. As a practitioner herself, she communicates social science concepts on social media.

E-mail: clarissa.walter@weizenbaum-institut.de

Sascha Friesike is Professor of Digital Innovation Design at the Berlin University of the Arts and Director of the Weizenbaum Institute. He is also an associate researcher at the Alexander von Humboldt Institute for Internet and Society. An industrial engineer by training, he holds a PhD from the University of St. Gallen. His research focuses on the role digital technologies play when something new is created. He investigates the role of digitalization in science and looks at how creative people work.

E-mail: sascha.friesike@weizenbaum-institut.de