1 Introduction

Utopian and dystopian visions regularly dominate the public discourse on artificial intelligence (AI) [Cave & Dihal, 2019]. These normative debates often centre on the relationship between humans and machines but increasingly also consider how AI (re)consolidates existing discrimination and social inequalities. What is important to note in light of these debates is that AI as a notion and sociotechnical phenomenon is itself an object of negotiation. While AI is now routinely treated as self-evident [Suchman, 2023], it is still very much under formation, with public perception and discursive framing exerting considerable influence.

This underscores the crucial role of public and science communication in shaping the societal understanding of AI. Given the complexity and wide-ranging implications of AI, stakeholders in its public and science communication wield significant discursive power, influencing public perceptions, policy directions and societal expectations of the technology. Examining stakeholders’ positions and discourses offers critical insights into the power dynamics and contestations that shape both public and science communication about AI. This study thus investigates how key stakeholders construct and negotiate competing imaginaries of AI. It builds on the concept of imaginaries to study how AI is being negotiated between stakeholders in the U.S., China and Germany — and thus seeks to “trace its sources of power and to demystify its referents” [Suchman, 2023, p. 1]. The concept of imaginaries has been applied across various fields, including science communication. For example, studies have utilised imaginaries to explore public discourse surrounding gene editing and other forms of technological and industrial advancement [Das et al., 2024; Vicente & Dias-Trindade, 2021].

The early sociological work on imaginaries highlighted the role of perceptions, discourses and future visions in the complex interactions and negotiations that arise when co-constructing technological developments [Anderson, 1983; Taylor, 2003]. More recent work on sociotechnical imaginaries [Jasanoff & Kim, 2009; Jasanoff & Kim, 2015] enables scholars to reconstruct the multiple, contested and often commodified [Mager & Katzenbach, 2021] discursive negotiations between different actors regarding technological development and its integration into society. In a more concrete form [Richter et al., 2023], it offers a constructive framework for questioning the role of different stakeholders in shaping AI imaginaries and the often-contentious negotiation processes around AI innovation and application.

While various studies have analysed national, industrial and political visions of digital media and automation [Felt & Öchsner, 2019; Mager, 2017], there is a lack of research analysing how imaginations of potential futures of AI are negotiated between stakeholders in the actual field of AI development. Against this background, this paper examines the imaginaries of different stakeholders in the field. It thus explores the dominant imaginaries pushed by key stakeholders in the U.S., Germany and China. The study further questions how these countries and stakeholders relate to each other in the context of the dominant imaginaries.

We conducted 40 interviews across the industry, government, academia, media and civil society sectors in three leading countries in AI development and debate: the U.S., Germany and China. As tensions between the U.S. and China rise in the AI and tech sector [Schindler et al., 2023], it is important to shed light on the emerging and institutionalising future visions of AI development and innovation across the two nations and to offer a comparative view of the socio-political differences and similarities. Germany represents an important case study as part of Europe. Europe has acted as a counterpoint to the U.S.’s reluctance to regulate under innovation pressure. It has also distinguished itself from the U.S.-China discourse in a quest to preserve its AI sovereignty [Mügge, 2024] and stay relevant in the global AI market.

2 Negotiating AI imaginaries

As in other societally relevant debates, stakeholders from different sectors in the AI environment have vied for attention to shape the public perception of AI and to have their preferred future vision of AI development, implementation and regulation heard by decision-makers [Schäfer, 2009]. Yet, current research into AI stakeholders has often focused on general issues, such as ethical or responsible AI [Fukuda-Parr & Gibbons, 2021], national AI strategies [Mager & Katzenbach, 2021; Hälterlein, 2024; Paltieli, 2022] and national AI imaginaries [Kao, 2024; Kim, 2023]. Alternatively, studies have prioritised a particular AI stakeholder sector — for instance, policymakers [Breuer & Müller, 2024], healthcare [Puaschunder, 2019; Scott et al., 2021], industry [Pereira & Hargreaves, 2024; Rohde & Santarius, 2023] or the media’s role in employing and representing AI in public discourse [Beckett & Yaseen, 2023; Borchardt et al., 2024; Ji et al., 2024]. Simultaneously, the literature has reflected on the strong industry dominance in public discourse [Brennen et al., 2018; Fischer & Puschmann, 2021]. Yet, we lack a comprehensive analysis cutting across the various AI stakeholder groups.

Historically, the discourse surrounding AI has gone through alternating periods of intense activity and decline [Haenlein & Kaplan, 2019]. In these ongoing attention cycles since the 1950s, different stakeholders have consistently influenced and shaped the AI debate and public perception [Haenlein & Kaplan, 2019]. Thus, it is crucial to analyse the composition of these stakeholders and their influence on the broader AI landscape. Recent studies on stakeholder typologies in technology debates [Gorwa, 2022] have sparked renewed interest in the key players shaping the future of AI and other technological trajectories. Current work on AI stakeholder distributions on Twitter/X has shown that industry, media, government, academia and, increasingly, civil society actors have been present. However, stakeholder involvement and relevance have fluctuated over the ten years (2012–2021) across the U.S. and Germany, in line with the ongoing public discourse [Richter et al., forthcoming].

Thus, stakeholders play a pivotal role in the public discourse surrounding AI: they shape its future trajectories and implementation by pushing their imaginaries of AI [Richter et al., 2023]. Such sociotechnical imaginaries are “collectively held, institutionally stabilised, and publicly performed visions of desirable futures” [Jasanoff, 2015, p. 4]. This approach goes beyond communication, incorporating factors such as government interventions, corporate investments, technological innovations and other elements that contribute to realising a projected future. Additionally, sociotechnical imaginaries include a strong future-oriented temporal component. They “at once describe attainable futures and prescribe futures that states believe ought to be attained” [Jasanoff & Kim, 2009, p. 120]. By doing so, they influence present actions and decisions by articulating visions of preferred futures. Scholars have particularly highlighted the role of public communication in the context of multiple, contested and competing imaginaries and their impact on broader societal decisions, such as how to implement and regulate AI [Brause et al., 2023; Bareis & Katzenbach, 2021].

Building on the concept of sociotechnical imaginaries, scholars have employed the “sociotechnical imaginaries in public communication” framework for a more focused and structured approach to analysing imaginaries in public communication [Richter et al., forthcoming]. This framework integrates insights from public communication scholarship about science and technologies [Schäfer, 2009] into scholarship on sociotechnical imaginaries. It conceptualises sociotechnical imaginaries in public communication “as publicly constructed visions of (un)desirable socio-technical futures. They can guide action, mobilise resources and layout trajectories for the materialisation or prevention of those futures” [Brause et al., forthcoming]. Distinct from related concepts such as frames, narratives or discourses, this framework emphasises visions of socio-technical futures aimed at garnering collective support or opposition, connecting to specific actions shaping these futures. This framework is both descriptive and prescriptive; when sufficiently supported by stakeholders and the public, imaginaries can significantly influence whether the envisioned outcomes are actualised or avoided. It thus facilitates systematic analyses of public and stakeholder representations of AI.

This paper aims to expand current research on specific stakeholders, nation-states or AI concerns by providing a comprehensive analysis spanning various stakeholder groups within the AI environment as well as a cross-national comparison. Employing the sociotechnical imaginaries in public communication framework offers a new perspective for identifying important imaginaries and the stakeholders shaping current discourse and future trajectories. It further allows us to discern (dis)connects between stakeholders within countries and larger political positionings globally. Lastly, it directs the analytical lens onto the future AI trajectories currently being developed and their potential implications for socio-political decision-making.

3 Methodology

The study is based on semi-structured interviews with AI experts in industry, government, academia, media and civil society from Germany, the U.S. and China. Previous research has focused heavily on analysing policy and AI strategies as well as on the AI stakeholders or AI imaginaries represented in media debates. However, both expert surveys [Puaschunder, 2019] and interviews have yielded important insights into the development of imaginaries [Rohde & Santarius, 2023] and AI implementation [Borchardt et al., 2024]. Expert interviews provide in-depth knowledge about stakeholders’ navigation of the larger AI environment and their beliefs and imaginations around AI. These can inform broader-scale observations that enable us to compare the sociocultural and techno-political differences across the three case studies.

The expert categories were chosen following a two-dimensional stakeholder typology of AI discourses by Richter et al. The typology was developed based on German and U.S. Twitter data from 2012 to 2021 mapping the distribution and longitudinal relevance of AI stakeholders in the German and English AI discourse [Richter et al., forthcoming]. Experts from industry included individuals from top-ranking tech corporations such as platform companies and AI associations. Academia was represented by Ivy League AI centres and German academic AI clusters, with experts providing overarching commentary, including on regulatory and governmental trajectories. While most NGOs in the U.S. were very tech and AI focused, German NGOs often had a wider focus including project teams on tech innovation and AI. Lastly, media experts spanned both tech outlets and general publications with strong AI coverage and internal usage, providing commentary on the larger media environment. We followed Marres et al. [2024] in applying a broad and integrative understanding of experts as an extended peer community that is (1) consistently involved in ongoing AI research or discourse and (2) actively intervenes in existing knowledge, knowledge production and changing understandings of AI.

The three countries represent three powerful nation-states with different regulatory traditions and frameworks [Perthes, 2021]. We assume that the AI stakeholder environment varies greatly across the three countries. Whereas the strong position of industry actors in AI discourse and development is an international phenomenon [Brause et al., 2023], it is particularly evident in the U.S., home to most major technology firms and a market-based regulatory framework [Bradford, 2023]. In Germany, we anticipate a more diverse stakeholder setup. There are strong industry stakeholders in Germany, but also a substantive policy debate and public regulation at both national and EU level [Bradford, 2023]. This further allows for more effective interventions by NGOs and facilitates public debates. In China, the state government and party play a significantly stronger and more centralising role in leading AI discourse and development [Pan et al., 2024]. The different political and economic trajectories are likely further reflected in the uptake of different imaginaries by stakeholder sectors across the three case studies and in cross-sectoral advocacy for emerging AI imaginaries. Additionally, China, as part of East Asia, has historically shown a more positive attitude in the development of tech imaginaries, as seen in its popular cultural embrace of robots [De Boer et al., 2020], than the U.S. and Germany, with the latter particularly known for its more cautious approach to tech innovation [Hornung & Schnabel, 2009]. Against this background, we sought to understand the similarities and differences in the AI imaginaries of different stakeholder sectors across the countries and how they inform the future techno-political and socio-economic development of technologies on the rise.

Therefore, the semi-structured interviews followed an interview guideline developed to answer two guiding research questions: which dominant imaginaries are articulated by key stakeholders in each country? And how do these countries and different stakeholders within each country relate to each other in the context of these dominant imaginaries? The interview guideline was structured according to seven main categories, with follow-up questions inquiring about 1) interviewees’ position(ing), 2) their understanding of AI, 3) the role of AI in their organisation, 4) the larger AI environment, 5) aims and responsibility regarding AI, 6) their communication strategy on AI and, lastly, 7) a general future outlook on the AI environment.

Interviewees were recruited through direct messages, snowball sampling and network outreach among the targeted stakeholders and organisations. Interviews lasted between 30 and 60 minutes and were conducted in person or online, with some follow-up correspondence for details or additional material, including websites, reports, PR and media articles, which formed part of the larger contextual corpus. The resulting corpus comprised 40 interviews. Most interviewees provided cross-sectoral expertise, offering insights into the larger AI environment and sectoral relations (Table 1). As interviewees’ positionality varied greatly across countries and stakeholder groups, all data was anonymised.

Table 1: Interview distribution across sectors.

NGOs and media were excluded from our interviews in China. Due to the systematic constraints imposed on non-governmental and other civil organisations [Han, 2018], there are few civil society actors in China devoted to AI-related issues compared to their Western counterparts. While we successfully engaged journalists for expert consultations in the media sector, they declined to participate in interviews. As media organisations in China are predominantly state-owned, journalists must exercise caution when sharing views with foreign researchers due to the sensitive nature of their work. Given these circumstances, we focused on experts from the industry and universities with cross-sector expertise, who could provide well-rounded insights into various AI-related areas in China. This helped us to compensate for the absence of first-hand interviews with NGOs and media practitioners.

The interviews were automatically transcribed and then manually cleaned for analysis. Based on the conceptual framework of sociotechnical imaginaries in public communication [Brause et al., forthcoming], we coded each interview regarding (1) the proponent of an imaginary, (2) the type of AI, (3) the vision for AI, (4) its desirability, (5) its spatiotemporal focus and (6) its potential implications. Memos were created for each interview following the six categories to identify emerging and established AI imaginaries based on what AI visions were articulated, how desirable they were and for whom, and what counter-imaginaries were present within each sector or across AI sectors. The transcripts and memos were then analysed using critical discourse analysis [Wodak, 2015]. This meant, first, identifying overarching themes before, second, mapping the relational development of imaginaries across stakeholder groups and the major emerging AI imaginaries for each country. Third, we compared the findings for the countries under investigation and contextualised the interview results with the specific sociocultural and political settings in these countries.
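To make the coding scheme concrete, the six analytical categories can be thought of as one structured record per coded imaginary. The following Python sketch is purely illustrative: the study worked with qualitative memos rather than software, and all field names and the example entry are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImaginaryCode:
    """One coded imaginary from an interview transcript, mirroring the six
    categories of the sociotechnical imaginaries in public communication
    framework. Field names are illustrative, not taken from the study."""
    proponent: str             # (1) who articulates the imaginary
    ai_type: str               # (2) type of AI referenced (e.g. umbrella term, LLM)
    vision: str                # (3) the envisioned sociotechnical future
    desirability: str          # (4) (un)desirable, and for whom
    spatiotemporal_focus: str  # (5) spatial and temporal scope of the vision
    implications: List[str] = field(default_factory=list)  # (6) potential implications

# Hypothetical memo entry paraphrasing a German interview:
example = ImaginaryCode(
    proponent="DE5 (industry association)",
    ai_type="AI as umbrella term for automation technologies",
    vision="well-regulated innovation securing digital sovereignty",
    desirability="desirable for Germany and Europe",
    spatiotemporal_focus="national focus, globally oriented, near-term",
    implications=["regulation as exportable best practice",
                  "ongoing cross-sector negotiation"],
)
```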

4 Results

The analyses of the semi-structured interviews reveal clear differences across the three countries, but also point to similarities in larger tech imaginaries that impact the future visions of AI. The following analysis focuses on the most dominant imaginaries around AI as communicated in the interviews for each country. It then looks comparatively at how stakeholder groups and case study countries relate to each other in regard to the communicated AI imaginaries in the discussion.

Table 2: Overview of dominant AI imaginaries in Germany, the U.S. and China.

4.1 AI imaginaries in Germany

Germany’s strong ties to the European market and policy discourse underline a rift among German stakeholders between those who regard a strong regulatory focus as key to Germany’s and Europe’s relevance in the larger AI environment and those pushing for more innovation. While industry stakeholders advocate for economic opportunities through more open regulation, these debates coincide with ongoing international collaboration and with calls for more cross-stakeholder collaboration. Overall, the German corpus emphasises three dominant imaginaries.

First, the German analysis shows a clustering of governmental actors, industry associations and NGOs focused on the AI race for sovereignty as a developing imaginary. The imaginary envisions Germany retaining a key economic position in the global AI environment through well-regulated innovation to pave “the way for digital sovereignty” (DE5). While there is a strong German focus, the overarching goal is globally oriented. Local regulations are envisioned as potential best-practice frameworks for global adoption, similar to privacy regulations. Within this imaginary, the U.S. and China are often represented as global counter-players in this AI race (DE1, DE7). “Economic discourse on international competitiveness against China and the U.S. is becoming a reality right now (…) with very oppositional takes on where to situate Germany between regulation and pioneering development” (DE1). These oppositional takes revolve around the type and specificity of regulation needed, as AI is discussed as an umbrella term for automation technologies. Other stakeholders, especially those in industry, emphasise the need for more nuanced regulatory approaches to allow innovation to flourish within feasible guidelines for specific AI applications to stay in the race. “We also realise that the larger companies (…) support the whole thing to their advantage because they also have the money to say yes” (DE5). This imaginary foregrounds Germany’s role as part of Europe to establish a relevant position in the global market, emphasising European regulation and values as key competitive factors for ethical innovation. Values regarding privacy, data protection and democratic ideals in AI regulation and innovation are often summarised “a bit sweepingly under the term European values” (DE5). The current implications of this future vision of AI depend on the ongoing negotiations of what AI should and can be used for (DE10).

AI as a tool in human control is another German imaginary that has developed over the last decade. While academics have prominently pushed this imaginary, it is now further backed by a decentralised cluster of NGOs and industry. This imaginary has been referenced strongly by several tech-specialised media outlets as a relevant counter-imaginary. It emphasises de-escalating prominent utopian and dystopian views in public AI discourse fuelled by pop culture references such as the Terminator movies (DE10). Human agency and responsibility when using AI applications are foregrounded rather than an image of AI as self-determining. “We can do everything with AI, but a human has to be in charge and make the final decision” (DE4). This future vision of AI innovation and implementation heavily focuses on practical applications and regulations for societally beneficial adoption. The value of AI resides in its versatile potential to support humans in addressing more significant (societal) issues. Fears around job loss are countered with statements such as: “AI is simply a smart tool; it is not yet a replacement for humans or competitors in the field” (DE6). While AI is used as a term in public communication, various stakeholders emphasise the need for technological specificity around AI applications. “We need a more differentiated approach to the topic” (DE3). Different uses, potential risks and biases do not apply to the whole field but pertain to specific areas of innovation and need to be regulated appropriately. This imaginary emphasises the need for (public) education and AI literacy so that people can better understand the potential of AI technologies in everyday life, positive or negative (DE9).

Lastly, industry, advocacy groups tied to industry and unions are propagating a third imaginary envisioning AI cooperation for innovation. This imaginary pushes collaboration in AI innovation and regulation for an economically viable future in the global market. In contrast to the first imaginary’s focus on political standing, this imaginary emphasises Germany’s economic position. Proponents of this imaginary criticise the current European regulatory approach as too general and advocate for addressing specific AI applications differently to allow for global competitiveness. This includes a call for technologically specific language “because the stakeholders, the experts with whom we work, have different definitions” (DE8) of AI, and they require clear labels for collaboration. At the centre of this imaginary is cooperation for innovation between German AI stakeholders and beyond so that Germany can stand a chance against competition from the U.S. and China. “All of our projects generally have international cooperation partners. But (…) [they] naturally have to grow over a longer period of time” (DE7). This means that connections have to be made and nurtured now for future AI development, suggesting that “only cooperative approaches will be able to survive in competition with international players” (DE8). Europe plays a relevant role within this future vision as “anyone who only thinks in terms of national borders has already lost” (DE2). A sole focus on Germany is considered too narrow to retain international relevance in the AI market. However, in contrast to European values or regulation being key to international relevance, this imaginary revolves around cooperation across sectors and national borders to establish a global position. The general concern currently revolves around how to “get business, research, politics and civil society on the same page” (DE11).

Overall, these AI imaginaries highlight different but also overlapping standpoints on how to view AI’s future role in the context of Germany as part of a larger European political and economic network: as a political tool, a societal opportunity or an economic one. As such, they are in an ongoing cycle of negotiation, impacting future AI development and adaptation, especially regarding pending regulatory decisions.

4.2 AI imaginaries in the U.S.

In contrast to Germany, the U.S. stakeholder landscape revolves around a trifecta of industry stakeholders advocating for AI’s relevance, academia supplying relevant basic research and trained AI personnel, and non-governmental organisations benefiting from industry funding while often opposing industry views. Stakeholders frequently move across these sectors, highlighting an underlying dilemma of which views they represent: those of the company, the institution or their own. While “there may be cultural differences related to regulatory style that drive that difference between the U.S. and, say, the EU and Germany” (US4), there are also cultural differences between stakeholder groups such as industry and academia. However, three major AI imaginaries dominate the U.S. corpus.

First, the AI race for global (political) dominance imaginary is heavily advocated by the U.S. government and by industry lobbying. It emphasises AI’s potential to underscore U.S. dominance while grappling with regulatory needs. Thus, AI is considered a technology with high future potential across all sectors. However, without a clear definition, the term becomes both all-encompassing and meaningless in application. The strong economic focus in governmental discourse stems from the vocabulary and topics that government officials are comfortable with: “If you go to policymakers and you talk the language of the technologies, mostly they don’t understand you. But if you talk the language of the economist, that’s a familiar vocabulary or discourse” (US8). AI is, therefore, strongly emphasised as essential for economic growth. Various interviewees commented that the lack of understanding of AI, technologically and beyond economics, is a larger issue across the sector. They considered the AI race imaginary a symptom of a lack of (public) understanding of AI, which is often associated with threats to “human labour and job displacement” (US8). In this view, “winning the race” is hailed as the solution to imminent public fears; the imaginary shapes future AI visions by conflating fears of falling behind with the techno-political and economic tensions around China’s AI development. The imaginary further builds on the narrative that Europe regulates and the U.S. innovates (US8, US10). Interviewees emphasised that these stakeholders view it as “incredibly important that the U.S. continue to be a dominant player in the AI industry, and so supporting research” (US18) and that “getting Washington, DC, into the game of GDPR or the Digital Services Act is a mistake” (US8).

Yet, the question of who regulates has become a recurring point of contention, leading industry stakeholders to foreground corporate self-regulation through their AI as a key technology for the future imaginary. It emphasises a strong focus on societal good in public communication despite its clear economic core. In contrast to the first imaginary, this imaginary is built on technological terminology and specific application trends around LLMs, AGI and GenAI. AI is strategically mobilised as a public communications term: “There’s a marketing aspect to it that can’t be ignored. AI is a very hot topic. And so there’s obviously some business advantages to describing your work as being on the cutting edge” (US18). However, specific technological language is used in-house: “Publicly, we’re saying AI, but what I find in practice is that people are talking about specific types of model functions” (US9). Similarly, most large industry players now have ethics departments and AI principles to counteract concerns of potential harm by a technology hailed as revolutionary. Internal research departments often play a significant role in shaping these “responsible” AI trajectories (US5, US9). However, interviewees comment on the limited power and agency these departments have. Although the ideal of innovation before regulation still stands, public and political interest is ramping up for larger legislative approaches such as the AI Bill of Rights. In contrast, this AI imaginary envisions Silicon Valley corporations at the heart of regulatory processes, despite ongoing critique of AI tech-solutionism that, since the mass layoffs, also extends to their ethics departments (US11). In its outward appearance and public communication, the sector portrays AI as a key technology for the future, but employees describe AI or specific applications in more modest terms, avoiding the ongoing industry hype.

This discrepancy is also reflected in the AI as a tool imaginary advocated by academia, NGOs, tech industry researchers and journalists. As in Germany, the general consensus reflects a counter-imaginary based on AI as a tool that requires technology-specific critique and is dependent on human agency. Therein lie both its harm and its potential for progress. “People say, ‘AI will do X’. AI will do nothing. AI is a technology; that’s like saying a hammer will do X. A hammer does nothing” (US3). These stakeholders increasingly warn against utopian and dystopian extremes (US1), creating a dispersed imaginary that centres agency back on humans instead of on an “all-knowing” technology. While this imaginary accounts for AI’s potential benefits, it also recognises its potential harm to certain communities (US7). In this context, actors address specific AI technologies whenever possible while acknowledging that the umbrella term AI is (a) for journalists, a helpful unifier or signpost for audiences (US2, US1), (b) for academia, a marketing term with public resonance for education (US6), (c) for NGOs, an entry point into larger conversations on ethics and harms (US7), and (d) across sectors, a funding opportunity (US19, US4, US10). Despite cross-sectoral differences, this imaginary emphasises a general need for regulation and for clear accountability and responsibility around AI technologies, defined through human responsibility. Interviewees agree that “a voice of reason” is needed to counter the fear-mongering and hype to ensure a realistic future vision of successful long-term AI innovation and implementation. Interviewees further echo the critique, directed at proponents of the other two imaginaries, that “there is no long-term vision. Not really. So the longest horizon right now is like the net zero goal, something by 2025, by 2030, by 2040” (US11). For long-term success, future possibilities and implications need to be addressed.

Overall, all sectors are grappling with “the general perception [that] no one wants to be left behind” (US2) while simultaneously trying to shift the perspective to more significant sociopolitical questions that need to be addressed (US6). This can be seen in the various cross-sectoral collaborations and in the push for collaboration to broaden conversations around AI and add more nuanced perspectives on future regulation and implementation (US7, US14, US17). There is also a regularly articulated need “to see a larger spate of civil society actors at the table when we’re talking about regulation” (US4). This reflects the ongoing “problem of whose work is valued and what kinds of expertise are valued” (US11). Lastly, a general conundrum for proponents across imaginaries lies in the tension between their personal positions and the public positioning of the entities with which they are affiliated.

4.3 AI imaginaries in China

When it comes to AI imaginaries, China’s stakeholder landscape is shaped by a strong top-down influence from the central government, complemented by motivated industry players. Unlike in Europe and the U.S., grassroots organisations and independent civil society actors play a limited role in shaping the direction of technological development. The top-down model of incentives, driven by policies and investments, places the party-state at the centre of decision-making and resource allocation. One interviewee recalled advice he received after completing his studies in the U.K.: “If your business isn’t about AI, you might as well stay abroad because China’s policies are only focused on AI and semiconductors” (CN8). This highlights the efficiency with which China’s persistent policy support mobilises resources and talent to push AI development. This top-down dynamic is further reflected in the three dominant Chinese AI imaginaries.

A key Chinese imaginary considers AI as a “trustworthy all-purpose solution” to many of China’s societal challenges. This view is shared by a variety of stakeholder types. Interviewees frequently mention AI’s potential to tackle issues including social inequality, demographic crises and crime. For instance, AI-empowered inclusive financing is often cited as an example of how AI can help solve social and economic inequality, as it promises to help underprivileged populations gain access to loans through big data and advanced AI models. Similarly, the use of facial recognition and surveillance systems is regularly highlighted as contributing to lower crime rates. When asked why privacy and data safety concerns appear less pronounced in China compared to the U.S. or Germany, interviewees offered insights into the role of imagination and trust. “The public’s understanding of privacy and data safety is based on their imagination of potential consequences. But the visible benefits and everyday convenience of these technologies outweigh the need to imagine what could happen [with their data]” (CN4). Another interviewee highlighted the role of trust: “People trust that their data won’t be used for malicious purposes because of China’s strong legal regulations and crackdowns on internet crime. This has built trust, which diminishes privacy concerns compared to other countries” (CN5).

As in other countries, there is also an imaginary that positions China in the context of global competition — China as a stumbling AI superpower — capturing the country’s aspirations and concerns regarding AI development within the global context. AI development is often discussed in the context of national competition and as a source of nationalistic pride. Yet, our interviewees also share concerns about structural and infrastructural issues that limit AI innovation in China. The country’s top-down model benefits large tech giants in China, but as one interviewee pointed out, “real breakthroughs often come from small, innovative teams (such as OpenAI). China’s government model of support is not always friendly to small groups, which limits its effectiveness in fostering true innovation” (CN8). Another interviewee compared the current AI race to China’s earlier internet boom: “During the internet phase, we were able to rely on our own capabilities and the demographic dividend to advance applications and boost economic growth. But in the AI era, particularly with large language models, it’s unclear how much genuine breakthrough we can achieve given the current political and economic climate” (CN7). Another major bottleneck for China’s AI industry is its dependence on semiconductors, a vulnerability laid bare by the ongoing “chip war.” Geopolitical tensions have introduced significant uncertainties in relation to the country’s AI ambitions, emphasising the need for domestic chip production. On this topic, interviewees expressed pessimism. “Making chips is like building airplanes — it requires a massive infrastructure. China has struggled to make its own airplanes, and it faces similar, if not greater, challenges in producing its own chips” (CN1).

The third key imaginary relates to the country’s tech culture, favouring quick returns but leading to long-term harm. OpenAI’s ChatGPT, for example, is frequently mentioned as an innovation that has yet to find a parallel in China, sparking critical reflections on why this might be the case. Many interviewees express frustration at the gap between their aspirations for AI supremacy and the reality of limited groundbreaking advancements. Interviewees describe China’s tech culture as “浮躁” (impatient or restless), driven by trends and quick financial returns rather than long-term, foundational research. One remarked, “We’re constantly chasing trends set by the government or investors. Last year, it was the metaverse; this year, it’s LLMs. Few people are willing to focus on foundational research that doesn’t yield short-term economic benefits” (CN6). This critique of China’s tech culture is widespread, with many believing the sector is too focused on quick monetisation (“变现”) at the expense of true innovation. As one interviewee put it succinctly, “Hot money has caused harm in the long run…China’s tech industry needs idealism” (CN8).

The country’s top-down approach is reflected in how stakeholders envision AI governance across the three imaginaries. Unlike in Germany or the U.S., where civil society and academia play a prominent role in shaping AI regulation and AI imaginaries, Chinese stakeholders emphasise the central government’s responsibility to foster a safe and sustainable tech sector. Some also criticise that efforts to regulate AI and integrate ethics remain largely cosmetic, with ethics teams often viewed as obstacles rather than as valuable participants in projects. Interestingly, Chinese stakeholders often idealise the West’s approach to AI governance. The EU’s GDPR, for example, is frequently cited as a “perfect protocol”.

5 Cross-country negotiations of their position in the global AI environment

The portrayed imaginaries bear the marks of the distinct sociocultural and political contexts of the three case studies, yet across countries they share one similar dominant imaginary: that of an AI race. This imaginary reflects the ongoing global negotiation for the political and economic power ascribed to AI technology [Bradford, 2023]. Externally, it plays on nationalistic ideals of being a global leader or superpower. Internally, however, it reflects a shared fear of being left behind, of not keeping up, and therefore of losing relevance and political standing due to a loss of economic power. Stakeholders mobilise this imaginary to shape AI discourses for their benefit.

This ongoing AI imposter syndrome is regularly mentioned by experts and stakeholder groups in all three countries, yielding a strong motif of comparison and competition. Europe, for example, is often positioned in opposition to the U.S.-China rivalry [Perthes, 2021]. This AI triangle is regularly referred to by expert interviewees themselves. With the ongoing developments, this competitiveness leaves stakeholders feeling left behind. The U.S. discourse quickly invokes the fear of a “West against the rest” scenario, especially regarding China’s AI development outpacing the U.S.; dominance in tech development, and specifically in AI innovation, becomes enmeshed with political supremacy within these debates. Regulating, and potentially hindering, AI innovation becomes a major political issue. This argument seems especially relevant when counterposed with China’s rather identical fear: interviewees cited OpenAI’s ChatGPT as embodying a culture of innovation and Europe’s GDPR as an educational policy development, reiterating the same fear of losing out in the global play for AI dominance.

However, there are also differences in the AI race imaginaries across the countries. These reflect the sociocultural and political situatedness of imaginary building and adaptation, as well as the active positioning of stakeholders within the larger global AI environment. Various German stakeholders highlight the importance of European integration for Germany’s political relevance vis-à-vis the much larger U.S. and Chinese economic markets. Additionally, the European approach to digital development is characterised by a strong regulatory focus. China’s interest in GDPR and the U.S. narrative of “Europe regulates and the U.S. innovates” highlight the influence of the GDPR beyond Europe. The European emphasis on AI sovereignty aligns with the general European theme of sovereignty in digital governance [Pohle & Santaniello, 2024] and tech development [Pohle & Thiel, 2020].

In contrast, the U.S. buys into a historically well-known cultural narrative of “being #1” based on American exceptionalism [O’Connor et al., 2022]. This is reflected in various strategies, such as research funding and a strong entrepreneurial ecosystem favouring innovative ideas through philanthropic and tech investors [Calimanu, 2023]. Asserting their need for political dominance through technological excellence fits the general global political positioning. This further exacerbates the long-standing rivalry with China, as indicated by the Chinese interviewees’ reflections on China’s push to be perceived as a global AI superpower.

Although the U.S. and Europe have become relevant benchmarks for AI development and regulation, China’s “We against the West” rhetoric plays into a historicised differentiation of East versus West. It reflects a long-standing competition with the U.S. in the tech sector that has previously led to product and company bans and constant disputes over espionage [Girishankar, 2024; Pan et al., 2024]. China has, furthermore, challenged the U.S. claim to global dominance by surpassing it in various sectors, reinforcing the ongoing rivalry [Lippert & Perthes, 2020]. While Europe is considered part of the West, the discourse there seems less polarised and less rife with black-and-white positioning. Although it has regained relevance with the current AI summer, the AI race imaginary has a long history of debates over similar issues, implications and possibilities.

Considering that these AI imaginaries are historically informed and political, it is important to question the implications of this geopolitical tension. Such a competitive environment fuels concerns, emphasised by interviewees, about a new “great game”, not unlike the 20th-century Cold War competition around nuclear weapons [Schindler et al., 2023]. However, while all three countries push their own interests and values regarding future AI development and implementation, none of this happens in isolation. Ongoing cross-sectoral cooperation, involving industry and academia but also civil society and governmental actors, emphasises the entanglement of stakeholder groups and of the three countries in the global AI environment. Thus, a better understanding of the ongoing negotiations of current AI imaginaries can inform future trajectories, given that stakeholders across the three countries stress the need not to repeat the mistakes made with other digital or technological developments.

6 Stakeholders and sectoral co-dependencies

Concerning the relationships between stakeholders, all countries exhibit relevant sectoral co-dependencies, despite the differences in interview distribution. The academia-to-industry trajectory is especially prevalent across all countries, yet with sociocultural and technopolitical differences. Academia-industry collaborations have developed into the cornerstone of most AI development and AI industry expansion. From funding to recruiting new labour to start-up culture and knowledge exchange, the connection between these two sectors was the strongest, and the most volatile due to differences in associated values.

First, the German stakeholder landscape makes a clear distinction between academic work and AI industry development regarding ethical concerns and societally beneficial innovation. However, the larger stakeholder landscape highlights connections between industry stakeholders and academic experts on various levels of the AI environment. While Siemens, VW and other German tech companies actively fund their own AI labs, their members often hold professorships and other academic positions at established universities. Simultaneously, the German Research Center for Artificial Intelligence (DFKI), which is led by professors with different sectoral expertise across the German university system, collaborates directly with industry stakeholders. While some research collaborations are declined due to ethical concerns, especially regarding weapon development, academia-industry collaborations provide an important funding source for academic research centres and AI labs. These research collaborations often encompass fundamental research on new AI technologies not yet viable for market adoption. Funding by industry stakeholders for academic institutions therefore plays a pivotal role in providing a well-educated labour force and the necessary knowledge for AI development and early innovative trajectories. In contrast, industry connections to advocacy groups exist but are often based on funding alone, curating a societal good focus for public perception rather than fostering a reciprocal relationship.

The U.S. tech sector is built even more strongly on this symbiotic relationship between universities, academic institutes and the tech industry. Silicon Valley developed out of this collaborative approach [Etzkowitz, 2022]. However, academic experts were more concerned with the ongoing brain drain into industry, as universities have historically taken on non-commercial basic research not relevant or economically viable for industry, including ethical and experimental work [Jurowetzki et al., 2021]. The “philanthropic” work of the tech sector has proved a double-edged sword. It provides vital funding for large research institutes such as BAIR and Stanford’s HAI, yet it also offers access to the next generation of industry recruits. This concurs with the call for a “voice of reason” by various interviewees. Many academic stakeholders have pushed to establish a future imaginary around “AI as a tool” rather than a “key technology”, a framing that colours AI development as utopian instead of setting realistic expectations for tech literacy and political safeguarding. This dichotomy between securing private funding for critical and relevant non-commercial work, while remaining a crucial voice in public AI discourse, is further reflected in the third connected sector: advocacy organisations are closely tied to industry actors in the U.S., with various interviewees having repeatedly moved between academia, industry and advocacy. This creates an interesting trifecta of deeply connected sectors. Yet industry has a strong hold on academia and advocacy because it constitutes a key funding source and potential career trajectory.

In China, the government is the dominant stakeholder shaping AI development; academia plays a passive and ambivalent role in the AI stakeholder landscape. Various professors and other academics are prominently represented in the AI industry. There is consensus on the importance of academic research on AI covering relevant topics neglected by the Chinese AI sector, which is often critiqued for emphasising the technology’s economic over its societal benefits. At the same time, many interviewed experts lament that academia lags behind industry in AI research. In recent years, the government has pushed for stronger ethics and security frameworks in China’s AI sector. Despite the ambition to achieve high-level development around AI ethics [Fukuda-Parr & Gibbons, 2021; Jobin et al., 2019], the applicability of ethical frameworks to current AI innovation remains insufficient.

While imaginaries are neither co-constructed nor held across all stakeholder groups concurrently, the academia-industry co-dependency is especially clear across all three countries despite different localised iterations of the phenomenon. This trajectory also calls into question the typically assigned roles of these stakeholders in societal perception and reifies the powerful position of corporate stakeholders in AI imaginaries and AI development. The often-perceived role of academic and advocacy work in keeping industry in check needs to be challenged.

7 Conclusion

In this paper, we have set out to better understand how different stakeholders establish their vision of AI as a key sociotechnical phenomenon. Since this varies greatly across different regions and countries, we have addressed this by comparing China, the U.S. and Germany as particularly interesting and differing cases. The analysis confirms previous general findings of corporate dominance in AI discourse and development, yet this dominance does not occur uniformly across all countries and themes. In the U.S., the discourse differs across several geographic AI centres. The German case reveals a strong focus on EU policy compliance and distinct geographical distributions of stakeholders with different AI imaginaries. Lastly, the Chinese case emphasises a congruence with party policies, thus minimising local specificities in the AI discourse. Across all countries, the AI race has been positioned as a particularly powerful and pertinent imaginary that not only mobilises national activities but also allows powerful stakeholders, namely industry, to claim resources and beneficial regulation in the name of global competition.

Given the vast resources that flow into AI development globally, and, at the same time, the wide spectrum of potential interpretations and implementations of AI, such analyses of imaginaries of AI are highly relevant. It is these visions that drive AI development and establish the technology as self-evident [Suchman, 2023]. While previous studies have shown strong industry dominance in this process, our investigation into the role of stakeholders in China, the U.S. and Germany gives these findings much more nuance. For example, we were able to reconstruct how government-led imaginaries have been adopted by corporate stakeholders in Germany and China. This allows us to better understand how AI as an object of (science) communication is actively negotiated between powerful stakeholders and how this process and the power relations vary across different countries and their different sociopolitical structures.

As the empirical work faced limitations due to restricted access to different Chinese stakeholders, comparisons between the countries are constrained accordingly. While we have sought to reflect these differences in the analyses, future research needs to continue unpacking how powerful stakeholders mobilise AI, and the high level of imagination around this technology, for their own interests. In this negotiation of different future visions of AI and society, stakeholders in different countries have differing amounts of power and resources. As AI is being integrated ever more deeply into society, future research can and needs to identify and facilitate imaginaries and communication strategies that have the potential to uphold the public interest, even against powerful corporate and governmental interests.

Acknowledgments

This work was supported by the German Research Foundation (DFG) under Grant 450649594 (“Imaginaries of AI”) as well as the Swiss National Science Foundation (SNSF) under Grant number 100017L_197552.

References

Anderson, B. (1983). Imagined communities: reflections on the origin and spread of nationalism. Verso.

Bareis, J., & Katzenbach, C. (2021). Talking AI into being: the narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47, 855–881. https://doi.org/10.1177/01622439211030007

Beckett, C., & Yaseen, M. (2023). Generating change. A global survey of what news organisations are doing with AI. POLIS, The London School of Economics and Political Science. https://www.journalismai.info/research/2023-generating-change

Borchardt, A., Simon, F., Zachrison, O., Bremme, K., Mulhall, E., & Johanny, Y. (2024). News report 2024: trusted journalism in the era of generative AI. European Broadcasting Union. https://www.ebu.ch/guides/open/report/news-report-2024-trusted-journalism-in-the-age-of-generative-ai

Bradford, A. (2023). Digital empires: the global battle to regulate technology. Oxford University Press.

Brause, S. R., Schäfer, M., Katzenbach, C., Mao, Y., Zeng, J., Richter, V., & Dergacheva, D. (forthcoming). Sociotechnical imaginaries and public communication: analytical framework and empirical illustration using the case of artificial intelligence.

Brause, S. R., Zeng, J., Schäfer, M. S., & Katzenbach, C. (2023). Media representations of artificial intelligence: surveying the field. In Handbook of critical studies of artificial intelligence (pp. 277–288). Edward Elgar Publishing. https://doi.org/10.4337/9781803928562.00030

Brennen, J. S., Howard, P. N., & Nielsen, R. K. (2018). An industry-led debate: how U.K. media cover artificial intelligence. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-v219-d676

Breuer, S., & Müller, R. (2024). Digitalization, AI, and robotics for good care and work? German policy imaginaries of healthcare technologies. Science and Public Policy, 51, 951–962. https://doi.org/10.1093/scipol/scae036

Calimanu, S. (2023). Why the U.S. leads the world in entrepreneurship and innovation. ResearchFDI. https://researchfdi.com/resources/articles/why-the-us-leads-the-world-in-entrepreneurship-and-innovation/

Cave, S., & Dihal, K. (2019). Hopes and fears for intelligent machines in fiction and reality. Nature Machine Intelligence, 1, 74–78. https://doi.org/10.1038/s42256-019-0020-9

Das, A., Cordoba, D., Kristiansen, S., Velardi, S., Wonneberger, A., Yamaguchi, T., & Selfa, T. (2024). Sociotechnical imaginaries of gene editing in food and agriculture: a comparative content analysis of mass media in the United States, New Zealand, Japan, the Netherlands and Canada. Public Understanding of Science. https://doi.org/10.1177/09636625241287392

De Boer, S., Jansen, B., Bustos, V. M., Prinse, M., Horwitz, Y., & Hoorn, J. F. (2020). Social robotics in eastern and western newspapers: China and (even) Japan are optimistic. International Journal of Innovation and Technology Management, 18. https://doi.org/10.1142/s0219877020400015

Etzkowitz, H. (2022). Entrepreneurial university icon: Stanford and Silicon Valley as innovation and natural ecosystem. Industry and Higher Education, 36, 361–380. https://doi.org/10.1177/09504222221109504

Felt, U., & Öchsner, S. (2019). Reordering the “world of things”: the sociotechnical imaginary of RFID tagging and new geographies of responsibility. Science and Engineering Ethics, 25, 1425–1446. https://doi.org/10.1007/s11948-018-0071-z

Fischer, S., & Puschmann, C. (2021). Wie Deutschland über Algorithmen schreibt: Eine Analyse des Mediendiskurses über Algorithmen und Künstliche Intelligenz (2005–2020). https://doi.org/10.11586/2021003

Fukuda-Parr, S., & Gibbons, E. (2021). Emerging consensus on ‘ethical AI’: human rights critique of stakeholder guidelines. Global Policy, 12, 32–44. https://doi.org/10.1111/1758-5899.12965

Girishankar, N. (2024). Staying ahead in the global technology race: a roadmap for economic security. https://features.csis.org/global-tech-race

Gorwa, R. (2022). Who are the stakeholders in platform governance? Yale Journal of Law and Technology, 24, 493–509. https://doi.org/10.31235/osf.io/ayx8h

Haenlein, M., & Kaplan, A. (2019). A brief history of Artificial Intelligence: on the past, present and future of Artificial Intelligence. California Management Review, 61, 5–14. https://doi.org/10.1177/0008125619864925

Hälterlein, J. (2024). Imagining and governing artificial intelligence: the ordoliberal way — an analysis of the national strategy ‘AI made in Germany’. AI & Society. https://doi.org/10.1007/s00146-024-01940-0

Han, H. (2018). Legal governance of NGOs in China under Xi Jinping: reinforcing divide and rule. Asian Journal of Political Science, 26, 390–409. https://doi.org/10.1080/02185377.2018.1506994

Hornung, G., & Schnabel, C. (2009). Data protection in Germany I: the population census decision and the right to informational self-determination. Computer Law & Security Review, 25, 84–88. https://doi.org/10.1016/j.clsr.2008.11.002

Jasanoff, S. (2015). Future imperfect: science, technology and the imaginations of modernity. In S. Jasanoff & S. Kim (Eds.), Dreamscapes of modernity (pp. 1–33). University of Chicago Press. https://doi.org/10.7208/chicago/9780226276663.003.0001

Jasanoff, S., & Kim, S.-H. (Eds.). (2015). Dreamscapes of modernity: sociotechnical imaginaries and the fabrication of power. The University of Chicago Press.

Jasanoff, S., & Kim, S.-H. (2009). Containing the atom: sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva, 47, 119–146. https://doi.org/10.1007/s11024-009-9124-4

Ji, X., Kuai, J., & Zamith, R. (2024). Scrutinizing algorithms: assessing journalistic role performance in Chinese news media’s coverage of Artificial Intelligence. Journalism Practice, 18, 2396–2413. https://doi.org/10.1080/17512786.2024.2336136

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2

Jurowetzki, R., Hain, D., Mateos-Garcia, J., & Stathoulopoulos, K. (2021). The privatization of AI research(-ers): causes and potential consequences — from university-industry interaction to public research brain-drain? https://doi.org/10.48550/arXiv.2102.01648

Kao, K.-T. (2024). From robodebt to responsible AI: sociotechnical imaginaries of AI in Australia. Communication Research and Practice, 10, 387–397. https://doi.org/10.1080/22041451.2024.2346420

Kim, J. (2023). Traveling AI-essentialism and national AI strategies: a comparison between South Korea and France. Review of Policy Research, 40, 705–728. https://doi.org/10.1111/ropr.12552

Lippert, B., & Perthes, V. (2020). Strategic rivalry between United States and China. Stiftung Wissenschaft und Politik (SWP). https://doi.org/10.18449/2020RP04

Mager, A. (2017). Search engine imaginary: visions and values in the co-production of search technology and Europe. Social Studies of Science, 47, 240–262. https://doi.org/10.1177/0306312716671433

Mager, A., & Katzenbach, C. (2021). Future imaginaries in the making and governing of digital technology: multiple, contested, commodified. New Media & Society, 23, 223–236. https://doi.org/10.1177/1461444820929321

Marres, N., Castelle, M., Gobbo, B., Poletti, C., & Tripp, J. (2024). AI as super-controversy: eliciting AI and society controversies with an extended expert community in the U.K. Big Data & Society, 11. https://doi.org/10.1177/20539517241255103

Mügge, D. (2024). EU AI sovereignty: for whom, to what end and to whose benefit? Journal of European Public Policy, 31, 2200–2225. https://doi.org/10.1080/13501763.2024.2318475

O’Connor, B., Cox, L., & Cooper, D. (2022). The ideology of American exceptionalism: American nationalism’s nom de plume. Journal of Political Ideologies, 29, 634–655. https://doi.org/10.1080/13569317.2022.2112126

Paltieli, G. (2022). The political imaginary of National AI Strategies. AI & Society, 37, 1613–1624. https://doi.org/10.1007/s00146-021-01258-1

Pan, X., Aiwen, L., & Zhenzhen, C. (2024). Contesting coercion: U.S.-China strategic competition, the middle technology trap and Chinese government-guided funds. Asian Review of Political Economy, 3. https://doi.org/10.1007/s44216-024-00034-4

Pereira, V. J., & Hargreaves, T. (2024). Are you thinking what I’m thinking? The role of professionals’ imaginaries in the development of smart home technologies. Futures, 163, 103458. https://doi.org/10.1016/j.futures.2024.103458

Perthes, V. (2021). Dimensions of rivalry: China, the United States and Europe. China International Strategy Review, 3, 56–65. https://doi.org/10.1007/s42533-021-00065-z

Pohle, J., & Santaniello, M. (2024). From multistakeholderism to digital sovereignty: toward a new discursive order in internet governance? Policy & Internet, 16, 672–691. https://doi.org/10.1002/poi3.426

Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review, 9. https://doi.org/10.14763/2020.4.1532

Puaschunder, J. M. (2019). Stakeholder perspectives on Artificial Intelligence (AI), robotics and big data in healthcare: an empirical study. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3497261

Richter, V., Dergacheva, D., Katzenbach, C., & Kuznetsova, V. (forthcoming). Who’s driving the AI hype in its formative phase? A longitudinal analysis of stakeholders in the U.S. and German AI discourse on Twitter 2012–2021.

Richter, V., Katzenbach, C., & Schäfer, M. S. (2023). Imaginaries of artificial intelligence. In Handbook of critical studies of artificial intelligence (pp. 209–223). Edward Elgar Publishing. https://doi.org/10.4337/9781803928562.00024

Rohde, F., & Santarius, T. (2023). Emerging sociotechnical imaginaries — how the smart home is legitimized in visions from industry, users in homes and policymakers in Germany. Futures, 151, 103194. https://doi.org/10.1016/j.futures.2023.103194

Schäfer, M. S. (2009). From public understanding to public engagement: an empirical assessment of changes in science coverage. Science Communication, 30, 475–505. https://doi.org/10.1177/1075547008326943

Schindler, S., Alami, I., DiCarlo, J., Jepson, N., Rolf, S., Bayırbağ, M. K., Cyuzuzo, L., DeBoom, M., Farahani, A. F., Liu, I. T., McNicol, H., Miao, J. T., Nock, P., Teri, G., Vila Seoane, M. F., Ward, K., Zajontz, T., & Zhao, Y. (2023). The second cold war: U.S.-China competition for centrality in infrastructure, digital, production and finance networks. Geopolitics, 29, 1083–1120. https://doi.org/10.1080/14650045.2023.2253432

Scott, I. A., Carter, S. M., & Coiera, E. (2021). Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health & Care Informatics, 28, e100450. https://doi.org/10.1136/bmjhci-2021-100450

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10. https://doi.org/10.1177/20539517231206794

Taylor, C. (2003). Modern social imaginaries. Duke University Press.

Vicente, P. N., & Dias-Trindade, S. (2021). Reframing sociotechnical imaginaries: the case of the Fourth Industrial Revolution. Public Understanding of Science, 30, 708–723. https://doi.org/10.1177/09636625211013513

Wodak, R. (2015). Critical discourse analysis, discourse-historical approach. In K. Tracy (Ed.), The international encyclopedia of language and social interaction (1st ed., pp. 1–14). John Wiley & Sons Inc.

About the authors

Vanessa Richter is a PhD candidate at the University of Amsterdam and a researcher at the Centre for Media, Communication & Information Research (ZeMKI) at the University of Bremen. She has a background in social media, journalism, and cultural studies. Her research interests focus on imaginaries around technology such as social media platforms and AI systems.

E-mail: vrichter@uni-bremen.de Bluesky: @v-richter

Christian Katzenbach is Professor of Media and Communication at ZeMKI, University of Bremen and associated researcher at the Alexander von Humboldt Institute for Internet and Society (HIIG). He leads the Lab Platform Governance, Media, and Technology, and the MA programme Digital Media and Society. His research addresses the formation of platforms and their governance, the discursive and political shaping of “Artificial Intelligence” (AI), and the increasing automation of communication.

E-mail: katzenbach@uni-bremen.de Bluesky: @ckatzenbach

Jing Zeng is an Assistant Professor of Computational Communication and Social Science at the Department of Media and Communication Research (IKMZ) at the University of Zurich, Switzerland.

E-mail: j.zeng@ikmz.uzh.ch