1 AI will fundamentally transform science communication

Advancements in artificial intelligence (AI) have reshaped the way we communicate and who we communicate with. While previous technologies primarily served as “channels” for human interaction, AI is increasingly designed as an active “communicative subject” and a “life-like communication partner” [Guzman & Lewis, 2020, p. 73]. Esposito highlights the critical implications of this shift, emphasizing that “the problem is not that the machine is able to think but that it is able to communicate” [2017, p. 250]. This paradigmatic shift within the emerging field of human-machine communication (HMC), which focuses on the “creation of meaning among humans and machines” [Guzman, 2018, p. 1; Spence, 2019], has led to new conceptualizations of AI’s communicative role, including terms such as “automated media” [Andrejevic, 2019], “communicative robots” [Hepp, 2020], “journalistic AI” [Helberger et al., 2022], or “communicative AI” [Guzman & Lewis, 2020].

While AI is not a new phenomenon [McCarthy et al., 2006], the rise of generative AI (GenAI) — exemplified by large language models (LLMs) such as GPT, Gemini, and DeepSeek — has sparked new socio-technical imaginaries [Vrabič Dežman, 2024] and fueled yet another wave of AI hype [Katzenbach & Pentzold, 2024]. With its ability to generate original text, images, and (audio)visual content using deep-learning models trained on vast datasets, GenAI is poised to transform — or even “disrupt” [Golan, 2023] — key sectors of society, including economics [B. Chen et al., 2023], education [Michel-Villarreal et al., 2023], journalism, PR and media [Cools & Diakopoulos, 2024; Guzman & Lewis, 2024], and medicine [Rajpurkar et al., 2022]. Beyond these fields, its transformative power extends to the very processes of knowledge production and dissemination [Jungherr & Schroeder, 2023], shaping how information is created, shared, and interpreted.

The disruptive and transformative effects of AI are particularly evident in science communication [Biyela et al., 2024; Schäfer, 2023], i.e. the public communication from and about science to non-expert audiences such as citizens or stakeholders, encompassing its production, content, use, and effects [Davies & Horst, 2016; Schäfer et al., 2015].

2 The promises and perils of generative AI

Predictions of how GenAI will transform science communication vary widely — as is often the case with emerging technologies — ranging from highly optimistic to deeply dystopian [cf. Schäfer, 2023]. Proponents highlight GenAI’s ability to simplify complex topics [Hegelbach, 2023] by using clear language and structure [Skjuve et al., 2023], as well as its capacity to facilitate interactive learning by enabling users to ask follow-up questions [Wissenschaft im Dialog, 2023]. Its ability to engage users in human-like interactions further enhances its communicative potential [K. Chen et al., 2024]. In addition, GenAI has been recognized for its efficiency in summarizing scientific publications and findings [Lund et al., 2023], generating media releases and journalistic articles [Tatalovic, 2018], and tailoring content to individual users and their specific needs [Karpouzis et al., 2024]. Moreover, recent surveys suggest that LLMs such as ChatGPT may serve as “novel information intermediaries” [Greussing et al., 2025, p. 2] that users can turn to for science-related information or factual questions [Fletcher & Nielsen, 2024; Schäfer et al., 2024].

Critics, on the other hand, argue that LLMs generate responses based on complex statistical patterns in their training data rather than on an intrinsic understanding of the content. As a result, their outputs can be inaccurate [Gravel et al., 2023] while appearing convincingly factual [De Angelis et al., 2023], lack up-to-date information [Dwivedi et al., 2023], or even reference non-existent scientific studies [Perkins, 2023]. A study by Spitale et al. [2023] highlights that while GPT models can provide useful health-related information, they can also contribute to misinformation. Others have shown that while GenAI can provide accurate information and nuanced perspectives on fundamental issues such as the existence of climate change, the validity of astrology, or the evaluation of the replication crisis, it is also skewed towards STEM fields and positivist approaches [Volk et al., 2024]. Although some users recognize these limitations [Skjuve et al., 2023], identifying inaccuracies often requires domain-specific knowledge that many lack. Beyond factual reliability, LLM-generated content has been criticized for flawed logical reasoning, a lack of critical reflection, and unoriginality [Dwivedi et al., 2023]. Moreover, the proprietary nature of AI systems — where companies restrict access to the technology’s inner workings [van Dis et al., 2023] — raises concerns about transparency and explainability [Dwivedi et al., 2023]. Critics also warn that GenAI can reproduce or amplify biases present in its training data [Corless, 2023], leading to what Teubner et al. [2023, p. 99] describe as the “possibility of infinite reproduction of the same old trivialities and stereotypes”.

To substantiate and validate these claims, a broader evidence base and further research are needed to describe, explain, assess, and potentially predict the characteristics, drivers, and impacts of AI in science communication.

3 Developments and characteristics of research on science communication and AI

An assessment of research at the nexus of science communication and AI shows that the field is still in its infancy and has several gaps. To gauge how many studies on AI in science communication have already been published, we drew on the Scopus database and on the three leading science communication journals: Science Communication, Journal of Science Communication (JCOM), and Public Understanding of Science. This choice was driven by the need for a comprehensive and field-relevant literature base. We selected Scopus because it is one of the largest multidisciplinary repositories available and covers a wide range of publications, offering about 20% more coverage than Web of Science [Elsevier, 2025; Falagas et al., 2008]. The three journals, in turn, are widely recognized as the most prominent outlets in the field [Guenther & Joubert, 2017]; they have rigorous peer-review processes and high citation rates, which attest to their influence and credibility, and they publish a mix of theoretical and empirical studies.

We examined published studies on the nexus between science communication and AI by searching, first, for the number of articles in the Scopus database mentioning “science communication” and “‘science communication’ and ‘artificial intelligence’” in the title, keywords, or abstract, as well as for the number of articles on AI-related topics in the leading science communication journals (see Figure 1). The Scopus search included the following document types: “Article”, “Book chapter”, “Editorial”, “Conference Paper”, “Book”, “Letter”, and “Note”. The first search covered the entire database; the second was limited to the subject areas “Social Sciences”, “Psychology”, “Arts and Humanities”, “Business, Management and Accounting”, and “Economics, Econometrics and Finance”; and the third was limited to the subject area “Social Sciences”. The search was conducted in January 2025.
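For transparency, these searches can be expressed in Scopus’ advanced-search syntax. The field codes TITLE-ABS-KEY, DOCTYPE, and SUBJAREA are standard Scopus operators; the query strings shown here are an approximate reconstruction of our searches rather than a verbatim record. The most restrictive variant, for instance, would look roughly as follows:

    TITLE-ABS-KEY ( "science communication" AND "artificial intelligence" )
        AND DOCTYPE ( ar OR ch OR ed OR cp OR bk OR le OR no )
        AND SUBJAREA ( soci )

The DOCTYPE codes map onto the seven document types listed above (article, book chapter, editorial, conference paper, book, letter, and note). Omitting the SUBJAREA clause yields the database-wide search, while extending it to SUBJAREA ( soci OR psyc OR arts OR busi OR econ ) yields the five-subject-area variant.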

In Scopus and the three journals, research on science communication and AI is rising, albeit from a very low starting point (Figure 1).


Figure 1: Annual number of articles in the Scopus database and leading science communication journals on science communication and AI. Left: annual number of articles in the Scopus database mentioning “science communication” (grey) and “‘science communication’ and ‘artificial intelligence’” (red) in the title, keywords, or abstract. Right: annual number of articles in three scholarly journals that focus on science communication (Journal of Science Communication, Science Communication and Public Understanding of Science; grey) and the number of those articles that mention “artificial intelligence” (red) in the title, keywords, or abstract.

For a more in-depth analysis, we conducted targeted searches in the three leading science communication journals. Within these journals, we searched for articles mentioning “artificial intelligence” in the title, abstract, or keywords, restricting our search to peer-reviewed, English-language journal articles published up to 2024. Each article was reviewed to ensure that both science communication and artificial intelligence were central themes rather than peripheral mentions. Studies that only briefly referenced AI without substantive discussion in a science communication context were excluded. The coding was conducted by two independent coders: one performed the initial coding, while the other carried out verification coding to ensure accuracy and consistency.
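We report no quantitative intercoder agreement statistic for this verification step. Purely as an illustration of how such double coding can be audited, the following minimal Python sketch computes percentage agreement and Cohen’s kappa for two coders; the category labels and codings are hypothetical placeholders, not our data:

    # Minimal sketch: comparing initial coding and verification coding
    # via percentage agreement and Cohen's kappa (hypothetical data).
    from collections import Counter

    coder_a = ["about", "with", "ecosystem", "about", "about", "with"]
    coder_b = ["about", "with", "about", "about", "about", "with"]

    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement from each coder's marginal category frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    kappa = (observed - expected) / (1 - expected)
    print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")  # 0.83, 0.70

Kappa corrects raw agreement for the agreement expected by chance given each coder’s marginal category frequencies, which is why it is lower than the raw agreement score.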

A total of 35 studies were identified in the three leading science communication journals. Those studies were published between 2002 and 2024, but only two of them predate 2020 (both theoretical in nature). In contrast, over 40% of the articles were published in 2024 alone, marking a clear shift in research attention towards AI, fueled by the advent of publicly accessible GenAI tools such as ChatGPT.

In addition to sheer volume, we also assessed the origin of authors, thematic foci, and research designs of existing studies (Figure 2). Geographically, most studies originate from authors in the United States, followed by Germany and the United Kingdom. This pattern reflects a well-documented Western bias in academic publishing, with authors from North America and Europe accounting for most of the work published in high-impact journals [Guenther & Joubert, 2017; Schäfer, 2012]. The distribution also suggests that research on this topic is primarily concentrated in countries with strong AI research hubs and a well-established tradition of interdisciplinary technological inquiry.


Figure 2: Percentage distribution of countries of origin, study types, focus areas of the studies, and study topics (N = 35).

Regarding topics, the most prevalent focus across studies was AI in general, along with related technologies such as autonomous vehicles and machines. Other prominent themes include health-related issues, science communication in general, and environmental topics — particularly climate change (see Table 1 in the supplementary material). A closer look at the type of AI examined shows that approximately 17% of the studies focused specifically on generative AI, with a particular emphasis on ChatGPT [e.g., Volk et al., 2024]. Two further studies addressed LLMs in general, and the remainder referred to AI more broadly.

The analysis of AI-related research focus areas reveals clear patterns. The majority of studies (71%) examined communication about AI, reflecting an emphasis on how AI is discussed, framed, and understood in public and academic discourse. In contrast, 14% investigated communication with AI, exploring interactions between humans and AI systems, and another 14% examined the impact of AI on science communication ecosystems, signaling an emerging but still limited interest in how AI is transforming the dissemination and interpretation of scientific knowledge. Notably, none of the studies in the sample addressed AI’s impact on methodological approaches, highlighting a potential research gap in how AI might influence research designs and analysis in science communication studies [Schäfer, 2023]. Analyzed through the lens of Lasswell’s communication model [Lasswell, 1948], most research focused on what content is communicated about AI or examined the audience’s perspective (“to whom”) — indicating that scholars are primarily concerned with AI representations and public perceptions. Fewer studies, by comparison, have explored the effects of AI-related communication or the ways in which AI itself is communicated. Research on (AI) actors and communicators also remains scarce.

In terms of study type, the majority of studies were empirical (77%), while the remainder (23%) were theoretical. The studies applied an array of theories, with framing theory being the most common [e.g., Zeng et al., 2022]. Other theoretical perspectives included motivated reasoning, social representation theory, and agenda-setting theory, reflecting a multidisciplinary approach to studying AI-related communication.

Among the empirical studies, eighteen relied on standardized quantitative methods, five employed experimental designs, five used qualitative approaches, and four combined qualitative and quantitative methods. The strong prevalence of standardized quantitative methods suggests a tendency toward large-scale data analysis and survey-based research, while the relatively small number of experimental and qualitative studies indicates that causal mechanisms in AI communication remain underexplored and that in-depth insights into AI-related communication are lacking. Nearly 30% of the studies combined several methods, such as surveys and content analysis [e.g., K. Chen et al., 2024] or computational data collection using API scraping and automated content analysis [e.g., Zeng et al., 2022], to analyze AI discourse more comprehensively. The studies employed a diverse range of data collection methods, with surveys, interviews, and computational data collection techniques — such as API scraping — being the most commonly used approaches. In terms of data analysis, automated and manual quantitative content analysis were the most frequently applied methods, though qualitative content analysis was also used in some cases.

The type and size of samples varied across studies, reflecting the different methodological approaches and research objectives. Some studies relied on small-scale samples, such as a survey of 20 AI researchers examining the role of literature in artificial intelligence research [Dillon & Schaffer-Goddard, 2023]. In contrast, others employed large-scale representative surveys, such as in the United States, where researchers investigated the effects of text-based frames and visuals on public support for AI [Bingaman et al., 2021], or in Singapore, where a study explored how news media influences public acceptance of AI-powered autonomous passenger drones [Cheung & Ho, 2024]. Beyond surveys, several studies analyzed AI-generated outputs to investigate human-AI interaction. For example, one study examined how users’ experiences and learning outcomes varied across social groups when engaging in dialogues with GPT-3 on controversial science and social issues such as climate change and the Black Lives Matter movement [K. Chen et al., 2024]. Meanwhile, other studies conducted large-scale discourse analyses, examining tens of thousands of online posts to assess public debates about AI on platforms like WeChat and People’s Daily Online [Zeng et al., 2022].

4 A roadmap for research on science communication and AI

The rapid growth and emerging differentiation of studies on science communication and AI — also manifested in the collection of ten articles in the Special Issue “Science communication in the age of artificial intelligence” — demonstrate the rising scholarly interest and expansion of the field. Yet our analysis also highlights that the field remains in its early stages, with notable gaps and biases. While studies have increasingly examined how AI is communicated, public perceptions of AI, and its broader societal implications, research remains geographically concentrated, methodologically limited, and often focused on specific applications rather than the broader systemic impact of AI on science communication itself. This underscores the need for a more comprehensive, structured research agenda that moves toward a deeper understanding of AI as both an object and agent of science communication [cf. Choi et al., 2024; Klein-Avraham et al., 2024; Schäfer, 2023].

As AI technologies continue to evolve and integrate into the production and dissemination of, and the engagement with, science-related content, researchers must critically assess their specific characteristics as well as their broader implications for science communication ecosystems, research methodologies, and theory-building. In our view, the field should pursue four focus areas of research:

(1) Communication about AI. Scholars should analyze AI as an object of science communication, similar to studies analyzing communication and discourses about nanotechnology [e.g., Runge et al., 2013], biotechnology [e.g., Nisbet & Lewenstein, 2002], climate science [e.g., Hase et al., 2021], or other science-related issues. Such studies may focus, first, on the producers of AI-related communication, i.e., on the communication efforts and strategies of scholars, scientific organizations and institutions of higher education, but also of tech companies, regulators, NGOs, and other stakeholders [e.g., Richter et al., 2023]. Second, they could focus on intermediaries of communication, such as journalists, social media influencers, or tech platforms, and their influence on communication about AI [Nishal & Diakopoulos, 2023]. Third, they may analyze public communication about generative AI in legacy media, social media, public imagery, fictional accounts etc. [Brause et al., 2023]. Fourth, they could focus on consumption, i.e., on the perceptions, use, and effects of AI-related communication — among citizens, but also among stakeholders, regulators, researchers, and others [Begenat & Kero, 2023; Lermann Henestrosa et al., 2023; Starke & Lünich, 2020].

(2) Communication with AI. Scholars should also analyze AI as an agent of (science) communication. After all, AI differs from other objects of science communication because the technology itself has “increased agency” as a form of “communicative AI” [Guzman & Lewis, 2020, p. 79; also Hepp et al., 2022], making analyses of human-AI interactions highly relevant [e.g., Dogruel & Dickel, 2022]: how people interact with (generative) AI and evaluate it, how the technology responds and adapts, and what the results of these interactions are on both sides are some of the most interesting research questions of the near future [B. Chen et al., 2023; Lermann Henestrosa & Kimmerle, 2024]. This includes a focus on reconstructing the — often opaque and proprietary [Buhmann & Fieseler, 2021] — inner workings of communicative AI, its underlying values and likely biases [Volk et al., 2024; cf. Seaver, 2019]. Also needed are studies assessing how and to what extent science communicators and journalists use AI in the creation or distribution of science-related content [cf. Wilczek & Haim, 2023], and whether they do so responsibly and ethically [Henke, 2023; Medvecky & Leach, 2019].

(3) The impact of AI on science communication ecosystems. AI influences societal communication in various ways, ranging from generative AI as an agent of communication, via algorithmic curation on tech and social media platforms, to the use of AI for the surveillance of communication and for content moderation. Studies on such uses of AI vis-à-vis science communication, and on the impact of AI technologies on the broader science communication ecosystem, are highly relevant as well. They could assess whether AI tools lend themselves equally well to different topics, formats, or audiences of science communication, or whether they result in, or reproduce, biases [Volk et al., 2024]. They could focus on AI’s influence on the diversity of, and balance of power between, different science communicators, journalists etc., and on job market implications in science communication practice. They could focus on potential AI-related changes in the content of public communication about science, e.g., how accurate AI-generated content is, how much misinformation and how many deepfakes it contains [Godulla et al., 2021], and whether it produces “wrongness at scale” [Ulken, 2023]. And they could focus on AI’s impact on users [Ho, 2023], e.g., on whether it (dis)informs audiences better than humans [Spitale et al., 2023] or whether it produces digital divides [Hargittai & Hsieh, 2013] in terms of access to the technology (“first-level divides”) or in terms of the skills and literacy necessary to make optimal use of it (“second-level divides”).

(4) The impact of AI on science and on theoretical and methodological approaches. AI will also fundamentally impact both the theoretical and conceptual foundations and the methodological repertoire of science communication research. On the one hand, this concerns theoretical and conceptual perspectives: after all, “artificial intelligence (AI) and people’s interactions with it […] do not fit neatly into paradigms of communication theory that have long focused on human-human communication” [Guzman & Lewis, 2020, p. 70]. Conceptual work and theory-building are therefore needed [cf. Greussing et al., 2022], drawing on fields like human-machine communication [Guzman, 2018], social-constructivist approaches like Science and Technology Studies or the Social Construction of Technology, or critical-interventionist approaches like Value-Sensitive Design [Schäfer & Wessler, 2020]. On the other hand, contributions examining the impact of AI on the methodology and methods of science communication research are needed: AI will afford researchers new opportunities and can function as a research tool [B. Chen et al., 2023; Schmidt, 2023], e.g., for conducting literature reviews, generating hypotheses, collecting and annotating data, coding, and summarizing and presenting findings [Stokel-Walker & Van Noorden, 2023].

5 Conclusion and future perspectives

We see expanding research on communication about and with AI, for example regarding AI portrayals in media coverage and on social media, audiences’ trust in AI, as well as AI’s role in shaping the professional landscape of science communicators and journalists. However, most research continues to focus on public representations of AI and its use, while the production and regulation of AI-generated communication receive less attention. Similarly, many existing studies emphasize trust, while aspects such as AI literacy, knowledge, and skills, as well as AI divides and inequalities, open up interesting avenues for future research. Furthermore, AI research in science communication still focuses predominantly on Western and industrialized contexts, highlighting the need for broader and more critical perspectives, including perspectives from the Global South and comparative research on AI’s impact across different science communication ecosystems. Finally, research largely seems to apply existing theoretical and methodological approaches rather than exploring how AI changes the scientific process and developing new frameworks and innovative methods.

Beyond extending research across countries and increasing sample sizes for more generalizable findings, a critical next step is to investigate the drivers and conditions that shape AI’s long-term impact on science communication and science. Future studies should continue to explore science communication in the age of AI along the four suggested focus areas — (1) communication about AI, (2) communication with AI, (3) the impact of AI on science communication ecosystems, and (4) the impact of AI on science and on theoretical and methodological approaches. Such efforts would benefit from interdisciplinary approaches, deepening our understanding of the interface between science communication and AI.

Acknowledgments

We thank Damiano Lombardi and Yoojin Kim for their help coding the articles for the literature review.

References

Andrejevic, M. (2019). Automated media. Routledge. https://doi.org/10.4324/9780429242595

Begenat, M., & Kero, S. (2023). Was die Bevölkerung über KI denkt [What the public thinks about AI]. wissenschaftskommunikation.de. https://www.wissenschaftskommunikation.de/was-die-bevoelkerung-ueber-ki-denkt-68281

Bingaman, J., Brewer, P. R., Paintsil, A., & Wilson, D. C. (2021). “Siri, show me scary images of AI”: effects of text-based frames and visuals on support for Artificial Intelligence. Science Communication, 43, 388–401. https://doi.org/10.1177/1075547021998069

Biyela, S., Dihal, K., Gero, K. I., Ippolito, D., Menczer, F., Schäfer, M. S., & Yokoyama, H. M. (2024). Generative AI and science communication in the physical sciences. Nature Reviews Physics, 6, 162–165. https://doi.org/10.1038/s42254-024-00691-7

Brause, S. R., Zeng, J., Schäfer, M. S., & Katzenbach, C. (2023). Media representations of Artificial Intelligence. In S. Lindgren (Ed.), Handbook of critical studies of Artificial Intelligence. Edward Elgar Publishing.

Buhmann, A., & Fieseler, C. (2021). Towards a deliberative framework for responsible innovation in artificial intelligence. Technology in Society, 64, 101475. https://doi.org/10.1016/j.techsoc.2020.101475

Chen, B., Wu, Z., & Zhao, R. (2023). From fiction to fact: the growing role of generative AI in business and finance. Journal of Chinese Economic and Business Studies, 21, 471–496. https://doi.org/10.1080/14765284.2023.2245279

Chen, K., Shao, A., Burapacheep, J., & Li, Y. (2024). Conversational AI and equity through assessing GPT-3’s communication with diverse social groups on contentious topics. Scientific Reports, 14, 1561. https://doi.org/10.1038/s41598-024-51969-w

Cheung, J. C., & Ho, S. S. (2024). Explainable AI and trust: how news media shapes public support for AI-powered autonomous passenger drones. Public Understanding of Science. https://doi.org/10.1177/09636625241291192

Choi, S., Lee, C.-J., Park, A., & Lee, J. A. (2024). How the public makes sense of Artificial Intelligence: the interplay between communication and discrete emotions. Science Communication. https://doi.org/10.1177/10755470241297664

Cools, H., & Diakopoulos, N. (2024). Uses of generative AI in the newsroom: mapping journalists’ perceptions of perils and possibilities. Journalism Practice, 1–19. https://doi.org/10.1080/17512786.2024.2394558

Corless, V. (2023). ChatGPT is making waves in the scientific literature. Advanced Science News. https://www.advancedsciencenews.com/where-and-how-should-chatgpt-be-used-in-the-scientific-literature/

Davies, S. R., & Horst, M. (2016). Science communication: culture, identity and citizenship. Palgrave Macmillan. https://doi.org/10.1057/978-1-137-50366-4

De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Frontiers in Public Health, 11. https://doi.org/10.3389/fpubh.2023.1166120

Dillon, S., & Schaffer-Goddard, J. (2023). What AI researchers read: the role of literature in artificial intelligence research. Interdisciplinary Science Reviews, 48, 15–42. https://doi.org/10.1080/03080188.2022.2079214

Dogruel, L., & Dickel, S. (2022). Die Kommunikativierung der Maschinen [The communicativization of machines]. Publizistik, 67, 475–486. https://doi.org/10.1007/s11616-022-00755-7

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Elsevier. (2025). Scopus content overview. https://www.elsevier.com/solutions/scopus

Esposito, E. (2017). Artificial communication? The production of contingency by algorithms. Zeitschrift für Soziologie, 46, 249–265. https://doi.org/10.1515/zfsoz-2017-1014

Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., & Pappas, G. (2008). Comparison of PubMed, Scopus, Web of Science and Google Scholar: strengths and weaknesses. The FASEB Journal, 22, 338–342. https://doi.org/10.1096/fj.07-9492lsf

Fletcher, R., & Nielsen, R. K. (2024). What does the public in six countries think of generative AI in news? Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-4zb8-cg87

Godulla, A., Hoffmann, C. P., & Seibert, D. (2021). Dealing with deepfakes — an interdisciplinary examination of the state of research and implications for communication studies. Studies in Communication and Media, 10, 72–96. https://doi.org/10.5771/2192-4007-2021-1-72

Golan, Y. (2023). Navigating the disruptive landscape of generative AI. Forbes. https://www.forbes.com/councils/forbesfinancecouncil/2023/06/08/navigating-the-disruptive-landscape-of-generative-ai-a-vc-perspective/

Gravel, J., D’Amours-Gravel, M., & Osmanlliu, E. (2023). Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clinic Proceedings: Digital Health, 1, 226–234. https://doi.org/10.1016/j.mcpdig.2023.05.004

Greussing, E., Taddicken, M., & Baram-Tsabari, A. (2022). Changing epistemic roles through communicative AI. ICA Science of Science Communication Preconference.

Greussing, E., Guenther, L., Baram-Tsabari, A., Dabran-Zivan, S., Jonas, E., Klein-Avraham, I., Taddicken, M., Agergaard, T. E., Beets, B., Brossard, D., Chakraborty, A., Fage-Butler, A., Huang, C.-J., Kankaria, S., Lo, Y.-Y., Nielsen, K. H., Riedlinger, M., & Song, H. (2025). The perception and use of generative AI for science-related information search: insights from a cross-national study. Public Understanding of Science, 1–17. https://doi.org/10.1177/09636625241308493

Guenther, L., & Joubert, M. (2017). Science communication as a field of research: identifying trends, challenges and gaps by analysing research papers. JCOM, 16, A02. https://doi.org/10.22323/2.16020202

Guzman, A. L. (2018). What is human-machine communication, anyway? In A. L. Guzman (Ed.), Human-machine communication: rethinking communication, technology and ourselves (pp. 1–28). Peter Lang.

Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: a human-machine communication research agenda. New Media & Society, 22, 70–86. https://doi.org/10.1177/1461444819858691

Guzman, A. L., & Lewis, S. C. (2024). What generative AI means for the media industries and why it matters to study the collective consequences for advertising, journalism and public relations. Emerging Media, 2, 347–355. https://doi.org/10.1177/27523543241289239

Hargittai, E., & Hsieh, Y. P. (2013). Digital inequality. In W. H. Dutton (Ed.), The Oxford handbook of internet studies (pp. 129–150). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199589074.013.0007

Hase, V., Mahl, D., Schäfer, M. S., & Keller, T. R. (2021). Climate change in news media across the globe: an automated analysis of issue attention and themes in climate change coverage in 10 countries (2006–2018). Global Environmental Change, 70, 102353. https://doi.org/10.1016/j.gloenvcha.2021.102353

Hegelbach, S. (2023). ChatGPT opened our eyes. UZH News. https://www.dizh.uzh.ch/en/2023/03/20/chatgpt-opened-our-eyes/

Helberger, N., van Drunen, M., Moeller, J., Vrijenhoek, S., & Eskens, S. (2022). Towards a normative perspective on journalistic AI: embracing the messy reality of normative ideals. Digital Journalism, 10, 1605–1626. https://doi.org/10.1080/21670811.2022.2152195

Henke, J. (2023). Hochschulkommunikation im Zeitalter der KI [University communication in the age of AI] [HoF-Arbeitsbericht 122]. Institut für Hochschulforschung (HoF). https://www.hof.uni-halle.de/web/dateien/pdf/ab_122.pdf

Hepp, A. (2020). Artificial companions, social bots and work bots: communicative robots as research objects of media and communication studies. Media, Culture & Society, 42, 1410–1426. https://doi.org/10.1177/0163443720916412

Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., Malaka, R., Pfadenhauer, M., Puschmann, C., & Schulz, W. (2022). Von der Mensch-Maschine-Interaktion zur kommunikativen KI: Automatisierung von Kommunikation als Gegenstand der Kommunikations- und Medienforschung [From human-machine interaction to communicative AI: automation of communication as the subject of communication and media research]. Publizistik, 67, 449–474. https://doi.org/10.1007/s11616-022-00758-4

Ho, S. S. (2023). Promise or reservations? Public perceptions of AI applications in Singapore. Invited keynote at the code vs. code conference.

Jungherr, A., & Schroeder, R. (2023). Artificial intelligence and the public arena. Communication Theory, 33, 164–173. https://doi.org/10.1093/ct/qtad006

Karpouzis, K., Pantazatos, D., Taouki, J., & Meli, K. (2024). Tailoring education with GenAI: a new horizon in lesson planning. https://doi.org/10.48550/arXiv.2403.12071

Katzenbach, C., & Pentzold, C. (2024). Automating communication in the digital society: editorial to the special issue. New Media & Society, 26, 4925–4937. https://doi.org/10.1177/14614448241265655

Klein-Avraham, I., Greussing, E., Taddicken, M., Dabran-Zivan, S., Jonas, E., & Baram-Tsabari, A. (2024). How to make sense of generative AI as a science communication researcher? A conceptual framework in the context of critical engagement with scientific information. JCOM, 23, A05. https://doi.org/10.22323/2.23060205

Lasswell, H. D. (1948). The structure and function of communication in society. In L. Bryson (Ed.), The communication of ideas (pp. 37–51). Harper & Brothers.

Lermann Henestrosa, A., Greving, H., & Kimmerle, J. (2023). Automated journalism: the effects of AI authorship and evaluative information on the perception of a science journalism article. Computers in Human Behavior, 138, 107445. https://doi.org/10.1016/j.chb.2022.107445

Lermann Henestrosa, A., & Kimmerle, J. (2024). Understanding and perception of automated text generation among the public: two surveys with representative samples in Germany. Behavioral Sciences, 14, 353. https://doi.org/10.3390/bs14050353

Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74, 570–581. https://doi.org/10.1002/asi.24750

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on Artificial Intelligence, August 31, 1955. AI Magazine, 27, 12. https://doi.org/10.1609/aimag.v27i4.1904

Medvecky, F., & Leach, J. (2019). An ethics of science communication. Palgrave Macmillan.

Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13, 856. https://doi.org/10.3390/educsci13090856

Nisbet, M. C., & Lewenstein, B. V. (2002). Biotechnology and the American media: the policy process and the elite press, 1970 to 1999. Science Communication, 23, 359–391. https://doi.org/10.1177/107554700202300401

Nishal, S., & Diakopoulos, N. (2023). Envisioning the applications and implications of generative AI for news media. CHI Workshop on Generative AI & HCI.

Perkins, M. (2023). Academic integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching and Learning Practice, 20. https://doi.org/10.53761/1.20.02.07

Rajpurkar, P., Chen, E., Banerjee, O., & Topol, E. J. (2022). AI in health and medicine. Nature Medicine, 28, 31–38. https://doi.org/10.1038/s41591-021-01614-0

Richter, V., Katzenbach, C., & Schäfer, M. S. (2023). Imaginaries of Artificial Intelligence. In S. Lindgren (Ed.), Handbook of critical studies of Artificial Intelligence. Edward Elgar Publishing.

Runge, K. K., Yeo, S. K., Cacciatore, M., Scheufele, D. A., Brossard, D., Xenos, M., Anderson, A., Choi, D.-h., Kim, J., Li, N., Liang, X., Stubbings, M., & Su, L. Y.-F. (2013). Tweeting nano: how public discourses about nanotechnology develop in social media environments. Journal of Nanoparticle Research, 15, 1–11. https://doi.org/10.1007/s11051-012-1381-8

Schäfer, M. S. (2012). Taking stock: a meta-analysis of studies on the media’s coverage of science. Public Understanding of Science, 21, 650–663. https://doi.org/10.1177/0963662510387559

Schäfer, M. S. (2023). The notorious GPT: science communication in the age of artificial intelligence. JCOM, 22, Y02. https://doi.org/10.22323/2.22020402

Schäfer, M. S., Kremer, B., Mede, N. G., & Fischer, L. (2024). Trust in science, trust in ChatGPT? How Germans think about generative AI as a source in science communication. JCOM, 23, A04. https://doi.org/10.22323/2.23090204

Schäfer, M. S., Kristiansen, S., & Bonfadelli, H. (Eds.). (2015). Wissenschaftskommunikation im Wandel [Science communication in transition]. Herbert von Halem Verlag. https://ebookcentral.proquest.com/lib/kxp/detail.action?docID=2056185

Schäfer, M. S., & Wessler, H. (2020). Öffentliche Kommunikation in Zeiten künstlicher Intelligenz [Public communication in the age of artificial intelligence]. Publizistik, 65, 307–331. https://doi.org/10.1007/s11616-020-00592-6

Schmidt, E. (2023). This is how AI will transform the way science gets done. MIT Technology Review. https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science

Seaver, N. (2019). Knowing algorithms. In J. Vertesi & D. Ribes (Eds.), digitalSTS: a field guide for science & technology studies (pp. 412–422). Princeton University Press. https://doi.org/10.1515/9780691190600-028

Skjuve, M., Følstad, A., & Brandtzaeg, P. B. (2023). The user experience of ChatGPT: findings from a questionnaire study of early users. In M. Lee, C. Munteanu, M. Porcheron, J. Trippas & S. T. Völkel (Eds.), Proceedings of the 5th International Conference on Conversational User Interfaces (pp. 1–10). ACM. https://doi.org/10.1145/3571884.3597144

Spence, P. R. (2019). Searching for questions, original thoughts, or advancing theory: human-machine communication. Computers in Human Behavior, 90, 285–287. https://doi.org/10.1016/j.chb.2018.09.014

Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3 (dis)informs us better than humans. Science Advances, 9, 1–9. https://doi.org/10.1126/sciadv.adh1850

Starke, C., & Lünich, M. (2020). Artificial intelligence for political decision-making in the European Union: effects on citizens’ perceptions of input, throughput and output legitimacy. Data & Policy, 2, E4. https://doi.org/10.1017/dap.2020.19

Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature, 614, 214–216. https://doi.org/10.1038/d41586-023-00340-6

Tatalovic, M. (2018). AI writing bots are about to revolutionise science journalism: we must shape how this is done. JCOM, 17, E. https://doi.org/10.22323/2.17010501

Teubner, T., Flath, C. M., Weinhardt, C., van der Aalst, W., & Hinz, O. (2023). Welcome to the era of ChatGPT et al.: the prospects of large language models. Business & Information Systems Engineering, 65, 95–101. https://doi.org/10.1007/s12599-023-00795-x

Ulken, E. (2023). Generative AI brings wrongness at scale. Nieman Lab. https://www.niemanlab.org/2022/12/generative-ai-brings-wrongness-at-scale/

van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature, 614, 224–226. https://doi.org/10.1038/d41586-023-00288-7

Volk, S. C., Schäfer, M. S., Lombardi, D., Mahl, D., & Yan, X. (2024). How generative artificial intelligence portrays science: interviewing ChatGPT from the perspective of different audience segments. Public Understanding of Science, 34, 132–153. https://doi.org/10.1177/09636625241268910

Vrabič Dežman, D. (2024). Promising the future, encoding the past: AI hype and public media imagery. AI and Ethics, 4, 743–756. https://doi.org/10.1007/s43681-024-00474-x

Wilczek, B., & Haim, M. (2023). Wie kann Künstliche Intelligenz die Effizienz von Medienorganisationen steigern? Eine Systematisierung entlang der Nachrichtenwertkette mit besonderer Berücksichtigung lokaler und regionaler Medien [How can artificial intelligence improve the efficiency of media organizations? A systematization along the news value chain with particular emphasis on local and regional media]. MedienWirtschaft, 4, 44–50.

Wissenschaft im Dialog. (2023). Wissenschaftsbarometer 2023 [Science barometer 2023]. https://wissenschaft-im-dialog.de/projekte/wissenschaftsbarometer/#erhebung-2023

Zeng, J., Chan, C.-h., & Schäfer, M. S. (2022). Contested Chinese dreams of AI? Public discourse about artificial intelligence on WeChat and People’s Daily Online. Information, Communication & Society, 25, 319–340. https://doi.org/10.1080/1369118x.2020.1776372

About the authors

Sabrina H. Kessler is a senior research and teaching associate at the Department of Communication and Media Research (IKMZ) of the University of Zurich (Switzerland). She is a speaker of the Swiss Young Academy and chair of the division “Media Reception and Effects” of the German Communication Association. Her research interests include science and health communication, as well as online search and perception behavior, particularly in the context of generative artificial intelligence.

E-mail: s.kessler@ikmz.uzh.ch Bluesky: @shkessler

Daniela Mahl is a postdoctoral researcher at the Department of Communication and Media Research (IKMZ) of the University of Zurich (Switzerland). Her research focuses on responsible artificial intelligence, sociocultural implications of digital platforms, with a focus on misinformation and conspiracy theories, and science communication.

E-mail: d.mahl@ikmz.uzh.ch

Mike S. Schäfer is a full professor of science communication, head of the Department of Communication and Media Research (IKMZ), and director of the Center for Higher Education and Science Studies (CHESS) at the University of Zurich (Switzerland).

E-mail: m.schaefer@ikmz.uzh.ch Bluesky: @mss7676

Sophia Charlotte Volk is a Senior Research and Teaching Associate at the Department of Communication and Media Research (IKMZ) at the University of Zurich (Switzerland). Previously, she was a Research Associate at the Chair of Strategic Communication at Leipzig University (Germany). Her research interests include science and university communication, evaluation and impact measurement, strategic communication, digital media environments and technologies like artificial intelligence, and international comparative research.

E-mail: s.volk@ikmz.uzh.ch Bluesky: @sophiavolk

Supplementary material

Available at https://doi.org/10.22323/2.24020401