Browse all Publications

Section: Article


547 publications found

Apr 14, 2025 Article
Exploring temporal and cross-national patterns: The use of generative AI in science-related information retrieval across seven countries

by Esther Greussing, Lars Guenther, Ayelet Baram-Tsabari, Shakked Dabran-Zivan, Evelyn Jonas, Inbal Klein-Avraham, Monika Taddicken, Torben Agergaard, Becca Beets, Dominique Brossard, Anwesha Chakraborty, Antoinette Fage-Butler, Chun-Ju Huang, Siddharth Kankaria, Yin-Yueh Lo, Lindsey Middleton, Kristian H. Nielsen, Michelle Riedlinger and Hyunjin Song

This study explores the role of ChatGPT in science-related information retrieval, building on research conducted in 2023. Drawing on online survey data from seven countries—Australia, Denmark, Germany, Israel, South Korea, Taiwan, and the United States—and two data collection points (2023 and 2024), the study highlights ChatGPT’s growing role as an information intermediary, reflecting the rapid diffusion of generative AI (GenAI) in general. While GenAI adoption is a global phenomenon, distinct regional variations emerge in the use of ChatGPT for science-related searches. Additionally, the study finds that a specific subset of the population is more likely to use ChatGPT for science-related information retrieval. Across all countries surveyed, science-information seekers report higher levels of trust in GenAI compared to non-users. They also exhibit a stronger understanding of how (Gen)AI works and, with some notable exceptions, show greater awareness of its epistemic limitations.

Volume 24 • Issue 02 • 2025 • Science Communication in the Age of Artificial Intelligence (Science Communication & AI)

Apr 14, 2025 Article
“ChatGPT, is the influenza vaccination useful?” Comparing perceived argument strength and correctness of pro-vaccination-arguments from AI and medical experts

by Selina A. Beckmann, Elena Link and Marko Bachl

Realizing the ascribed potential of generative AI for health information seeking depends on recipients’ perceptions of quality. In an online survey (N = 294), we aimed to investigate how German individuals evaluate AI-generated information compared to expert-generated content on the influenza vaccination. A follow-up experiment (N = 1,029) examined the impact of authorship disclosure on perceived argument quality and underlying mechanisms. The findings indicated that expert arguments were rated higher than AI-generated arguments, particularly when authorship was revealed. Trust in science and the Standing Committee on Vaccination accentuated these differences, while trust in AI and innovativeness did not moderate this effect.

Volume 24 • Issue 02 • 2025 • Science Communication in the Age of Artificial Intelligence (Science Communication & AI)

Apr 14, 2025 Article
Balancing realism and trust: AI avatars in science communication

by Jasmin Baake, Josephine Schmitt and Julia Metag

AI-generated avatars in science communication offer potential for conveying complex information. However, highly realistic avatars may evoke discomfort and diminish trust, a key factor in science communication. Drawing on existing research, we conducted an experiment (n = 491) examining how avatar realism and gender impact trustworthiness (expertise, integrity, and benevolence). Our findings show that higher realism enhances trustworthiness, contradicting the Uncanny Valley effect. Gender effects were dimension-specific, with male avatars rated higher in expertise. Familiarity with AI and institutional trust also shaped trust perceptions. These insights inform the design of AI avatars for effective science communication while maintaining public trust.

Volume 24 • Issue 02 • 2025 • Science Communication in the Age of Artificial Intelligence (Science Communication & AI)

Apr 14, 2025 Article
Behind the Screens: How Algorithmic Imaginaries Shape Science Content on Social Media

by Clarissa Elisa Walter and Sascha Friesike

Based on an ethnography of the development and production of science YouTube videos – a collaboration between a German public broadcaster and social science scholars – we identify three intermediary steps through which recommendation algorithms shape science content on social media. We argue that algorithms induce changes to science content through the power they exert over the content's visibility on social media platforms. Change is driven by how practitioners interpret algorithms, infer content strategies to enhance visibility, and adjust content creation practices accordingly. By unpacking these intermediate steps, we reveal the nuanced mechanisms by which algorithms indirectly shape science content.

Volume 24 • Issue 02 • 2025 • Science Communication in the Age of Artificial Intelligence (Science Communication & AI)

Apr 14, 2025 Article
ChatGPT’s Potential for Quantitative Content Analysis: Categorizing Actors in German News Articles

by Clarissa Hohenwalde, Melanie Leidecker-Sandmann, Nikolai Promies and Markus Lehmkuhl

We assessed ChatGPT's ability to identify actors in German news media articles and categorize them into societal groups. Through three experiments, we evaluated various models and prompting strategies. In experiment 1, we found that providing ChatGPT with codebooks designed for manual content analysis was insufficient. However, combining Named Entity Recognition with an optimized prompt for actor classification (the NERC pipeline) yielded acceptable results. In experiment 2, we compared the performance of gpt-3.5-turbo, gpt-4o, and gpt-4-turbo; gpt-4-turbo performed best, though challenges remained in classifying nuanced actor categories. In experiment 3, we demonstrated that repeating the classification with the same model produced highly reliable results, even across different release versions.

Volume 24 • Issue 02 • 2025 • Science Communication in the Age of Artificial Intelligence (Science Communication & AI)
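The NERC pipeline described in the abstract above pairs off-the-shelf Named Entity Recognition with a prompted LLM classification step. The following is a minimal sketch of that two-step idea, assuming spaCy's German model for the NER step and the OpenAI chat API for classification; the category list, prompt wording, and model choice are illustrative placeholders, not the authors' actual codebook or setup.

# Sketch of an NER-plus-LLM actor-classification pipeline (illustrative assumptions only).
import spacy
from openai import OpenAI

# German NER model (requires: python -m spacy download de_core_news_sm)
nlp = spacy.load("de_core_news_sm")
client = OpenAI()

# Hypothetical societal groups; the study's own codebook will differ.
CATEGORIES = ["science", "politics", "business", "civil society", "media", "other"]

def extract_actors(article_text: str) -> list[str]:
    """Step 1: Named Entity Recognition - collect person and organization mentions."""
    doc = nlp(article_text)
    return sorted({ent.text for ent in doc.ents if ent.label_ in ("PER", "ORG")})

def classify_actor(actor: str, article_text: str) -> str:
    """Step 2: classify one extracted actor into a societal group via an LLM prompt."""
    prompt = (
        f"Klassifiziere den Akteur '{actor}' aus dem folgenden Artikel in genau eine "
        f"dieser Gruppen: {', '.join(CATEGORIES)}. Antworte nur mit der Gruppe.\n\n"
        f"Artikel:\n{article_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

article = "Bundeskanzler Olaf Scholz traf Forschende der Max-Planck-Gesellschaft ..."
for actor in extract_actors(article):
    print(actor, "->", classify_actor(actor, article))

Running the same classification repeatedly with temperature 0, as in the paper's third experiment, is what would be checked for reliability across model versions.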

Mar 24, 2025 Article
Identifying trust cues: how trust in science is mediated in content about science

by Justin T. Schröder, Janise Brück and Lars Guenther

Most public audiences in Germany receive scientific information via a variety of (digital) media; in these contexts, media act as intermediaries of trust in science by providing information that presents reasons for public audiences to place their trust in science. To describe this process, the study introduces the term “trust cues”. To identify such content-related trust cues, an explorative qualitative content analysis was applied to German journalistic, populist, social, and other (non-journalistic) online media (n = 158). In total, n = 1,329 trust cues were coded. The findings emphasize the diversity of mediated trust, with trust cues connected to dimensions of trust in science (established: expertise, integrity, benevolence; recently introduced: transparency, dialogue). Through this analysis, the study aims for a better understanding of mediated trust in science, which is crucial since public trust in science is important for individual and collective informed decision-making and crisis management.

Volume 24 • Issue 01 • 2025

Mar 17, 2025 Article
Citizens' perspectives on science communication

by Ionica Smeets, Charlotte B. C. M. Egger, Sicco de Knecht, Anne M. Land-Zandstra, Aletta Lucia Meinsma, Ward Peeters, Sanne Romp, Julie Schoorl, Winnifred Wijnker and Alex Verkade

The evolving landscape of science communication highlights a shift from traditional dissemination to participatory engagement. This study explores Dutch citizens' perspectives on science communication, focusing on science capital, public engagement, and communication goals. Using a mixed-methods approach, it combines survey data (n = 376) with focus group (n = 66) insights. Findings show increasing public interest in participating in science, though barriers like knowledge gaps persist. Trust-building, engaging adolescents, and integrating science into society were identified as key goals. These insights support the development of the Netherlands' National Centre of Expertise on Science and Society and provide guidance for inclusive, effective science communication practices.

Volume 24 • Issue 01 • 2025

Mar 10, 2025 Article
Wit meets wisdom: the relationship between satire and anthropomorphic humor on scientists' likability and legitimacy

by Alexandra L. Frank, Michael A. Cacciatore, Sara K. Yeo and Leona Yi-Fan Su

We conducted an experiment examining public response to scientists' use of different types of humor (satire, anthropomorphism, and a combination of the two) to communicate about AI on Twitter/X. We found that humorous content was perceived as funnier, measured as increased mirth. Specifically, combining anthropomorphism and satire elicited the highest levels of mirth. Further, reported mirth was positively associated with the perceived likability of the scientist who posted the content. Our findings indicate that mirth mediated the effects of the humor types on publics' perceptions that the scientist on social media was communicating information in an appropriate and legitimate way. Overall, this suggests that scientists can elicit mirth by combining satire and anthropomorphic humor, which can enhance publics' perceptions of scientists. Importantly, publics' responses to harsh satire were not examined, and caution should be exercised when using satire due to potential backfire effects.

Volume 24 • Issue 01 • 2025

Feb 24, 2025 Article
"It's mostly a one-way street, to be honest": the subjective relevance of public engagement in the science communication of professional university communicators

by Kaija Biermann, Lennart Banse and Monika Taddicken

This study explores the subjective relevance and challenges of public engagement with science (PES) in science communication among professional university communicators, based on 29 qualitative interviews in one German federal state. Despite recognizing its value, interviewees reveal significant uncertainties in their understanding, objectives, and implementation of PES. They cite barriers such as reliance on scientists and concerns about control. Surprisingly, social media is rarely considered for PES, with online engagement seen as difficult. This research highlights the complexities and challenges of PES in practice, emphasizing opportunities for optimized digital science communication strategies and clearer role structures between professionals and researchers to enhance PES.

Volume 24 • Issue 01 • 2025

Feb 17, 2025 Article
Exploring the dynamics of interaction about generative artificial intelligence between experts and the public on social media

by Noriko Hara, Eugene Kim, Shohana Akter and Kunihiro Miyazaki

Generative Artificial Intelligence (GenAI) has attracted great public interest; this research therefore investigates discussions between experts and members of the public about this new technology on social media. Using computational and manual analysis of X (formerly Twitter) data, we investigated discussion topics, the roles discussants — both experts and members of the public — play, and the differences between experts' posts and the public's replies. Moreover, we examined the dynamics between the discussants' roles and social media engagement measures. We found that the public is not only actively contributing to the discussion of GenAI on X, but also co-producing knowledge alongside experts in this sphere.

Volume 24 • Issue 01 • 2025