Publications including this keyword are listed below.
14 publications found
Artificial Intelligence (AI) is fundamentally transforming science communication. This editorial for the JCOM Special Issue “Science Communication in the Age of AI” explores the implications of AI, especially generative AI, for science communication, along with its promises and challenges. The articles in this Special Issue can be categorized into four key areas: (1) communication about AI, (2) communication with AI, (3) the impact of AI on science communication ecosystems, and (4) AI’s influence on science communication theory and methodology. This collection of articles advances empirical and theoretical insight into AI’s evolving role in science communication, emphasizing interdisciplinary and comparative perspectives.
This study examines the adoption of generative AI (genAI) tools in German university communication departments using 2023 and 2024 survey data. Adoption has significantly increased in 2024, particularly for text generation, with private universities leading the way. Efficiency gains are evident, but issues with factual accuracy and data privacy persist. The findings highlight a transition from cautious experimentation to mainstream integration of genAI in communication strategies, though ethical concerns remain. Communication departments face the challenge of balancing genAI’s efficiency benefits with the need to uphold quality, individuality, and privacy.
The advent of generative Artificial Intelligence (genAI) is expected to have a significant impact on journalism. In this study, we ask whether this development could help mitigate the crisis in science journalism. We conducted semi-structured interviews with 30 German science journalists, asking them about the potential impact genAI may have on the news-making process (i.e., selection, production, and distribution). The results suggest that interviewees anticipate many future benefits from genAI; some believe the technology is unlikely to worsen the crisis in science journalism, while others express concerns about potential negative consequences (e.g., job loss).
This study explores the role of ChatGPT in science-related information retrieval, building on research conducted in 2023. Drawing on online survey data from seven countries—Australia, Denmark, Germany, Israel, South Korea, Taiwan, and the United States—and two data collection points (2023 and 2024), the study highlights ChatGPT’s growing role as an information intermediary, reflecting the rapid diffusion of generative AI (GenAI) in general. While GenAI adoption is a global phenomenon, distinct regional variations emerge in the use of ChatGPT for science-related searches. Additionally, the study finds that a specific subset of the population is more likely to use ChatGPT for science-related information retrieval. Across all countries surveyed, science-information seekers report higher levels of trust in GenAI compared to non-users. They also exhibit a stronger understanding of how (Gen)AI works and, with some notable exceptions, show greater awareness of its epistemic limitations.
Realizing the ascribed potential of generative AI for health information seeking depends on recipients’ perceptions of quality. In an online survey (N = 294), we investigated how German individuals evaluate AI-generated information on the influenza vaccination compared to expert-generated content. A follow-up experiment (N = 1,029) examined how authorship disclosure affects perceived argument quality and the mechanisms underlying this effect. The findings indicated that expert arguments were rated higher than AI-generated arguments, particularly when authorship was revealed. Trust in science and in the Standing Committee on Vaccination accentuated these differences, whereas trust in AI and innovativeness did not moderate the effect.
AI-generated avatars in science communication offer potential for conveying complex information. However, highly realistic avatars may evoke discomfort and diminish trust, a key factor in science communication. Drawing on existing research, we conducted an experiment (n = 491) examining how avatar realism and gender impact trustworthiness (expertise, integrity, and benevolence). Our findings show that higher realism enhances trustworthiness, contradicting the Uncanny Valley effect. Gender effects were dimension-specific, with male avatars rated higher in expertise. Familiarity with AI and institutional trust also shaped trust perceptions. These insights inform the design of AI avatars for effective science communication while maintaining public trust.
We assessed ChatGPT’s ability to identify and categorize actors in German news media articles into societal groups. Through three experiments, we evaluated various models and prompting strategies. In experiment 1, we found that providing ChatGPT with codebooks designed for manual content analysis was insufficient. However, combining Named Entity Recognition with an optimized prompt for actor classification (the “NERC pipeline”) yielded acceptable results. In experiment 2, we compared the performance of gpt-3.5-turbo, gpt-4o, and gpt-4-turbo, with the latter performing best, though challenges remained in classifying nuanced actor categories. In experiment 3, we demonstrated that repeating the classification with the same model produced highly reliable results, even across different release versions.
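The two-step structure described in this abstract can be illustrated with a minimal sketch: first extract actor mentions, then classify each into a societal group, and optionally repeat the classification and take a majority vote to gauge reliability. This is not the study’s actual pipeline: the actor list, keyword rules, and group labels below are invented placeholders, and `extract_actors` / `classify_actor` stand in for NER and the prompt-based calls to a chat model (e.g., gpt-4-turbo).

```python
from collections import Counter

# Hypothetical societal groups; the study's own codebook categories differ.
SOCIETAL_GROUPS = ["science", "politics", "media", "civil society", "other"]

def extract_actors(text: str) -> list[str]:
    # Stand-in for Named Entity Recognition; a real pipeline would use an
    # NER model or an LLM prompt to pull person/organization mentions.
    known = ["Robert Koch Institute", "Angela Merkel", "Der Spiegel"]
    return [actor for actor in known if actor in text]

def classify_actor(actor: str) -> str:
    # Stand-in for the actor-classification prompt sent to the chat model;
    # these keyword rules are illustrative only.
    rules = {"Institute": "science", "Merkel": "politics", "Spiegel": "media"}
    for keyword, group in rules.items():
        if keyword in actor:
            return group
    return "other"

def classify_with_repeats(actor: str, runs: int = 3) -> str:
    # Mirrors experiment 3: repeat the classification and take the majority
    # label as a simple reliability check on the model's output.
    votes = Counter(classify_actor(actor) for _ in range(runs))
    return votes.most_common(1)[0][0]

article = "Angela Merkel visited the Robert Koch Institute, Der Spiegel reported."
labels = {actor: classify_with_repeats(actor) for actor in extract_actors(article)}
print(labels)
```

In a real implementation, each stubbed function would wrap an API call, and the majority vote across repeated runs (or model versions) would serve as the reliability measure the third experiment reports.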
Artificial Intelligence (AI) is profoundly reshaping the field of science communication research. We conducted a literature review of 35 articles published between 2002 and 2024, which reveals that research on AI in science communication is still in its infancy but growing, predominantly concentrated in Western contexts, and methodologically inclined toward quantitative approaches. The field largely focuses on communication about AI and public perceptions of AI rather than analyzing actual engagement with generative AI or its systemic impact on science communication ecosystems. To address these gaps, we propose a research agenda centered on four key areas: (1) communication about AI, (2) communication with AI, (3) the impact of AI on science communication ecosystems, and (4) AI’s influence on science communication theory and methodology.