Artificial Intelligence (AI), and particularly generative AI that produces novel outputs based on user prompts, fundamentally impacts science communication. It can assist practitioners in generating content, identifying new ideas and trends, translating and preparing scientific results and publications for different channels and audiences, and enabling interactive exchanges with various user groups. It also comes with pronounced challenges, ranging from errors and “hallucinations” in AI outputs to new digital divides and ethical and legal concerns. This Special Issue brings together cutting-edge research assessing the role of AI in science communication, discussing communication about AI, communication with AI, the impact of AI technologies on the larger science communication ecosystem, and potential theoretical and methodological implications.
Artificial Intelligence (AI) is fundamentally transforming science communication. This editorial for the JCOM Special Issue “Science Communication in the Age of AI” explores the implications of AI, especially generative AI, for science communication, including its promises and challenges. The articles in this Special Issue can be categorized into four key areas: (1) communication about AI, (2) communication with AI, (3) the impact of AI on science communication ecosystems, and (4) AI’s influence on science and on theoretical and methodological approaches. This collection of articles advances empirical and theoretical insight into AI’s evolving role in science communication, emphasizing interdisciplinary and comparative perspectives.
This paper examines how artificial intelligence (AI) imaginaries are negotiated by key stakeholders in the United States, China, and Germany, focusing on how public perceptions and discourses shape AI as a sociotechnical phenomenon. Drawing on the concept of sociotechnical imaginaries in public communication, the study explores how stakeholders from industry, government, academia, media and civil society actively co-construct and contest visions of the future of AI. The comparative analysis challenges the notion that national perceptions are monolithic, highlighting the complex and heterogeneous discursive processes surrounding AI. The paper utilises stakeholder interviews to analyse how different actors position themselves within these imaginaries. The analysis highlights overarching and sociopolitically diverse AI imaginaries as well as sectoral and stakeholder co-dependencies within and across the case study countries. It hence offers insights into the socio-political dynamics that influence AI’s evolving role in society, thus contributing to debates on science communication and the social construction of technology.
Realizing the ascribed potential of generative AI for health information seeking depends on recipients’ perceptions of quality. In an online survey (N = 294), we investigated how German individuals evaluate AI-generated information on the influenza vaccination compared to expert-generated content. A follow-up experiment (N = 1,029) examined the impact of authorship disclosure on perceived argument quality and the underlying mechanisms. The findings indicated that expert arguments were rated higher than AI-generated arguments, particularly when authorship was revealed. Trust in science and in the Standing Committee on Vaccination accentuated these differences, while trust in AI and innovativeness did not moderate this effect.
This paper studies how artificial intelligence was placed on the agenda of the press and social media in France. By simultaneously analysing the framing of AI and the key actors who dominated the discourse on this technology in the national press and on the X and Facebook platforms, the study highlights, on the one hand, the influence of digital companies and government narratives and, on the other, the presence of alternative stakeholder perspectives that diverge from dominant discourses and contribute to political polarisation on AI-related issues such as facial recognition. Our study sheds light on how the framing of AI can reveal dominant and alternative narratives and visions and may contribute to the consolidation of socio-technical imaginaries in the French public sphere.
AI-generated avatars in science communication offer potential for conveying complex information. However, highly realistic avatars may evoke discomfort and diminish trust, a key factor in science communication. Drawing on existing research, we conducted an experiment (n = 491) examining how avatar realism and gender affect trustworthiness (expertise, integrity, and benevolence). Our findings show that higher realism enhances trustworthiness, contrary to the predictions of the Uncanny Valley effect. Gender effects were dimension-specific, with male avatars rated higher in expertise. Familiarity with AI and institutional trust also shaped trust perceptions. These insights inform the design of AI avatars for effective science communication while maintaining public trust.