Publications including this keyword are listed below.
137 publications found
This paper examines how artificial intelligence (AI) imaginaries are negotiated by key stakeholders in the United States, China, and Germany, focusing on how public perceptions and discourses shape AI as a sociotechnical phenomenon. Drawing on the concept of sociotechnical imaginaries in public communication, the study explores how stakeholders from industry, government, academia, media and civil society actively co-construct and contest visions of the future of AI. Based on stakeholder interviews, the comparative analysis challenges the notion that national perceptions are monolithic, highlighting the complex and heterogeneous discursive processes surrounding AI and how different actors position themselves within these imaginaries. The analysis identifies overarching and sociopolitically diverse AI imaginaries as well as sectoral and stakeholder co-dependencies within and across the case study countries. The paper thereby offers insights into the socio-political dynamics that shape AI's evolving role in society and contributes to debates on science communication and the social construction of technology.
This study explores the subjective relevance and challenges of public engagement with science (PES) among professional university communicators, based on 29 qualitative interviews in one German federal state. Despite recognizing its value, interviewees reveal significant uncertainties about the understanding, objectives, and implementation of PES. They cite barriers such as reliance on scientists and concerns about control. Surprisingly, social media is rarely considered for PES, with online engagement seen as difficult. This research highlights the complexities and challenges of PES in practice, pointing to opportunities for optimized digital science communication strategies and clearer role structures between communication professionals and researchers.
Generative AI (GenAI) tools such as ChatGPT are widely expected to fundamentally affect many realms of life. This includes science communication, where GenAI tools are becoming important sources of science-related content for many people. This raises the question of whether people trust GenAI as a source in this field, a question that has not yet been answered sufficiently. Adapting a model developed by Roberts et al. [2013] and utilizing survey data from the German Science Barometer 2023, we find that Germans are rather sceptical about GenAI in science communication and do not strongly trust it. Structural equation modelling shows that respondents' trust in GenAI as a source in science communication is driven strongly by their general trust in science, which in turn is largely driven by their knowledge about science and the perception that science improves quality of life.
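The path structure summarised in this abstract (trust in GenAI driven by general trust in science, which in turn is driven by science knowledge and perceived benefits of science) can be illustrated with a small structural-equation sketch. The snippet below is a minimal, hypothetical illustration using the semopy Python library and simulated data; the variable names, coefficients, and model form are assumptions for illustration only, not the study's actual model, data, or results.

```python
# Hypothetical illustration of the path structure described in the abstract:
#   trust_genai   <- trust_science
#   trust_science <- science_knowledge + perceived_benefit
# Simulated data and arbitrary coefficients; NOT the study's model or data.
# Assumes the semopy package (pip install semopy) is available.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(42)
n = 500
science_knowledge = rng.normal(size=n)
perceived_benefit = rng.normal(size=n)
trust_science = 0.4 * science_knowledge + 0.5 * perceived_benefit + rng.normal(scale=0.7, size=n)
trust_genai = 0.6 * trust_science + rng.normal(scale=0.8, size=n)

data = pd.DataFrame({
    "science_knowledge": science_knowledge,
    "perceived_benefit": perceived_benefit,
    "trust_science": trust_science,
    "trust_genai": trust_genai,
})

# lavaan-style model syntax: two regression paths, no latent variables for simplicity
model_desc = """
trust_science ~ science_knowledge + perceived_benefit
trust_genai ~ trust_science
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # prints estimated path coefficients for both equations
```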
Volume 23 • Issue 07 • 2024 • Special Issue: Communicating Discovery Science