This paper examines how artificial intelligence (AI) imaginaries are negotiated by key stakeholders in the United States, China, and Germany, focusing on how public perceptions and discourses shape AI as a sociotechnical phenomenon. Drawing on the concept of sociotechnical imaginaries in public communication, the study explores how stakeholders from industry, government, academia, media, and civil society actively co-construct and contest visions of the future of AI. The comparative analysis challenges the notion that national perceptions are monolithic, highlighting the complex and heterogeneous discursive processes surrounding AI. The paper draws on stakeholder interviews to analyse how different actors position themselves within these imaginaries. The analysis identifies overarching and sociopolitically diverse AI imaginaries, as well as sectoral and stakeholder co-dependencies within and across the case study countries. It thereby offers insights into the socio-political dynamics that influence AI’s evolving role in society, contributing to debates on science communication and the social construction of technology.
Realizing the ascribed potential of generative AI for health information seeking depends on recipients’ perceptions of quality. In an online survey (N = 294), we investigated how German individuals evaluate AI-generated information on the influenza vaccination compared to expert-generated content. A follow-up experiment (N = 1,029) examined the impact of authorship disclosure on perceived argument quality and the underlying mechanisms. The findings indicated that expert arguments were rated higher than AI-generated arguments, particularly when authorship was revealed. Trust in science and in the Standing Committee on Vaccination accentuated these differences, while trust in AI and individual innovativeness did not moderate the effect.
AI-generated avatars in science communication offer potential for conveying complex information. However, highly realistic avatars may evoke discomfort and diminish trust, a key factor in science communication. Drawing on existing research, we conducted an experiment (n = 491) examining how avatar realism and gender affect trustworthiness (expertise, integrity, and benevolence). Our findings show that higher realism enhances trustworthiness, contradicting the Uncanny Valley effect. Gender effects were dimension-specific, with male avatars rated higher in expertise. Familiarity with AI and institutional trust also shaped trust perceptions. These insights inform the design of AI avatars that support effective science communication while maintaining public trust.