Realizing the ascribed potential of generative AI for health information seeking depends on recipients’ perceptions of quality. In an online survey (N = 294), we investigated how German individuals evaluate AI-generated information compared to expert-generated content about influenza vaccination. A follow-up experiment (N = 1,029) examined the impact of authorship disclosure on perceived argument quality and its underlying mechanisms. The findings indicated that expert arguments were rated higher than AI-generated arguments, particularly when authorship was disclosed. Trust in science and in the Standing Committee on Vaccination accentuated these differences, while trust in AI and innovativeness did not moderate the effect.
AI-generated avatars in science communication offer potential for conveying complex information. However, highly realistic avatars may evoke discomfort and diminish trust, a key factor in science communication. Drawing on existing research, we conducted an experiment (n = 491) examining how avatar realism and gender impact trustworthiness (expertise, integrity, and benevolence). Our findings show that higher realism enhances trustworthiness, contradicting the Uncanny Valley effect. Gender effects were dimension-specific, with male avatars rated higher in expertise. Familiarity with AI and institutional trust also shaped trust perceptions. These insights inform the design of AI avatars for effective science communication while maintaining public trust.
Most public audiences in Germany receive scientific information via a variety of (digital) media; in these contexts, media act as intermediaries of trust in science by providing information that presents reasons for public audiences to place their trust in science. To describe this process, the study introduces the term “trust cues”. To identify such content-related trust cues, an explorative qualitative content analysis was applied to German journalistic, populist, social, and other (non-journalistic) online media (n = 158). In total, n = 1,329 trust cues were coded. The findings emphasize the diversity of mediated trust, with trust cues being connected to dimensions of trust in science (established: expertise, integrity, benevolence; recently introduced: transparency, dialogue). Through this analysis, the study aims for a better understanding of mediated trust in science. This understanding is crucial since public trust in science is important for individual and collective informed decision-making and crisis management.