Publications included in this section.
This study explores the role of ChatGPT in science-related information retrieval, building on research conducted in 2023. Drawing on online survey data from seven countries—Australia, Denmark, Germany, Israel, South Korea, Taiwan, and the United States—and two data collection points (2023 and 2024), the study highlights ChatGPT’s growing role as an information intermediary, reflecting the rapid diffusion of generative AI (GenAI) in general. While GenAI adoption is a global phenomenon, distinct regional variations emerge in the use of ChatGPT for science-related searches. Additionally, the study finds that a specific subset of the population is more likely to use ChatGPT for science-related information retrieval. Across all countries surveyed, science-information seekers report higher levels of trust in GenAI compared to non-users. They also exhibit a stronger understanding of how (Gen)AI works and, with some notable exceptions, show greater awareness of its epistemic limitations.
Realizing the ascribed potential of generative AI for health information seeking depends on recipients’ perceptions of quality. In an online survey (N = 294), we aimed to investigate how German individuals evaluate AI-generated information compared to expert-generated content on the influenza vaccination. A follow-up experiment (N = 1,029) examined the impact of authorship disclosure on perceived argument quality and underlying mechanisms. The findings indicated that expert arguments were rated higher than AI-generated arguments, particularly when authorship was revealed. Trust in science and the Standing Committee on Vaccination accentuated these differences, while trust in AI and innovativeness did not moderate this effect.
AI-generated avatars in science communication offer potential for conveying complex information. However, highly realistic avatars may evoke discomfort and diminish trust, a key factor in science communication. Drawing on existing research, we conducted an experiment (n = 491) examining how avatar realism and gender impact trustworthiness (expertise, integrity, and benevolence). Our findings show that higher realism enhances trustworthiness, contradicting the Uncanny Valley effect. Gender effects were dimension-specific, with male avatars rated higher in expertise. Familiarity with AI and institutional trust also shaped trust perceptions. These insights inform the design of AI avatars for effective science communication while maintaining public trust.
Based on an ethnography of the development and production of science YouTube videos – a collaboration between a German public broadcaster and social science scholars – we identify three intermediary steps through which recommendation algorithms shape science content on social media. We argue that algorithms induce changes to science content through the power they exert over the content's visibility on social media platforms. Change is driven by how practitioners interpret algorithms, infer content strategies to enhance visibility, and adjust content creation practices accordingly. By unpacking these intermediate steps, we reveal the nuanced mechanisms by which algorithms indirectly shape science content.
We assessed ChatGPT's ability to identify and categorize actors in German news media articles into societal groups. Through three experiments, we evaluated various models and prompting strategies. In experiment 1, we found that providing ChatGPT with codebooks designed for manual content analysis was insufficient. However, combining Named Entity Recognition with an optimized prompt for actor classification (the NERC pipeline) yielded acceptable results. In experiment 2, we compared the performance of gpt-3.5-turbo, gpt-4o, and gpt-4-turbo, with the latter performing best, though challenges remained in classifying nuanced actor categories. In experiment 3, we demonstrated that repeating the classification with the same model produced highly reliable results, even across different release versions.
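The two-step NERC pipeline described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the regex-based actor extraction merely stands in for a proper Named Entity Recognition component, the societal-group labels and prompt wording are invented for the example (the paper's codebook is not reproduced here), and the LLM call is injected as a callable so the sketch runs without API access.

```python
# Hypothetical sketch of a two-step NERC pipeline: (1) extract actor
# mentions, (2) classify each actor into a societal group via an LLM prompt.
import re
from typing import Callable

# Illustrative group labels; the study's actual categories may differ.
GROUPS = ["politics", "science", "business", "civil society", "media", "other"]

def extract_actors(text: str) -> list[str]:
    # Naive stand-in for NER: runs of two or more capitalized words.
    return re.findall(r"\b(?:[A-Z][a-zäöüß]+ )+[A-Z][a-zäöüß]+\b", text)

def build_prompt(actor: str, article: str) -> str:
    # One prompt per extracted actor; wording here is an assumption.
    return (
        f"Classify the actor '{actor}' mentioned in the following German news "
        f"article into exactly one of these societal groups: "
        f"{', '.join(GROUPS)}.\n\nArticle: {article}\n\n"
        "Answer with the group name only."
    )

def nerc_pipeline(article: str, classify: Callable[[str], str]) -> dict[str, str]:
    # `classify` wraps the model call (e.g. an OpenAI chat completion);
    # injecting it keeps the sketch testable without network access.
    return {
        actor: classify(build_prompt(actor, article))
        for actor in extract_actors(article)
    }

# Example with a dummy classifier standing in for the LLM:
result = nerc_pipeline(
    "Olaf Scholz traf Christian Drosten in Berlin.",
    classify=lambda prompt: "politics",
)
```

Injecting the classifier also makes it straightforward to rerun the same articles against different model versions, which is how the reliability check in experiment 3 could be reproduced.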
Most public audiences in Germany receive scientific information via a variety of (digital) media; in these contexts, media act as intermediaries of trust in science by providing information that presents reasons for public audiences to place their trust in science. To describe this process, the study introduces the term “trust cues”. To identify such content-related trust cues, an explorative qualitative content analysis was applied to German journalistic, populist, social, and other (non-journalistic) online media (n = 158). In total, n = 1,329 trust cues were coded. The findings emphasize the diversity of mediated trust, with trust cues being connected to dimensions of trust in science (established: expertise, integrity, benevolence; recently introduced: transparency, dialogue). Through this analysis, the study aims for a better understanding of mediated trust in science, which is crucial because public trust in science underpins individual and collective informed decision-making and crisis management.
The evolving landscape of science communication highlights a shift from traditional dissemination to participatory engagement. This study explores Dutch citizens' perspectives on science communication, focusing on science capital, public engagement, and communication goals. Using a mixed-methods approach, it combines survey data (n = 376) with focus group (n = 66) insights. Findings show increasing public interest in participating in science, though barriers like knowledge gaps persist. Trust-building, engaging adolescents, and integrating science into society were identified as key goals. These insights support the development of the Netherlands' National Centre of Expertise on Science and Society and provide guidance for inclusive, effective science communication practices.
This study explores the subjective relevance and challenges of public engagement with science (PES) among professional university communicators, based on 29 qualitative interviews in one German federal state. Despite recognizing its value, interviewees reveal significant uncertainties about the understanding, objectives, and implementation of PES. They cite barriers such as reliance on scientists and concerns about control. Surprisingly, social media is rarely considered for PES, with online engagement seen as difficult. This research highlights the complexities and challenges of PES in practice, emphasizing opportunities for optimized digital science communication strategies and clearer role structures between communication professionals and researchers to enhance PES.