We assessed ChatGPT's ability to identify and categorize actors in German news media articles into societal groups. Through three experiments, we evaluated various models and prompting strategies. In experiment 1, we found that providing ChatGPT with codebooks designed for manual content analysis was insufficient. However, combining Named Entity Recognition with an optimized prompt for actor classification (the NERC pipeline) yielded acceptable results. In experiment 2, we compared the performance of gpt-3.5-turbo, gpt-4o, and gpt-4-turbo, with the latter performing best, though challenges remained in classifying nuanced actor categories. In experiment 3, we demonstrated that repeating the classification with the same model produced highly reliable results, even across different release versions.
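The two-stage NERC pipeline described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code: the group labels, the prompt wording, and the `llm` callable are all hypothetical stand-ins (in the study, the model would be gpt-4-turbo or similar), and a real run would plug in an actual NER step and API call.

```python
from typing import Callable, Dict, List

# Hypothetical societal-group categories; the study's codebook may differ.
SOCIETAL_GROUPS = ["politics", "science", "civil society", "economy", "media"]

# Hypothetical classification prompt (stage 2 of the NERC pipeline);
# stage 1, Named Entity Recognition, is assumed to have produced `actors`.
PROMPT_TEMPLATE = (
    "Classify the actor '{actor}' from a German news article into exactly "
    "one of these societal groups: {groups}. Answer with the group name only."
)

def classify_actors(actors: List[str], llm: Callable[[str], str]) -> Dict[str, str]:
    """Run the classification prompt for each extracted actor.

    `llm` is any callable that maps a prompt string to a model answer,
    so the pipeline can be tested with a stub instead of a live API.
    """
    results: Dict[str, str] = {}
    for actor in actors:
        prompt = PROMPT_TEMPLATE.format(
            actor=actor, groups=", ".join(SOCIETAL_GROUPS)
        )
        answer = llm(prompt).strip().lower()
        # Guard against answers outside the codebook's categories.
        results[actor] = answer if answer in SOCIETAL_GROUPS else "unclassified"
    return results

# Usage with a stub in place of a real model call:
def stub_llm(prompt: str) -> str:
    return "politics" if "Scholz" in prompt else "science"

print(classify_actors(["Olaf Scholz", "Max-Planck-Gesellschaft"], stub_llm))
# {'Olaf Scholz': 'politics', 'Max-Planck-Gesellschaft': 'science'}
```

Keeping the model call behind a plain callable also makes the experiment-3 reliability check straightforward: the same actor list can be classified repeatedly (or against different model versions) and the resulting dictionaries compared.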
Artificial Intelligence (AI) is profoundly reshaping the field of science communication research. We conducted a literature review of 35 articles published between 2002 and 2024, which reveals that research on AI in science communication is still in its infancy but growing, predominantly concentrated in Western contexts, and methodologically inclined toward quantitative approaches. The field largely focuses on communication about AI and public perceptions of AI rather than analyzing actual engagement with generative AI or its systemic impact on science communication ecosystems. To address these gaps, we propose a research agenda centered on four key areas: (1) communication about AI, (2) communication with AI, (3) the impact of AI on science communication ecosystems, and (4) AI’s influence on science, as well as on theoretical and methodological approaches.
Generative AI tools such as ChatGPT are widely diagnosed as fundamentally impacting different realms of life. This includes science communication, where GenAI tools are becoming important sources of science-related content for many people. This raises the question of whether people trust GenAI as a source in this field, a question that has not yet been sufficiently answered. Adapting a model developed by Roberts et al. [2013] and utilizing survey data from the German Science Barometer 2023, we find that Germans are rather sceptical about and do not strongly trust GenAI in science communication. Structural equation modelling shows that respondents' trust in GenAI as a source in science communication is driven strongly by their general trust in science, which in turn is largely driven by their knowledge about science and the perception that science improves quality of life.