1 Introduction
With the launch of ChatGPT in November 2022, generative artificial intelligence (GenAI) gained worldwide attention. GenAI includes Large Language Models (LLMs) such as GPT, Gemini, PaLM or Mistral, which provide original, human-like responses in textual, visual, auditory or audiovisual form to user prompts, based on large-scale digital training data and supervised learning techniques that involve human feedback. GenAI is widely seen as fundamentally changing contemporary information and communication ecosystems and as profoundly influencing individual, organisational and societal dynamics in different fields, including science communication [Alvarez et al., 2024; Biyela et al., 2024; Schäfer, 2023].
Science communication is the public communication from and about science in which non-experts are a recognized part of the audience [Davies & Horst, 2016; Schäfer et al., 2015]. This includes communication by individual researchers, organisational communication by universities or scientific organisations, science journalism and public debates about prominent or controversial scientific or science-related topics.
Prognoses about how science communication will transform due to GenAI range, as is often the case with emerging technologies, from highly positive to strongly negative [for an overview see Schäfer, 2023]: proponents point out that GenAI can help develop content ideas and produce media releases and journalistic articles [Tatalovic, 2018], and that it can adapt content to individual users and their specific needs [Leßmöllmann, 2019]. And unlike prior technological applications that aided science communication in a more mechanistic sense, such as writing assistants or graphic design tools, GenAI might bring even more fundamental changes, as citizens now have the opportunity to interact with these applications directly [Chen et al., 2024; Hyun Baek & Kim, 2023]. But GenAI in general, and ChatGPT specifically, have also been viewed critically regarding their role in science communication [Schäfer, 2023]. Scholars and pundits have voiced concerns about these tools’ lack of accuracy and biases, the dangers of misuse for pseudo- and anti-scientific purposes, a resulting deluge of science-related information of differing and partly problematic quality, and a potential digital divide between users with different abilities to use GenAI effectively and to discern reliable from less reliable information [Könneker, 2024].
In this study, we focus on GenAI as a source in science communication, and on whether users trust this source. Trust in complex and specialized areas of life is important in contemporary societies in general [Giddens, 1990; Luhmann, 2000], with science, as a specialized expert endeavour, being one of the prime examples [Hendriks et al., 2016; Wintterlin et al., 2022]. As many people do not have direct contact with science, but experience it in a mediated way, via news, public events, social media etc., trust in the mediators of science is important as well [Reif & Guenther, 2021; Schäfer, 2016]. So far, the respective research has mostly analysed trust in news or social media as mediators of science [e.g. Huber et al., 2019; van Dijck & Alinejad, 2020; Weingart & Guenther, 2016].
But people already, and likely increasingly, use GenAI to inform themselves about scientific and science-related issues such as climate change or health [Biswas, 2023; Chen et al., 2024]. In Western countries, a majority of people have heard about GenAI, or at least about ChatGPT as the most prominent tool [e.g. Vogels, 2023], and a considerable and rising number of people are using it to inform themselves about different issues [e.g. McClain, 2024], both professionally and personally [Faverio & Tyson, 2023], and including on science [Fecher et al., 2023; Greussing et al., 2024]. Among German students, for example, a majority already use ChatGPT for science-related queries [von Garrel & Mayer, 2023].
It is therefore crucial to assess the attitudes of citizens towards GenAI, including their trust in it as a source in science communication, and the factors shaping this trust. In this study, we do so by utilizing data from the German Science Barometer 2023, which collected data not only on general attitudes of Germans towards science and research, but also towards “programs such as ChatGPT” as sources of science communication. We aim to answer the following research questions:
RQ1a: What are the attitudes of Germans towards GenAI as a source of science-related information?
RQ1b: How trustworthy do German citizens find GenAI as a source of science-related information?
Scholars from different fields have noted a lack of understanding of how trust in AI is constituted [Liao & S. Sundar, 2022, p. 1257]. Therefore, we also assess the drivers of the trustworthiness of GenAI as a source in science communication, including individual-level attitudes such as generalized trust in science as well as a range of sociodemographic factors as independent variables. Hence, our second research question is:
RQ2: How can Germans’ trust in GenAI as a source in science communication be explained?
Our study contributes to research on science communication in two major ways: on the one hand, we provide early insights, based on data from a nationally representative survey, into attitudes towards the use of GenAI in science communication, an emerging technology that will become more present and influential in science communication and beyond. On the other hand, we employ and test a model explaining trust in specific technologies that is tailored to science communication, and contribute to refining this model so that it might be used for analysing trust in other technologies as well.
2 Conceptual background: attitudes towards and trust in GenAI as a source in science communication
Regarding attitudes towards GenAI in science communication, we could not rely on established survey instruments and scales specific to our topic, as those are not yet available. Therefore, we surveyed the scholarly and public discussions about GenAI in the context of science communication to identify the core arguments brought forward [for an overview see Schäfer, 2023]. In favour of the use of GenAI in science communication, authors have emphasized GenAI’s capability to “explain complicated issues simply” [Hegelbach, 2023], to summarize scientific publications and results [Gravel et al., 2023], to do so highly efficiently [Myklebust, 2023] and to interact with users in a human-like way [Goedecke & Koester, 2023]. Critics have pointed out that GenAI responses may be inaccurate [Gravel et al., 2023] and that users may find it difficult to identify and validate the origin of AI-generated content and its sources [Doctorow, 2023; Sarraju et al., 2023]. They have also emphasized that GenAI can be used to disseminate dis- and misinformation and may give rise to an “AI-driven infodemic” due to the “ability of LLMs to rapidly produce vast amounts of text” [De Angelis et al., 2023, p. 1]. These dimensions have been discussed at length as potential pros and cons of GenAI specifically for science communication, and were included in the study.
As we were interested in trust in GenAI specifically as a source in science communication, we went beyond a generalized definition that understands trust in AI as a user’s attitude that AI will help him or her achieve certain goals [Ueno et al., 2022], and adapted a definition from journalism scholarship, defining trust more specifically as the confidence of users in GenAI and its products to provide accurate, relevant, and balanced information on scientific issues [Fink, 2019; Grosser et al., 2016; cf. Schäfer, 2016]. So far, however, conceptual models explaining trust in GenAI are scarce in (science communication) scholarship [for an overview of models of trust in AI in general see Ueno et al., 2022]. Previous research on GenAI suggests that the possibility to interact with chatbots in a humanlike fashion [Hyun Baek & Kim, 2023; P. Hu et al., 2021] as well as their humanlike appearance [Cheng et al., 2022] may positively influence trust in these chatbots, and that trust in GenAI tends to be higher among people who have prior experience with such tools than among those who do not [Amoozadeh et al., 2023]. And while widely used approaches like the Technology Acceptance Model (TAM) have been applied to ChatGPT in fields like education [Saif et al., 2024] or tourism [Solomovich & Abraham, 2024], they have been used to explain adoption or use of the tool and do not consider trust as a dependent variable [for a discussion of trust in the context of the TAM, albeit in another field, see Venkatesh & Bala, 2008].
To explain trust in GenAI as a source in science communication, we therefore used and adapted a conceptual model developed by Roberts et al. [2013], which integrates science communication scholarship, research on public perceptions of and attitudes towards science, and studies on technology acceptance. We chose this model as it specifically relates to science communication and is situated within the respective research. Roberts et al.’s model incorporates several factors that have been shown to affect people’s attitudes towards and trust in specific technologies (see Figure 1):
- Sociodemographic variables: Roberts et al. [2013] include gender, age, level of education, income and city size in their model. Studies indicate that these factors can also influence attitudes towards and trust in GenAI, even though effect sizes and directions vary: Schepman and Rodway [2023] showed that men and younger people were more likely to have positive attitudes towards GenAI. Amoozadeh et al. [2023] found that 47% of university students expressed trust in GenAI, and that both gender and level of education were relevant predictors of that trust. Similarly, Zhang and Dafoe [2019] showed that respondents with lower levels of formal education were significantly less enthusiastic about the development of AI. Generally, Sindermann et al. [2020] pointed out that results regarding gender differences in attitudes towards technologies are often ambivalent, which should also be taken into account when assessing attitudes towards GenAI.
Roberts et al. [2013] assume that these factors are linked to people’s self-perceived knowledge about science, their assessment of science’s impact on their own quality of life, as well as their personal attachment to science:
- Self-perceived knowledge about science: Roberts et al. [2013] operationalize this factor with questions on specific scientific topics or incidents (“Please indicate how scientifically informed you are about each of the following topics:”, followed by “Mad cow disease”, “Climate change” etc.). This connects to a long history of science communication research emphasizing the importance of knowledge and scientific literacy, while taking into account that actual and self-assessed knowledge about science correlate only moderately [cf. Klerck & Sweeney, 2007; Mede et al., 2024]. Studies on the relation between knowledge and trust in technologies have produced mixed results: Roberts et al. [2013] show that respondents’ perceived knowledge is positively associated with general trust in science and technology, but that this association disappears when general attitudes towards science are taken into account. Wintterlin et al. [2022] argue conceptually that citizens need to trust science due to their “bounded understanding of science” and scientific work (pp. 1–2), but find empirically that a basic understanding of and orientations toward science are the strongest positive predictors of trust in science.
- Quality of life attitudes: Roberts and colleagues [2013] assume that people who perceive that science improves their own quality of life will also trust science and, subsequently, science-based technologies. Studies have indeed shown that trust in science is positively associated with favourable attitudes towards science, positive perceptions of its impact on one’s own life as well as society more broadly, and public “beliefs in the promises of science” more generally [e.g. Bromme et al., 2022; Miller et al., 1997; Wintterlin et al., 2022].
- Personal attachment to science: Roberts and colleagues [2013] use a set of items relating to respondents’ personal experiences with science, assuming that personal attachment to science is linked to higher trust in science in general and more trust in specific technologies in particular. Empirical studies indeed show that people with greater proximity to science (for example in the form of a perceived tangibility of science and relevance for themselves) are less sceptical about science [Većkalov et al., 2024]. However, it has also been shown that people feel differently about science in general and school science in particular [Hashimoto & Karasawa, 2014].
Roberts et al. [2013] assume further that these concepts affect general trust in science and hypothesize a reciprocal relationship between this general trust in science and trust in a specific technology:
- Trust in science: Roberts et al. assume that general trust in science lays the foundation for people’s attitudes towards and trust in specific technologies: when individuals have a strong belief in the scientific method and the integrity of the scientific community, they are more likely to trust technologies that are endorsed or developed through scientific research. This is in line with previous research [Dixson et al., 2022; Većkalov et al., 2024]. Accordingly, they hypothesize a positive relation between general trust in science and trust in a specific technology.
- Trust in Specific Technologies / GenAI: Roberts et al. [2013] assume that these factors influence trust in specific technologies. For our study, we specify this dimension in two ways: on the one hand, we apply the model to trust in GenAI specifically, re-labeling this dimension accordingly. On the other hand, we focus on trust in GenAI as a source in science communication, i.e. its capacity to provide accurate, relevant, and balanced information on scientific issues, and measure it accordingly.
3 Data and method
3.1 Data
We use data from the German Science Barometer 2023, an annual, nationally representative telephone survey of the German population aged 14 years and older. Germany is an interesting case for several reasons: first, it is a highly developed country with strong and diversified science and higher education sectors that are largely publicly funded and rely on public support and legitimation, making science communication relevant [Bonfadelli et al., 2017]. Second, GenAI is used by a considerable proportion of the population, similar to other Western countries [Schlude et al., 2023]. Third, the country’s media system [Hallin & Mancini, 2016; Hallin, 2020], social media landscape [Humprecht et al., 2022] and academic system [Hölscher, 2016] have pronounced similarities to other (continental) European countries, making the results relevant beyond the German case. Fourth, Germany is less often analyzed in science communication research compared to English-speaking countries like the U.S. or the U.K. [Guenther & Joubert, 2017].
The Science Barometer assesses people’s attitudes towards science and research, such as interest in science, perceptions of its risks and benefits, or general trust in science. Every year, the survey has a topical focus. In 2023, this focus was on GenAI in science communication: respondents were asked to assess “programs such as ChatGPT” as a source of science-related information and to rate their trustworthiness (see www.sciencebarometer.com for additional information).
Data were collected between August 22 and 24, 2023, with telephone interviews (80% landline, 20% mobile) in Germany conducted by a major market research company. The sample contained 1,037 respondents (age: M = 51.96, SD = 20.27; gender: 52.5% female; education: 35.1% post-secondary). We applied post-hoc weighting to obtain descriptive estimates that are representative for federal state, size of city, gender, age, occupation, formal education, and household size. The SEM analysis used unweighted data. The SEM analyses can be reproduced with the data and R code we share publicly through the Open Science Framework at: https://osf.io/kj98e.
3.2 Measures
To operationalize Roberts et al.’s [2013] model, we used variables from the standard Science Barometer questionnaire as well as from the topical focus on GenAI. Even though these variables were not specifically designed to test Roberts et al.’s conceptual model, they allowed us to operationalize the latent constructs Self-Perceived Knowledge, Quality of Life, and Personal Attachment to Science. While Roberts et al.’s operationalization of the Quality of Life construct focused on benefits of science and technology for oneself, we also used indicators for perceived benefits for society, so as to achieve a more comprehensive operationalization of this construct [Miller et al., 1997].
Minor adjustments to the latent construct Trust in Science and Technology were necessary, as the Science Barometer only captures trust in science and research as well as in researchers, but not in technology. Therefore, we named the respective latent construct Trust in Science and Research. Furthermore, questions assessing perceptions of and trust in GenAI for reproducing scientific content were included for the first time in the 2023 survey, enabling the operationalization of the latent construct Trust in GenAI.
Responses from participants were captured using five-point Likert scales for most items. The only exceptions were questions regarding proximity to science, where respondents were initially asked whether they currently work, have previously worked, or have never worked in the field of science and research. Respondents who had never worked in science and research were asked a series of follow-up questions (other professional involvement in science and research; personal acquaintance with researchers; friends or family members studying or having studied at a university), which could be answered with yes or no. From this item set, a score for proximity to science was computed, where individuals responding “no” to all these questions were coded as 1, and those working in science and research were coded as 6 (see Table 1).
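To illustrate this coding scheme, the following minimal R sketch shows one way such a score could be computed. The function and variable names, as well as the coding of the intermediate levels (e.g. for respondents who previously worked in science), are our illustrative assumptions rather than the published coding, which is documented in the materials shared via OSF.

```r
# Illustrative sketch of a proximity-to-science score (1-6); names and the
# coding of intermediate levels are assumptions for illustration only
proximity_score <- function(work_in_science,        # "currently", "previously" or "never"
                            other_involvement,      # other professional involvement (TRUE/FALSE)
                            knows_researchers,      # personally acquainted with researchers
                            family_at_university) { # friends/family (have) studied at a university
  if (work_in_science == "currently") return(6)
  if (work_in_science == "previously") return(5)    # assumed intermediate level
  # Respondents who never worked in science: count affirmative follow-up
  # answers, so that "no" to all questions yields the minimum score of 1
  1 + sum(other_involvement, knows_researchers, family_at_university)
}

proximity_score("never", FALSE, FALSE, FALSE)  # 1: no contact with science
proximity_score("never", TRUE,  TRUE,  FALSE)  # 3: some personal proximity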
3.3 Analysis
Following Roberts et al. [2013], we applied structural equation modelling (SEM) to investigate how trust in GenAI — operationalized as “programs such as ChatGPT” — as a source in science communication is affected by broader science-related attitudes and sociodemographic characteristics of the German population, and explore the association between generalized trust in science and trust in GenAI. SEM tests hypothesized relationships between measured variables and latent constructs [Schweizer & DiStefano, 2016].
In doing so, we replicate the SEM used by Roberts et al. [2013] in the context of GenAI in science communication. Roberts et al. [2013] included five latent constructs in their model: (1) Self-Perceived Knowledge, (2) Quality of Life perceptions, (3) Personal Attachment to Science, (4) Trust in Science and Technology, and (5) Trust in Specific Technologies, which we adapt as Trust in GenAI. They assumed a causal influence of (1–3) on (4) as well as effects of (4) on (5), and vice versa. Additionally, they tested how (1–3) are affected by five observed variables: gender, age, level of education, level of income and city size. Table 1 shows how we operationalized the five latent constructs based on the Science Barometer 2023 data, which also included measures for the five sociodemographic variables, i.e. gender (binary, 1 = female), age (continuous), education (binary, 1 = post-secondary), net household income (4 levels), city size (7 levels). All variables were scaled for the analyses.
Taking the original operationalization of the five latent constructs by Roberts et al. into account (see section 2), we selected questions from the Science Barometer to operationalize these latent constructs as well, albeit with some adaptations (see Table 1):
- Self-Perceived Knowledge: as GenAI can serve as a source of knowledge on any type of scientific question, we condensed this construct into a general question about how much participants believe they know about science and research.
- Quality of Life: Roberts et al. [2013] used a set of variables focusing on personal benefits of science for one’s own life in the geographical context of their study. Adapting their model to our context, we used variables relating to participants’ perceptions of the benefits of science for their personal life as well as their beliefs in the benefit of science for society in general, which has been shown to be an important component of quality of life attitudes in the context of science [see Miller et al., 1997].
- Personal Attachment to Science: Roberts et al. narrow this dimension down to respondents’ research-related experiences. We broaden its scope and include items measuring personal proximity to science more generally, as the conceptual model also addresses science in general in its other dimensions.
- Trust in Science and Research: Roberts et al. use a series of variables to operationalize respondents’ trust in science and technology in general. We adapted these items to the context of science communication, where general trust in science plays a key role: the increasing availability and complexity of scientific information make it indispensable for citizens to trust the scientists who generate and report scientific findings [Hendriks et al., 2016]. In accordance with the concept of epistemic trust [Hendriks et al., 2015], we include a set of items on reasons to trust or distrust scientists, capturing the three dimensions inherent in this concept (expertise, integrity and benevolence), as well as a general item on trust in science.
- Trust in GenAI: while Roberts et al. assessed trust in a range of technologies (like biotechnology), we are interested in trust in GenAI as a source in science communication. Therefore, we relied on items developed for the 2023 Science Barometer survey concerning trust in programs such as ChatGPT for reproducing scientific content, as well as assessments of potential benefits and risks of such technologies in this context.
For the dimensions interest in science and research, trust in science, and the assessment of positive and negative aspects of GenAI, we created indices for the SEM. The reason for this is twofold: from a conceptual perspective, the latent construct Trust in Science and Research, for example, can be understood as being composed of three sub-constructs: global trust in science, reasons for trust, and reasons for distrust. If the latter two sub-constructs were broken down into their individual variables, the individual trust and distrust reasons would have the same weight as the sub-construct of global trust in science, which we did not find meaningful. From a methodological perspective, a model with fewer indicators is also likely to result in a more robust solution, as it is more parsimonious and requires the estimation of fewer free parameters.
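As a brief illustration of this index-based approach, the sketch below builds mean indices from individual items; the item names and toy data are hypothetical placeholders, and averaging across items is one plausible aggregation rule, not necessarily the exact one used.

```r
# Illustrative index construction; item names and toy data are placeholders,
# and averaging across items is an assumed aggregation rule
d <- data.frame(reason_expertise = c(4, 5), reason_integrity = c(3, 4),
                reason_benevolence = c(4, 4),
                ai_pos_simplify = c(4, 2), ai_pos_examples = c(5, 3),
                ai_neg_misinfo = c(4, 5), ai_neg_sources = c(5, 5))

d$trust_reasons_idx <- rowMeans(d[, c("reason_expertise", "reason_integrity",
                                      "reason_benevolence")], na.rm = TRUE)
d$ai_positive_idx   <- rowMeans(d[, c("ai_pos_simplify", "ai_pos_examples")], na.rm = TRUE)
d$ai_negative_idx   <- rowMeans(d[, c("ai_neg_misinfo", "ai_neg_sources")], na.rm = TRUE)
```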
We fitted the SEM with the R package lavaan v0.6-17 (R version 4.3.3), using maximum likelihood estimation with robust standard errors and a Satorra-Bentler scaled test statistic [Rosseel, 2012]. The model syntax replicated Roberts et al.’s [2013] approach precisely, with one minor exception: to avoid Heywood cases due to a negative estimated variance (for perceived positive aspects of programs like ChatGPT), we constrained this variance to range between 0 and 0.5 [see Schweizer & DiStefano, 2016]. We also fitted alternative models: first, we tested a model that did not include the individual indicator variables (see Table 1) but used their mean values to measure the five latent constructs, thereby treating them as exogenous. Second, we fitted this reduced model and the original model using survey weights. All alternative models had clearly worse fit. Moreover, whether and how to account for weights in SEM is a matter of debate [Bollen et al., 2013], and arguably dispensable here, as we are interested in assessing patterns of correlations and effect sizes rather than nationally representative point estimates. Therefore, we discarded the alternative model versions and report results of the original model used by Roberts et al. [2013].
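For readers who want to relate this description to the shared analysis code, the following condensed lavaan sketch shows the general shape of such a model and the estimation settings described above (in lavaan, maximum likelihood with robust standard errors and a Satorra-Bentler scaled test statistic corresponds to estimator = "MLM"). All indicator and variable names are placeholders standing in for the Table 1 measures; the exact specification is available in the OSF repository (https://osf.io/kj98e).

```r
library(lavaan)

# Condensed model sketch; indicator names are placeholders for the Table 1 items
model <- '
  # Measurement part: five latent constructs
  knowledge  =~ know_sci                         # single indicator
  know_sci   ~~ 0*know_sci                       # fix residual for identification
  qol        =~ benefit_self + benefit_society
  attachment =~ proximity_score + interest_idx
  trust_sci  =~ trust_general + trust_reasons_idx + distrust_reasons_idx
  trust_ai   =~ trust_chatgpt + ai_positive_idx + ai_negative_idx

  # Structural part: sociodemographics -> knowledge, quality of life, attachment
  knowledge  ~ gender + age + education + income + city_size
  qol        ~ gender + age + education + income + city_size
  attachment ~ gender + age + education + income + city_size

  # (1-3) -> general trust in science, plus the reciprocal relation
  # between trust in science and trust in GenAI
  trust_sci ~ knowledge + qol + attachment + trust_ai
  trust_ai  ~ trust_sci

  # Bound the residual variance of the positive-aspects index (Heywood case)
  ai_positive_idx ~~ v*ai_positive_idx
  v > 0
  v < 0.5
'

# "d" stands for the unweighted, scaled survey data frame (placeholder)
fit <- sem(model, data = d, estimator = "MLM")   # robust SEs, Satorra-Bentler test
fitMeasures(fit, c("chisq.scaled", "df", "pvalue.scaled",
                   "cfi.scaled", "tli.scaled", "rmsea.scaled", "srmr"))
```

The final fitMeasures() call retrieves the global fit statistics of the kind reported in the next paragraph.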
The SEM showed an acceptable global fit (χ² = 559.5, df = 111, p < 0.001; n = 677). However, fit indices were only mediocre (RMSEA = 0.077 [90% CI: 0.071–0.083], SRMR = 0.085), with two criteria falling below established cut-offs (CFI = 0.759; TLI = 0.689) [see L.-T. Hu & Bentler, 1999]. This might indicate that some variables included in the model are less useful for explaining trust in GenAI and trust in science in general.
4 Results
4.1 How Germans assess GenAI as a source of science-related content
Overall, as of August 2023, Germans were cautious when assessing GenAI as a source of science-related content. On the one hand, when asked to evaluate the possibilities of “programs like ChatGPT” in science communication, half of the respondents (53%) “agreed” or “completely agreed” that the ability to explain complex scientific content in a highly simplified way is a positive aspect of GenAI (see Figure 2). Similarly, 50% of respondents stated that the capability of GenAI to provide examples and answer questions when there are uncertainties about scientific subjects is positive. Between 23% and 29%, however, did not rate these aspects positively. On the other hand, only one-third of respondents viewed GenAI’s ability to engage users in science-related conversations akin to human interaction and to generate texts in the style of scientific papers in a very short time as something positive. In turn, between 38% and 45% did not perceive these aspects positively. On average, only 43% of respondents viewed the surveyed capabilities of “programs like ChatGPT” in science communication as positive, with approximately one-third of Germans being sceptical.
This scepticism was also evident regarding concerns about GenAI as a source in science communication. Between 62% and 66% of respondents shared concerns about GenAI’s potential to disseminate misinformation, its lack of transparency in content reproduction, and inadequate source verification. Conversely, only between 15% and 19% did not find these aspects concerning. On average, a clear majority of almost two thirds (63%) of the German population agreed with the concerns about GenAI specified in the survey.
These concerns and the overall rather critical assessment of GenAI as a source of science communication correspond with the low level of trust Germans have in these tools as sources in science communication: when asked about their trust in programs like ChatGPT for reproducing scientific content, only 17% of respondents expressed trust or complete trust. More than one-third of respondents (36%) remained undecided, and almost half of the population (46%) indicated that they did not trust GenAI in this context.
Notably, this low trust in GenAI as a source in science communication is found across nearly all population groups, as closer examination of differences along sociodemographic characteristics reveals (see Figure 3). We found only minor disparities regarding gender (with men trusting, but also distrusting GenAI more and women being more often undecided), levels of formal education (respondents with moderate levels of formal education show slightly higher trust in GenAI, for example), income groups (with individuals with lower household net incomes trusting GenAI more) and respondents from differently sized cities. Most notable is the disparity between age groups: among respondents aged 14 to 29, 46% trust GenAI as a source in science communication. Among those over 30, only between 10% and 17% do.
4.2 Factors affecting trust in science and generative AI: SEM results
Using SEM (see Figure 4), we found several significant and conceptually plausible relations between the latent and exogenous variables: for example, people who are male and have post-secondary education reported slightly higher self-perceived knowledge about science. Moreover, younger respondents with post-secondary education and higher income reported stronger quality of life attitudes with regard to science, as well as stronger personal attachment to science. Plausibly, the effects of education on quality of life attitudes and personal attachment to science were relatively strong, whereas the effects of age, for example, were less pronounced. City size, however, showed no significant effects on any of the endogenous variables considered.
Respondents’ assessments of whether science improves their quality of life were strong predictors of general trust in science, consistent with Roberts et al. [2013]. Additionally, self-perceived knowledge showed a slight positive effect on general trust in science (p < .01). Feeling informed about science and having confidence in scientists’ expertise, benevolence and integrity were thus only marginally associated with each other. However, personal attachment to science did not show a significant effect on trust in our model.
As expected, and consistent with Roberts et al. [2013] as well, general trust in science was a positive and comparatively strong predictor of respondents’ trust in GenAI in science communication. The reverse, however, was not the case. This suggests that trust in science serves as a gateway to trust in GenAI as a source in science communication, yet trust in GenAI does not benefit trust in science in general.
While these findings correspond to Roberts et al. [2013], we also found some differences. In contrast to our findings, Roberts et al. found no significant effect of perceived knowledge on trust in science, but a clear effect of personal attachment to science. Differences in the influence of sociodemographic characteristics are also visible: for example, Roberts et al. [2013] found a significant effect of gender on perceived knowledge, quality of life attitudes and personal attachment to science, whereas level of income showed no significant effects.
5 Discussion and conclusion
GenAI has become an important source of information in many fields of life, including science and science communication. With ever more people using GenAI, it is relevant to assess users’ attitudes towards and trust in GenAI as a source in science communication. We did so by employing a secondary analysis of survey data from the German Science Barometer.
In general, we see that German citizens are rather cautious in embracing GenAI as a source in science communication. In line with prior scholarship, they appreciate that GenAI presents scientific content in a simplified way [Leßmöllmann, 2019]. However, while some scholars regard it as a great advantage that citizens can interact with GenAI directly and in a humanlike fashion [Chen et al., 2024], German citizens do not seem to consider this a benefit of GenAI in science communication. The potential negative implications of the use of GenAI in science communication, on the other hand, such as a lack of accuracy and biases and the dangers of misuse for pseudo- and anti-scientific purposes [Könneker, 2024], are also strongly perceived as dangers by the German public.
This considerable scepticism among Germans towards GenAI as a source of science-related content is intriguing against the backdrop of Germans’ relatively high general trust in science and research: in 2023, 56% of Germans reported trusting science somewhat or completely, 31% stated that they were undecided, and only 13% reported not trusting science. This stands in stark contrast to the 46% of Germans stating that they do not trust “programs such as ChatGPT” as sources in science communication, i.e. for reproducing scientific content, and the mere 17% indicating that they trust GenAI.
The multivariate analysis, guided by Roberts et al.’s [2013] explanatory model, provides more insights into the drivers behind Germans’ (lack of) trust in GenAI as a source of science communication. SEM results reveal that respondents’ general attitudes towards science are important for their uptake of GenAI as well. They show that, first, despite the differences between general trust in science and trust in GenAI, general trust in science still influences trust in specific technologies, including GenAI, with higher trust in science being linked to higher trust in GenAI in science communication. They also show, second, that attitudes towards science’s impact on individuals’ quality of life — i.e. perceptions of risks and benefits — are shaping trust in science and, by extension, in specific technologies such as GenAI. And they show, third, that these attitudes are significantly influenced by age (with younger respondents being more trusting), educational level, and income, suggesting a complex interplay of demographic factors in the formation of attitudes towards and trust in GenAI.
While our explanatory findings are largely in line with Roberts et al. [2013], our study diverges in identifying education, rather than gender, as a crucial factor influencing quality of life perceptions. Our study also shows a number of other notable differences to Roberts et al. [2013], especially regarding the impact of perceived knowledge about science on trust in science: unlike Roberts et al., we found a significant effect of perceived knowledge on trust, indicating the nuanced role of knowledge and attitudes in trust formation. Furthermore, our findings diverge concerning the influence of personal attachment to science, which we did not find to be a significant driver, contrary to Roberts et al. [2013]. These differences may stem from the different contexts in which the two studies were conducted, but also from the slightly different variables used in our model compared to Roberts et al.: for example, perceived knowledge was assessed with one generalized question in our study instead of a more detailed quiz.
Generally, our study has a number of limitations that should be remedied in future research. First, the discrepancies observed between our findings and those of Roberts et al. underscore the complexity of trust in science and technology, highlighting the need for continued exploration of the underlying factors and mechanisms. Such explorations should take differences in the trustworthiness of different GenAI tools into account, which come with different degrees of transparency, for example [Arnold et al., 2019; Liao & S. Sundar, 2022]. Second, the conceptual model used here could be expanded, for example to include factors emphasized in models of technology acceptance, such as (perceived) ease of use or people’s familiarity and prior experiences with a given technology [e.g. Marangunić & Granić, 2015]. Third, not all variables could be measured in as much detail as desirable; as a secondary analysis, our study had to rely on questions embedded in the Science Barometer. Future studies should try to remedy these shortcomings and stay abreast of developments in this crucial field for the future of science communication.
Overall, our study showed that the conceptual model proposed by Roberts et al. is useful for identifying drivers of attitudes towards GenAI as well, and can be applied in further studies to measure trust in different technologies from a science communication perspective. Future studies will show whether the influence of different variables varies according to the technology in question: while education and age had a large influence in the case of GenAI, gender or city size might prove influential in other cases.
Empirically, we demonstrated that trust in GenAI in science communication is still limited among German citizens. This insight has a number of more practical implications: first, it should inform how science communicators, science journalists and others in science communication approach and use GenAI. Efforts are necessary to improve the tools themselves when it comes to science-related issues [Vaghefi et al., 2023], but also to improve users’ ability to competently assess and use these tools, i.e. to further their GenAI-related literacy [Ng et al., 2021; Schäfer, 2023]. Second, it shows that if scientists, science communicators or science journalists use GenAI in science communication, signalling this use is important yet difficult. Communicators may be faced with the dilemma that users do not trust AI-generated content much, but still expect transparency, i.e. want to know when GenAI has been used [e.g. Diakopoulos et al., 2024; Vogler et al., 2023]. That being said, we want to emphasise that our results are descriptive rather than prescriptive. There needs to be an ongoing discussion within science communication research as well as practice about the use of GenAI, its applications and limits, and whether trust in these technologies should be actively promoted. We contribute to this debate by assessing current levels of trust and explanatory factors in the German case.
Acknowledgments
The collection and analysis of these data were made possible through the funding and support of the German Science Barometer 2023 by the Carl-Zeiss-Stiftung and the Fraunhofer-Gesellschaft.
References
Alvarez, A., Caliskan, A., Crockett, M. J., Ho, S. S., Messeri, L., & West, J. (2024). Science communication with generative AI. Nature Human Behaviour, 8, 625–627. https://doi.org/10.1038/s41562-024-01846-3
Amoozadeh, M., Daniels, D., Nam, D., Chen, S., Hilton, M., Ragavan, S. S., & Alipour, M. A. (2023). Trust in generative AI among students: an exploratory study. https://doi.org/10.48550/arXiv.2310.04631
Arnold, M., Bellamy, R. K. E., Hind, M., Houde, S., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K. N., Olteanu, A., Piorkowski, D., Reimer, D., Richards, J., Tsay, J., & Varshney, K. R. (2019). FactSheets: increasing trust in AI services through supplier’s declarations of conformity. IBM Journal of Research and Development, 63, 6:1–6:13. https://doi.org/10.1147/jrd.2019.2942288
Biswas, S. S. (2023). Role of ChatGPT in public health. Annals of Biomedical Engineering, 51, 868–869. https://doi.org/10.1007/s10439-023-03172-7
Biyela, S., Dihal, K., Gero, K. I., Ippolito, D., Menczer, F., Schäfer, M. S., & Yokoyama, H. M. (2024). Generative AI and science communication in the physical sciences. Nature Reviews Physics, 6, 162–165. https://doi.org/10.1038/s42254-024-00691-7
Bollen, K. A., Tueller, S. J., & Oberski, D. (2013). Issues in the structural equation modeling of complex survey data. Proceedings 59th ISI World Statistics Congress.
Bonfadelli, H., Fähnrich, B., Lüthje, C., Milde, J., Rhomberg, M., & Schäfer, M. S. (2017). Forschungsfeld Wissenschaftskommunikation. Springer. https://doi.org/10.1007/978-3-658-12898-2
Bromme, R., Mede, N. G., Thomm, E., Kremer, B., & Ziegler, R. (2022). An anchor in troubled times: trust in science before and within the COVID-19 pandemic (A. Gesser-Edelsburg, Ed.). PLOS ONE, 17, e0262823. https://doi.org/10.1371/journal.pone.0262823
Chen, K., Shao, A., Burapacheep, J., & Li, Y. (2024). Conversational AI and equity through assessing GPT-3’s communication with diverse social groups on contentious topics. Scientific Reports, 14. https://doi.org/10.1038/s41598-024-51969-w
Cheng, X., Zhang, X., Cohen, J., & Mou, J. (2022). Human vs. AI: understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Information Processing & Management, 59, 102940. https://doi.org/10.1016/j.ipm.2022.102940
Davies, S. R., & Horst, M. (2016). Science communication: culture, identity and citizenship. Springer. https://doi.org/10.1057/978-1-137-50366-4
De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4352931
Diakopoulos, N., Cools, H., Li, C., Helberger, N., Kung, E., & Rinehart, A. (2024). Generative AI in journalism: the evolution of newswork and ethics in a generative information ecosystem. https://doi.org/10.13140/RG.2.2.31540.05765
Dixson, H. G. W., Komugabe-Dixson, A. F., Medvecky, F., Balanovic, J., Thygesen, H., & MacDonald, E. A. (2022). Trust in science and scientists: effects of social attitudes and motivations on views regarding climate change, vaccines and gene drive technology. Journal of Trust Research, 12, 179–203. https://doi.org/10.1080/21515581.2022.2155658
Doctorow, C. (2023). Google’s chatbot panic. Pluralistic. Retrieved April 4, 2023, from https://pluralistic.net/2023/02/16/tweedledumber/
Faverio, M., & Tyson, A. (2023). What the data says about Americans’ views of artificial intelligence. Pew Research Center. https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/
Fecher, B., Hebing, M., Laufer, M., Pohle, J., & Sofsky, F. (2023). Friend or foe? Exploring the implications of large language models on the science system. AI & Society. https://doi.org/10.1007/s00146-023-01791-1
Fink, K. (2019). The biggest challenge facing journalism: a lack of trust. Journalism, 20, 40–43. https://doi.org/10.1177/1464884918807069
Giddens, A. (1990). The consequences of modernity. Stanford University Press.
Goedecke, C., & Koester, V. (2023). Chatting with ChatGPT. ChemViews. https://doi.org/10.1002/chemv.202300001
Gravel, J., D’Amours-Gravel, M., & Osmanlliu, E. (2023). Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. https://doi.org/10.1101/2023.03.16.23286914
Greussing, E., Guenther, L., Baram-Tsabari, A., Dabran-Zivan, S., Jonas, E., Klein-Avraham, I., Taddicken, M., Beets, B., Brossard, D., Chakraborty, A., Agergaard, T. E., & Song, H. J. (2024). Predicting and describing the use of generative AI in science-related information search: insights from a multinational survey. Presentation at AISCICOMM24 conference.
Grosser, K. M., Hase, V., & Blöbaum, B. (2016). Trust in online journalism. In Trust and communication in a digitized world (pp. 53–73). Springer International Publishing. https://doi.org/10.1007/978-3-319-28059-2_3
Guenther, L., & Joubert, M. (2017). Science communication as a field of research: identifying trends, challenges and gaps by analysing research papers. JCOM, 16, A02. https://doi.org/10.22323/2.16020202
Hallin, D. C. (2020). Comparative media studies in the digital age — comparative research, system change and the complexity of media systems. International Journal of Communication, 14, 5775–5786. https://ijoc.org/index.php/ijoc/article/view/14550
Hallin, D. C., & Mancini, P. (2016). Ten years after comparing media systems: what have we learned? Political Communication, 34, 155–171. https://doi.org/10.1080/10584609.2016.1233158
Hashimoto, T., & Karasawa, K. (2014). Science, so close and yet so far away: how people view science, science subjects and scientists. In Recent advances in natural computing: selected results from the IWNC 7 symposium (pp. 57–67). Springer. https://doi.org/10.1007/978-4-431-55105-8_4
Hegelbach, S. (2023). ChatGPT opened our eyes. DIZH. https://dizh.ch/en/2023/03/20/chatgpt-opened-our-eyes/
Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring laypeople’s trust in experts in a digital age: the Muenster Epistemic Trustworthiness Inventory (METI) (J. M. Wicherts, Ed.). PLOS ONE, 10, e0139309. https://doi.org/10.1371/journal.pone.0139309
Hendriks, F., Kienhues, D., & Bromme, R. (2016). Trust in science and the science of trust. In Trust and communication in a digitized world (pp. 143–159). Springer International Publishing. https://doi.org/10.1007/978-3-319-28059-2_8
Hölscher, M. (2016). Spielarten des akademischen Kapitalismus: Hochschulsysteme im internationalen Vergleich. Springer Fachmedien. https://doi.org/10.1007/978-3-658-10962-2
Hu, L.-T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6, 1–55. https://doi.org/10.1080/10705519909540118
Hu, P., Lu, Y., & Gong, Y. (2021). Dual humanness and trust in conversational AI: a person-centered approach. Computers in Human Behavior, 119, 106727. https://doi.org/10.1016/j.chb.2021.106727
Huber, B., Barnidge, M., Gil de Zúñiga, H., & Liu, J. (2019). Fostering public trust in science: the role of social media. Public Understanding of Science, 28, 759–777. https://doi.org/10.1177/0963662519869097
Humprecht, E., Castro Herrero, L., Blassnig, S., Brüggemann, M., & Engesser, S. (2022). Media systems in the digital age: an empirical comparison of 30 countries. Journal of Communication, 72, 145–164. https://doi.org/10.1093/joc/jqab054
Hyun Baek, T., & Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telematics and Informatics, 83, 102030. https://doi.org/10.1016/j.tele.2023.102030
Klerck, D., & Sweeney, J. C. (2007). The effect of knowledge types on consumer-perceived risk and adoption of genetically modified foods. Psychology & Marketing, 24, 171–193. https://doi.org/10.1002/mar.20157
Könneker, C. (2024). The challenge of science communication in the age of AI. Stanford Social Innovation Review. https://ssir.org/articles/entry/science-communication-artificial-intelligence
Leßmöllmann, A. (2019). Current trends and future visions of (research on) science communication. In Science communication (pp. 657–688). De Gruyter. https://doi.org/10.1515/9783110255522-031
Liao, Q. V., & Sundar, S. S. (2022). Designing for responsible trust in AI systems: a communication perspective. FAccT ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability and Transparency, 1257–1268. https://doi.org/10.1145/3531146
Luhmann, N. (2000). Vertrauen: Ein Mechanismus der Reduktion sozialer Komplexität. UTB / Lucius & Lucius.
Marangunić, N., & Granić, A. (2015). Technology acceptance model: a literature review from 1986 to 2013. Universal Access in the Information Society, 14, 81–95. https://doi.org/10.1007/s10209-014-0348-1
McClain, C. (2024). Americans’ use of ChatGPT is ticking up, but few trust its election information. Pew Research Center. https://www.pewresearch.org/short-reads/2024/03/26/americans-use-of-chatgpt-is-ticking-up-but-few-trust-its-election-information
Mede, N. G., Rauchfleisch, A., Metag, J., & Schäfer, M. S. (2024). The interplay of knowledge overestimation, social media use and populist ideas: cross-sectional and experimental evidence from Germany and Taiwan. Communication Research. https://doi.org/10.1177/00936502241230203
Miller, J. D., Pardo, R., & Niwa, F. (1997). Public perceptions of science and technology: a comparative study of the European Union, the United States, Japan and Canada. Fundación BBV.
Myklebust, J. P. (2023). Universities adjust to ChatGPT, but the ‘real AI’ lies ahead. University World News. Retrieved April 4, 2023, from https://www.universityworldnews.com/post.php?story=20230301105802395
Ng, D. T. K., Leung, J. K. L., Chu, K. W. S., & Qiao, M. S. (2021). AI literacy: definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58, 504–509.
Reif, A., & Guenther, L. (2021). How representative surveys measure public (dis)trust in science: a systematisation and analysis of survey items and open-ended questions. Journal of Trust Research, 11, 94–118. https://doi.org/10.1080/21515581.2022.2075373
Roberts, M. R., Reid, G., Schroeder, M., & Norris, S. P. (2013). Causal or spurious? The relationship of knowledge and attitudes to trust in science and technology. Public Understanding of Science, 22, 624–641. https://doi.org/10.1177/0963662511420511
Rosseel, Y. (2012). lavaan: an R package for structural equation modeling. Journal of Statistical Software, 48, 1–36. https://doi.org/10.18637/jss.v048.i02
Saif, N., Khan, S. U., Shaheen, I., Alotaibi, F. A., Alnfiai, M. M., & Arif, M. (2024). Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Computers in Human Behavior, 154, 108097. https://doi.org/10.1016/j.chb.2023.108097
Sarraju, A., Bruemmer, D., Van Iterson, E., Cho, L., Rodriguez, F., & Laffin, L. (2023). Appropriateness of cardiovascular disease prevention recommendations obtained from a popular online chat-based artificial intelligence model. JAMA, 329, 842–844. https://doi.org/10.1001/jama.2023.1044
Schäfer, M. S. (2016). Mediated trust in science: concept, measurement and perspectives for the ‘science of science communication’. JCOM, 15, C02. https://doi.org/10.22323/2.15050302
Schäfer, M. S. (2023). The notorious GPT: science communication in the age of artificial intelligence. JCOM, 22, Y02. https://doi.org/10.22323/2.22020402
Schäfer, M. S., Kristiansen, S., & Bonfadelli, H. (Eds.). (2015). Wissenschaftskommunikation im Wandel. von Halem.
Schepman, A., & Rodway, P. (2023). The General Attitudes towards Artificial Intelligence Scale (GAAIS): confirmatory validation and associations with personality, corporate distrust, and general trust. International Journal of Human-Computer Interaction, 39, 2724–2741. https://doi.org/10.1080/10447318.2022.2085400
Schlude, A., Schwind, M., Mendel, U., Stürz, R. A., Harles, D., & Fischer, M. (2023). Verbreitung und Akzeptanz generativer KI in Deutschland und an deutschen Arbeitsplätzen. bidt. https://www.bidt.digital/publikation/verbreitung-und-akzeptanz-generativer-ki-in-deutschland-und-an-deutschen-arbeitsplaetzen
Schweizer, K., & DiStefano, C. (Eds.). (2016). Principles and methods of test construction: standards and recent advances. Hogrefe.
Sindermann, C., Elhai, J. D., & Montag, C. (2020). Predicting tendencies towards the disordered use of Facebook’s social media platforms: on the role of personality, impulsivity, and social anxiety. Psychiatry Research, 285, 112793. https://doi.org/10.1016/j.psychres.2020.112793
Solomovich, L., & Abraham, V. (2024). Exploring the influence of ChatGPT on tourism behavior using the technology acceptance model. Tourism Review. https://doi.org/10.1108/tr-10-2023-0697
Tatalovic, M. (2018). AI writing bots are about to revolutionise science journalism: we must shape how this is done. JCOM, 17, E. https://doi.org/10.22323/2.17010501
Ueno, T., Sawa, Y., Kim, Y., Urakami, J., Oura, H., & Seaborn, K. (2022). Trust in human-AI interaction: scoping out models, measures and methods. CHI Conference on Human Factors in Computing Systems Extended Abstracts, 1–7. https://doi.org/10.1145/3491101.3519772
Vaghefi, S. A., Stammbach, D., Muccione, V., Bingler, J., Ni, J., Kraus, M., Allen, S., Colesanti-Senni, C., Wekhof, T., Schimanski, T., Gostlow, G., Yu, T., Wang, Q., Webersinke, N., Huggel, C., & Leippold, M. (2023). ChatClimate: grounding conversational AI in climate science. Communications Earth & Environment, 4, 480. https://doi.org/10.1038/s43247-023-01084-x
van Dijck, J., & Alinejad, D. (2020). Social media and trust in scientific expertise: debating the COVID-19 pandemic in the Netherlands. Social Media + Society, 6. https://doi.org/10.1177/2056305120981057
Većkalov, B., Zarzeczna, N., McPhetres, J., van Harreveld, F., & Rutjens, B. T. (2024). Psychological distance to science as a predictor of science skepticism across domains. Personality and Social Psychology Bulletin, 50, 18–37. https://doi.org/10.1177/01461672221118184
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39, 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x
Vogels, E. A. (2023). A majority of Americans have heard of ChatGPT, but few have tried it themselves. Pew Research Center. https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves
Vogler, D., Eisenegger, M., Fürst, S., Udris, L., Ryffel, Q., Rivière, M., & Schäfer, M. S. (2023). Künstliche Intelligenz in der journalistischen Nachrichtenproduktion: Wahrnehmung und Akzeptanz in der Schweizer Bevölkerung. In Jahrbuch Qualität der Medien (pp. 33–45). https://doi.org/10.5167/uzh-235608
von Garrel, J., & Mayer, J. (2023). Artificial intelligence in studies—use of ChatGPT and AI-based tools among students in Germany. Humanities and Social Sciences Communications, 10, 1–9. https://doi.org/10.1057/s41599-023-02304-7
Weingart, P., & Guenther, L. (2016). Science communication and the issue of trust. JCOM, 15, C01. https://doi.org/10.22323/2.15050301
Wintterlin, F., Hendriks, F., Mede, N. G., Bromme, R., Metag, J., & Schäfer, M. S. (2022). Predicting public trust in science: the role of basic orientations toward science, perceived trustworthiness of scientists and experiences with science. Frontiers in Communication, 6. https://doi.org/10.3389/fcomm.2021.822757
Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3312874
About the authors
Mike S. Schäfer: Professor of Science Communication at IKMZ, University of Zurich; Head of Department of IKMZ — Dept. of Communication and Media Research; Director of Center of Higher Education and Science Studies (CHESS).
E-mail: m.schaefer@ikmz.uzh.ch X: @mss7676
Bastian Kremer: Project Leader of the German Science Barometer at Wissenschaft im Dialog, Berlin.
E-mail: bastian.kremer@w-i-d.de
Niels G. Mede: Senior Research and Teaching Associate at the Department of Communication and Media Research, University of Zurich.
E-mail: n.mede@ikmz.uzh.ch X: @nielsmede
Liliann Fischer: Leader of the Insights Programme at Wissenschaft im Dialog, Berlin.
E-mail: liliann-fischer@uni-passau.de X: @Liliann_F