Seeking scientific information plays a vital role in education, curiosity-driven exploration, and practical decision-making, empowering individuals to navigate an increasingly complex world [Brossard, 2013; Segev & Baram-Tsabari, 2012]. Digital media have become central intermediaries in this process, shaping how knowledge is exchanged and negotiated within society [Neuberger et al., 2023]. These intermediary functions have increasingly been augmented by algorithms, introducing new levels of automation into epistemic practices [Bartsch et al., 2025]. In this context, artificial intelligence (AI), specifically generative AI (GenAI), is reshaping the mediation between science and society — not only by disseminating content but also by autonomously generating and contextualizing it [Schäfer, 2023].

The release of ChatGPT in 2022 marks a transformative moment in this respect. Developed by OpenAI, ChatGPT is a large language model (LLM) designed to generate coherent and contextually relevant text in response to user prompts. Given its central role in the GenAI landscape [Fletcher & Nielsen, 2024], focusing on ChatGPT provides a suitable lens through which to examine broader trends in the use of GenAI for science communication. Accordingly, this study examines the use of ChatGPT for science-related information retrieval, building on prior research conducted in 2023 [Greussing et al., 2025].

Serving as a direct follow-up to our 2023 investigation, this study has two primary objectives. First, it provides a cross-country comparison of the reported use of ChatGPT for science-related information searches, highlighting variations in the proportion of people who engage with ChatGPT for science-related queries, as well as in their knowledge and trust levels, across seven technologically advanced countries [World Intellectual Property Organization, 2024]: Australia, Denmark, Germany, Israel, South Korea, Taiwan, and the United States (U.S.A.). Cross-country comparison is vital as it acknowledges GenAI as a global phenomenon, with its adoption shaped by diverse regional, cultural, and economic contexts. The seven countries under investigation make it possible to explore patterns in AI-driven science communication practices across varied yet comparable settings. Each represents a society with high internet penetration and engagement with digital tools [World Intellectual Property Organization, 2024], while diverging in aspects such as cultural priorities, including attitudes toward AI [Neudert et al., 2020], and the structure of their science communication ecosystems [Gascoigne et al., 2020].

Second, this study examines changes over time in the use of ChatGPT for science-related inquiries, drawing on data collected in 2023 and 2024. This period captures a critical phase in the evolution of GenAI, from its initial adoption to its growing integration into everyday platforms and applications, enhancing user familiarity and embedding the technology into routine practices [Liu & Wang, 2024]. The years 2023 and 2024 were further characterized by intensified public debate on GenAI, drawing attention to both its potential benefits and associated risks [Dijkstra et al., 2024; Roe & Perkins, 2023]. Technologically, ChatGPT advanced significantly during this period, improving in accuracy, contextual comprehension, and its ability to address complex, domain-specific queries [Reeves & Sylvia, 2024]. Simultaneously, competing applications like Microsoft Copilot, Google Gemini, and Claude gained prominence, reshaping the global GenAI landscape. In 2023, different GenAI applications exhibited distinctive features that influenced their suitability for science communication [Klein-Avraham et al., 2024]. By 2024, innovations such as ChatGPT’s web-browsing functionality blurred these distinctions, enabling different applications to perform overlapping tasks and offer increasingly similar user experiences. Despite this convergence, focusing on a specific GenAI application remains a valuable approach for studying science-related information retrieval [e.g., Volk et al., 2024], and it allows for meaningful comparisons with existing research.

1 The role of GenAI in science communication

Ever since the release of ChatGPT in late 2022, the transformative potential of GenAI for science communication has gained rapid recognition [e.g., Alvarez et al., 2024; Biyela et al., 2024]. GenAI models can process large volumes of information and produce coherent, context-specific outputs in natural language, which can enhance the accessibility of scientific knowledge [Schäfer, 2023]. However, they are not inherently programmed to prioritize accuracy or reliability, raising concerns about the oversimplification of nuanced scientific issues and the dissemination of misinformation [Schäfer et al., 2024; Shin et al., 2024]. These concerns are underlined by studies showing that models such as ChatGPT are optimized to present information in a clear and relatable style, making them particularly appealing to users [Doshi & Hauser, 2024]. Moreover, there is evidence that information delivered through text-based dialogic interfaces is perceived as more credible than identical content in static text formats [Anderl et al., 2024]. At the same time, the perceived interactivity of chatbots has been shown to reduce health-related misperceptions [Gong & Su, 2024]. This dual capacity of GenAI to effectively disseminate both accurate and inaccurate information, while making it difficult for users to distinguish between organic and synthetic content, presents significant challenges, as it can heighten epistemic uncertainty surrounding scientific topics and discourage users from critically evaluating AI-generated material [Spitale et al., 2023].

Biases embedded in the training data of GenAI models further complicate their use. For instance, a study on climate justice has highlighted how these biases manifest in outputs, potentially influencing public understanding [Nguyen et al., 2024]. At the same time, GenAI tools have demonstrated the potential to benefit disadvantaged groups, with individuals of lower educational attainment gaining the most knowledge from AI-generated summaries of historical events [Karell et al., 2024]. These findings underscore the nuanced and context-dependent role of GenAI in science communication.

A particularly notable feature of GenAI is its ability to provide personalized learning experiences, enabling users to explore scientific topics in ways that align with their interests and prior knowledge. This supports the vision of “dialogue at scale”, where personalized interactions with AI systems can broaden public engagement with science [Schäfer, 2023]. However, given the reported influence of AI-generated content on people’s opinions and beliefs [e.g., Wang & Peng, 2023], the personalization built into GenAI models also carries risks. Studies suggest that tailored responses from GPT models may reinforce users’ pre-existing views, prompting critical questions about their impact on public understanding and engagement with science [Chen et al., 2024; Volk et al., 2024].

2 Adoption and use of GenAI

Due to their ability to provide seemingly instant access to vast amounts of knowledge, GenAI applications have been quickly adopted for information retrieval [Liu & Wang, 2024], including for science-related issues [Choudhury & Shamszare, 2023]. According to a 2024 report by the Reuters Institute, 11% of respondents across six countries — Denmark, the U.S.A., the United Kingdom (U.K.), Argentina, France, and Japan — reported using GenAI for factual inquiries on at least one occasion [Fletcher & Nielsen, 2024].

The Technology Acceptance Model (TAM) [Davis, 1989], which builds upon the Theory of Reasoned Action and the Theory of Planned Behavior, offers a framework for understanding the factors that drive technology adoption. Central to TAM are two key determinants, perceived usefulness and perceived ease of use, which shape users’ attitudes toward a technology. These attitudes, in turn, influence behavioral intentions, which ultimately predict actual usage. However, technology adoption is not a uniform process; it is shaped by a complex interplay of individual characteristics and contextual factors. Over time, the TAM has thus evolved to incorporate additional determinants, including cultural dimensions. A meta-analysis on e-learning, for example, indicates that in collectivistic cultures, subjective norms and self-efficacy have a stronger impact on users’ behavioral intentions, whereas in individualistic cultures, perceived usefulness plays a more significant role [Zhao et al., 2020].

Moreover, studies in the U.S.A. and Germany suggest that individuals who feel confident in their ability to interact effectively with AI are more likely to adopt AI tools for health information-seeking. Additionally, attitudes toward using AI for health information-seeking, combined with subjective AI-related norms, significantly influence whether people incorporate AI into their behaviors [Liao et al., 2024; Link & Beckmann, 2024].

The present study is grounded in the TAM by focusing on perceived usefulness and perceived ease of use. Perceived usefulness is defined as the extent to which individuals believe that using a particular technology will enhance their task performance [Davis, 1989]. In the domain of science-related information retrieval, the perceived usefulness of GenAI applications may be seen as ambiguous. Empirical research has shown that individuals seek scientific information online primarily after exposure to media coverage [Baram-Tsabari & Segev, 2015; Myrick et al., 2015] or in response to educational prompts on specific topics, such as genetics [Segev & Sharon, 2017]. While GenAI can make complex scientific topics more accessible by summarizing information or explaining concepts in simple terms [Skjuve et al., 2023; Wissenschaft im Dialog, 2023], its capacity to provide real-time, up-to-date information on the latest scientific developments is limited. The free version of ChatGPT available in 2024 relies on a knowledge base that is only as current as the training data it was built on. Given its limitations in delivering accurate and timely updates on current events [Fletcher & Nielsen, 2024], coupled with concerns regarding the accuracy and reliability of AI-generated content [Schäfer et al., 2024], its usefulness may therefore be perceived as limited. However, as GenAI becomes more integrated into daily routines and gains broader societal adoption [Liu & Wang, 2024], more people may turn to ChatGPT for science-related information searches compared to 2023. This raises questions about trends in adoption rates between 2023 and 2024, leading to the first research question (RQ):

RQ1a: What proportion of people in the countries being investigated use ChatGPT to search for science-related information in 2024 compared with 2023?

The TAM identifies ease of use, defined as the extent to which an individual believes that using a particular technology will require minimal effort, as a second primary factor influencing technology adoption [Davis, 1989]. This is particularly relevant given the distinct functionality of ChatGPT compared to traditional information intermediaries such as search engines. While search engines curate and rank web links based on relevance, ChatGPT provides synthesized, conversational responses that simulate human interaction [Guzman & Lewis, 2019; Klein-Avraham et al., 2024]. The growing familiarity with GenAI underscores the need for continued exploration of how users relate to GenAI applications compared to established information intermediaries, which also allows broader shifts in the information landscape to be identified [Neuberger et al., 2023]. Accordingly, this study examines users’ satisfaction with science-related information retrieved via ChatGPT compared to Google Search, as well as their confidence in using each system. We ask:

RQ1b: How do users perceive science-related information retrieval with ChatGPT and Google Search in 2024 compared with 2023?

Beyond science-related information searches, GenAI applications serve a broad array of purposes, such as assistance in writing or brainstorming ideas [Fletcher & Nielsen, 2024]. Comparing these diverse applications with science-related searches can offer deeper insights into the perceived usefulness of GenAI technology by contextualizing how it addresses various user needs. By examining these broader applications alongside its role in science-focused searches, we can identify patterns in user engagement and better interpret the adoption rates observed. We thus pose a follow-up question:

RQ1c: What other purposes are regular ChatGPT users employing GenAI for in 2024?

3 Characteristics of GenAI users seeking science-related information

Researchers and science communicators have suggested that GenAI’s simplified access to information could broaden the accessibility of science-related content to wider audiences [Biyela et al., 2024; Schäfer, 2023]. However, to date, studies show that certain demographic groups are more inclined to use GenAI: for instance, applications like ChatGPT are more popular among younger users, men, and individuals with higher levels of formal education [Fletcher & Nielsen, 2024; Liu & Wang, 2024]. This trend extends to the use of GenAI for science-related information searches [Greussing et al., 2025], aligning with general patterns of technology adoption [e.g., Casino et al., 2019, for blockchain technology] and persistent societal disparities reflecting the digital divide [van Dijk, 2020]. Thus, to track potential shifts in the user base of ChatGPT, in this study — as in our 2023 investigation [Greussing et al., 2025] — we focus on three subpopulations: (1) regular users of ChatGPT who use the technology for science-related information search, (2) regular users of ChatGPT who use the technology for other purposes, and (3) non-users. Beyond the size of these three subpopulations, we are interested in their demographics, their level of knowledge about the functioning of AI and the quality of the output generated by GenAI, and their level of trust in GenAI.

4 Knowledge about AI

Research on public understanding of AI shows that, as with other scientific issues that are complex in nature [Krauss & Colombo, 2020], knowledge about AI remains relatively limited among the public [Lermann Henestrosa & Kimmerle, 2024; Selwyn & Gallo Cordoba, 2022]. Prior to 2022, research on algorithm literacy emphasized the role of (news) media as a primary, accessible source of information [Dogruel et al., 2022; Zarouali et al., 2021]. As GenAI technologies gained prominence in 2022, the volume of news articles, debates, and commentary on these models surged [Dijkstra et al., 2024]. Moreover, a U.K.-based study found that headlines about GenAI often centered on explaining the technology itself, alongside reporting potential dangers and disruptions that may be caused by AI technologies [Roe & Perkins, 2023]. The intensified focus on AI in public discourse may have contributed to greater public awareness and a more nuanced understanding of these systems. Consequently, people today may have a better grasp of AI than they did in 2023.

Of particular interest within science communication is whether publics have developed an understanding of the epistemic limitations of GenAI, including the inherent boundaries in its ability to generate reliable content [Ji et al., 2023], and whether this type of knowledge is associated with the use — or non-use — of GenAI for science-related information searches. To assess the AI knowledge of individuals who use ChatGPT for science-related information compared to other subpopulations, we pose the following research question:

RQ2: What is the level of factual knowledge about (Gen)AI1 among ChatGPT users who engage the model for science-related information searches compared to non-users and users who utilize ChatGPT for other purposes in 2024 compared with 2023?

1Throughout this article, we use the term (Gen)AI, as distinct from GenAI, to indicate that we are referring to both AI in general and Generative AI specifically, as our measurement of factual knowledge covers both domains.

5 Trust in GenAI

Trust can be understood as a relational variable between a trustor (subject of trust) and a trustee (object of trust), where the trustor relies on the trustee to perform a task that holds value for them [Lee & See, 2004]. By its nature, trust involves uncertainty and risk, as there is always the possibility that the trustee may fail or act contrary to the trustor’s interests [Mayer et al., 1995]. In this sense, trust operates as a mechanism for reducing complexity, enabling individuals to navigate uncertain environments with greater confidence.

The use of GenAI for science-related purposes can be considered a situation of uncertainty and risk, as errors and omissions in outputs generated by these models can significantly impact public understanding and decision-making. GenAI is inherently probabilistic, relying on patterns in training data to generate responses, while lacking actual understanding of these responses. As a result, it may confidently present outputs that are incomplete, biased, or entirely incorrect. Additionally, commercial GenAI applications are often opaque, functioning as “black-box” technologies that obscure how conclusions are reached [van Dis et al., 2023], thus increasing uncertainty and making it even harder for users to evaluate the credibility of the information they provide [Ou et al., 2024]. In other words, when interacting with a GenAI application like ChatGPT, users are at an epistemic disadvantage [Walmsley, 2021], which makes trust in GenAI a critical factor [Jonas et al., 2024].

Trust has been shown to play a crucial role in technology acceptance more generally [Kelly et al., 2023], and with regard to AI and algorithms more specifically [Choung et al., 2023; Rheu et al., 2021]. Yet, a large-scale study involving participants from the U.S.A., Australia, Canada, Germany, and the U.K. found that people exhibit rather low levels of trust in AI [Gillespie et al., 2021]. These findings have been further supported by research in Germany, particularly concerning GenAI used to communicate science-related information [Schäfer et al., 2024]. Experimental studies comparing science content attributed to AI versus human authorship, however, reveal no significant difference in perceived trustworthiness or credibility of the content itself [Lermann Henestrosa et al., 2023].

Trust is typically a stable construct, and the patterns of trust across different user groups may remain consistent over time. However, it can be shaped by mediated experiences. Between 2023 and 2024, an intensified public discourse, increased familiarity with GenAI technology, and advancements in its development may have influenced the overall level of trust in GenAI. This raises the question:

RQ3: What is the level of trust in GenAI among ChatGPT users who engage the model for science-related information searches compared to non-users and users who utilize ChatGPT for other purposes in 2024 compared with 2023?

6 Methods

To address our research questions, we conducted online surveys across seven countries: Australia, Denmark, Germany, Israel, South Korea, Taiwan, and the U.S.A. Data were collected at two distinct time points: between July and August 2023 (Ntotal = 4,320) and between August and September 2024 (Ntotal = 4,449). The number of respondents per country in 2023 was as follows: nAUS = 552, nDEN = 504, nGER = 566, nISR = 500, nKOR = 642, nTWN = 504, nU.S.A. = 1,052. For 2024, it was as follows: nAUS = 699, nDEN = 500, nGER = 562, nISR = 500, nKOR = 500, nTWN = 512, nU.S.A. = 1,176.

Data collection was carried out using online access panels managed by survey companies located in the seven countries. The survey companies oversaw data collection, adhering to quotas designed to reflect the respective national adult online populations in terms of age, gender, and education.1 While the survey companies provided the final datasets, all subsequent data analysis was performed by the authors of this paper. A demographic breakdown of respondents by country and year is provided in the supplementary material C (Tables S6a/b). The questionnaire remained consistent across both survey periods, ensuring comparability between the 2023 and 2024 data. It was translated into the primary language of each participating country. The English version of the questionnaire is available in the supplementary material B.

The countries selected for this study are characterized by high levels of affluence and advanced technological infrastructure [World Intellectual Property Organization, 2024]. However, they display differences in public attitudes toward AI [Neudert et al., 2020] and in the structure and organization of their science communication landscapes [Gascoigne et al., 2020] — factors that may shape individual perceptions and usage patterns of GenAI for science-related information-seeking, thus making them relevant for a cross-country comparison. It should be noted that country selection was not systematic but was based on availability through professional networks, although care was taken to gather data from different regions of the world. It also needs to be acknowledged that online access panels are generally composed of people who voluntarily opt in, typically leading to an overrepresentation of individuals who have a less-traditional media diet, are politically active [Pforr & Dannwolf, 2017], and can be assumed to be somewhat interested in the topic of the survey. Consequently, the findings of this study need to be interpreted with caution.

7 Measurements

Using ChatGPT for searching science-related information. The variable assessing science-related information search with ChatGPT follows a format similar to Fletcher and Nielsen [2024]. The questionnaire first explored participants’ general experiences with five AI applications — including ChatGPT — and Google Search. Respondents were then presented with a list of the applications they had reported having experience with and were asked about their use of these applications when searching for science-related information. Additional variables included respondents’ confidence in finding what they needed and their contentment with the science-related content they found, each measured with a single item on a 5-point scale. To ensure shared understanding across participants, we provided a definition of “science-related information search.”

Purpose of GenAI use. In the 2024 survey, a multiple-choice question was introduced for individuals who identified as regular users of GenAI applications to explore the purposes for which they used them. The question was framed to encompass all GenAI applications in use, with examples such as ChatGPT and Google Gemini provided, rather than focusing on one specific application. Drawing on common themes highlighted in the literature [Fletcher & Nielsen, 2024], the response options included purposes such as writing assistance, brainstorming, and engaging in dialogue.

Knowledge about (Gen)AI. To assess respondents’ factual understanding of AI and GenAI, we developed nine statements in collaboration with AI experts and based on prior research [Long & Magerko, 2020; see the supplementary material C, Tables S8a/b]. Each respondent evaluated the statements as “true” or “false,” with the option to select “I don’t know.” Six of the nine statements focused on the technical functioning of AI, resulting in a sum score from 0 to 6 (M2023 = 3.5, SD2023 = 1.5; M2024 = 3.7, SD2024 = 1.6), while the remaining three addressed the quality of information provided by GenAI, resulting in a sum score from 0 to 3 (M2023 = 1.5, SD2023 = 1.1; M2024 = 1.6, SD2024 = 1.2). These two types of knowledge — technical understanding and awareness of epistemic limitations — were weakly correlated in both years studied (Pearson’s r < .31), warranting separate treatment as distinct dimensions. Following standard practices in the field [Calice et al., 2022], participants were provided with definitions of “AI” and “GenAI.” One of the statements about AI functionality, accurate in 2023, was updated by 2024 to reflect current developments (see the supplementary material B).

Trust in GenAI. To measure trust in GenAI, we utilized a 10-item scale inspired by previous research [Choung et al., 2023; Weidmüller, 2022]. This scale covered elements of both human trust (competence/expertise, integrity, and benevolence) and machine trust (functionality, reliability, and helpfulness; see the supplementary material C, Tables S9a/b). In the specific context of science communication, dialogue and transparency were included as additional dimensions, aligned with prior frameworks for evaluating trust in scientific communicators [Reif et al., 2024]. Items were rated on 5-point scales (1 = “strongly disagree,” 5 = “strongly agree”), forming a reliable scale in both 2023 (Cronbach’s α = .92, M = 3.4, SD = 0.8) and 2024 (Cronbach’s α = .93, M = 3.4, SD = 0.8). Based on exploratory factor analysis, an aggregated trust score was calculated for subsequent analyses.
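To make the scale construction above concrete, the following Python sketch shows how the two knowledge sum scores and the aggregated trust score can be derived, including Cronbach’s α for the trust scale. This is an illustrative reconstruction, not the authors’ analysis code: the column names (k1–k9, trust1–trust10) and the synthetic data are hypothetical.

```python
# Illustrative sketch of the scoring described above; the column names and
# the randomly generated responses are hypothetical stand-ins, not survey data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame(
    {f"k{i}": rng.integers(0, 2, n) for i in range(1, 10)}        # 1 = correct
    | {f"trust{i}": rng.integers(1, 6, n) for i in range(1, 11)}  # 5-point items
)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Six statements on AI functioning (0-6) and three on output quality (0-3)
df["know_tech"] = df[[f"k{i}" for i in range(1, 7)]].sum(axis=1)
df["know_epistemic"] = df[[f"k{i}" for i in range(7, 10)]].sum(axis=1)

trust_items = df[[f"trust{i}" for i in range(1, 11)]]
print(f"alpha = {cronbach_alpha(trust_items):.2f}")
df["trust"] = trust_items.mean(axis=1)  # aggregated trust score used in analyses
```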

8 Results

RQ1a: What proportion of people in the countries being investigated use ChatGPT to search for science-related information in 2024 compared with 2023?

Echoing previous studies [Fletcher & Nielsen, 2024], ChatGPT remains the most widely used GenAI model in our study, with n2024 = 1,358 regular users (30.5% of the total 2024 sample). As illustrated in the supplementary material A (Figure S1), the reported proportion of regular ChatGPT users across all seven countries increased strongly from 2023 to 2024, with user numbers at least doubling in each case. Turning to the use of ChatGPT for science-related information searches (see the supplementary material A, Table S1), in 2024, this subset includes 848 users, representing 19% of the total sample — up from 9% (n = 372) in 2023. This increase, however, primarily reflects growth in the overall number of regular ChatGPT users.

A closer examination of the breakdown between users searching for science-related information and those with other interests reveals more nuanced trends (see Table S1). Comparing the survey data obtained in 2023 and 2024, in Taiwan, the U.S.A., and Australia, the share of regular users who report relying on ChatGPT for science-related searches decreased, resulting in 78% of users (95% CI [0.72, 0.83]) in Taiwan, 56% (95% CI [0.51, 0.61]) in the U.S.A., and 49% (95% CI [0.41, 0.56]) in Australia. In contrast, reported usage rates remained steady in South Korea (65%, 95% CI [0.58, 0.72]) and Germany (54%, 95% CI [0.45, 0.62]), while Israel and Denmark stand out as the only countries where the reported proportion of regular ChatGPT users seeking science-related information increased — resulting in 69% (95% CI [0.63, 0.75]) of ChatGPT users in Israel and 63% (95% CI [0.54, 0.71]) in Denmark. Overall, in 2024, respondents from Taiwan, Israel, South Korea, and Denmark appear to be the most inclined to use ChatGPT for science-related information. This reflects a shift from 2023, when the U.S.A. was part of this leading group, while Denmark was not.
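As a transparency note on the intervals reported above, the paper does not state which method was used to compute the 95% confidence intervals. The sketch below assumes a Wilson score interval, a common choice for proportions; the counts are hypothetical, chosen so that the output roughly reproduces the Taiwanese figure.

```python
# Minimal sketch of a 95% Wilson confidence interval for a usage share,
# e.g. "78% (95% CI [0.72, 0.83])". The counts are hypothetical; the CI
# method used in the paper is assumed, not stated.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

low, high = wilson_ci(156, 200)  # hypothetical: 156 of 200 regular users
print(f"{156 / 200:.0%}, 95% CI [{low:.2f}, {high:.2f}]")  # 78%, CI [0.72, 0.83]
```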

RQ1b: How do users perceive science-related information retrieval with ChatGPT and Google Search in 2024 compared with 2023?

In 2024, users across the seven countries studied report contentment with the scientific information they obtained through ChatGPT (M = 4.0, SD = 0.9) and moderate confidence in their ability to find needed information (M = 3.6, SD = 1.0; see the supplementary material A, Table S2). Comparing ChatGPT to Google Search, at the country level in 2024, respondents in Denmark and Israel rate ChatGPT significantly lower than Google Search in both satisfaction and confidence (p < .01). In South Korea and Germany, such a significant difference appears only in user confidence for finding information (p < .01). Compared to 2023, our findings suggest evolving user perceptions by country. In 2023, Google Search held higher ratings than ChatGPT in Germany and Israel, though only in terms of user confidence; no other countries showed significant differences between the two applications at that time.
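The ChatGPT-versus-Google-Search contrasts above compare two ratings given by the same respondents, so a within-subject test is appropriate. The paper does not name the test used; the sketch below assumes a paired t-test, with randomly generated stand-in ratings.

```python
# Hedged sketch of a within-respondent comparison of satisfaction ratings
# for ChatGPT vs. Google Search (both single items on 5-point scales).
# The ratings are random stand-ins; the paired t-test is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
satisfaction_gpt = rng.integers(1, 6, size=300)     # hypothetical 1-5 ratings
satisfaction_google = rng.integers(1, 6, size=300)  # same respondents

t, p = stats.ttest_rel(satisfaction_gpt, satisfaction_google)
print(f"t = {t:.2f}, p = {p:.3f}")
```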

RQ1c: What other purposes are regular ChatGPT users employing GenAI for in 2024?

This analysis examines all regular ChatGPT users in our sample (n = 1,358), with their responses reflecting use of generative AI applications in general rather than ChatGPT specifically. Participants could select multiple use cases. As shown in the supplementary material A (Table S3), regular ChatGPT users primarily utilize GenAI for seeking knowledge and facts, for language and writing assistance, and for accessing science-related information. Conversely, using these applications as a conversational partner is the least common purpose.

RQ2: What is the level of factual knowledge about (Gen)AI among ChatGPT users who engage the model for science-related information searches compared to non-users and users who utilize ChatGPT for other purposes in 2024 compared with 2023?

Among ChatGPT users seeking science information, the understanding of AI technology varies considerably across countries (see the supplementary material A, Tables S4a/b). On a six-point sum score, average knowledge ranges from M = 4.0 (SD = 1.6) in Germany to M = 4.6 (SD = 1.1) in Taiwan, and M = 4.5 (SD = 1.1 and SD = 1.0, respectively) in Israel and South Korea. Compared to 2023 and across all countries, knowledge levels in this group have risen modestly (t(783.0) = -4.670, p < .001), with an overall mean increase of 0.3. However, this trend is not observed in Denmark and Germany.

For science information-seekers, knowledge about the quality of information provided by GenAI — understood as the epistemic limitations inherent in GenAI technology — also varies by country. The three-point sum scores range from M = 1.2 (SD = 1.0) in Taiwan to M = 2.3 (SD = 1.0) in Denmark, and M = 2.0 (SD = 1.1) in South Korea. Only South Korea shows a significant increase in understanding of information quality (t(172) = -3.959, p < .001), with a mean difference of 0.7 between the two points of data collection.

When comparing the three (non-)user groups across all seven countries studied in 2024, ChatGPT users seeking science-related information demonstrate a stronger understanding of both AI functionality (F(2, 4446) = 135.83, p < .001) and GenAI’s epistemic limitations (F(2, 4446) = 17.99, p < .001) than non-users. At the country level, however, this difference in knowledge about epistemic limitations between science information seekers and non-users is not observed in Germany, Australia, or the U.S.A. Notably, science information-seekers demonstrated higher knowledge levels than non-users in more countries in 2024 than in 2023. Despite these gains, responses of “I don’t know” remain prevalent in 2024 (see the supplementary material C, Tables S8a/b).
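For readers wishing to retrace the comparisons reported for RQ2, the sketch below pairs a Welch t-test for the 2023-versus-2024 contrast (the fractional degrees of freedom reported, t(783.0), are consistent with Welch’s correction, which is assumed here) with a one-way ANOVA across the three (non-)user groups. Group sizes follow the reported sample breakdown, but the scores themselves are random stand-ins.

```python
# Hedged sketch of the RQ2 group comparisons; scores are random stand-ins,
# not survey data. Sizes follow the reported breakdown: 848 science seekers
# and 510 other users among 1,358 regular ChatGPT users in 2024, 3,091
# non-users, and 372 science seekers in 2023.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
know_2023 = rng.integers(0, 7, size=372)  # 0-6 technical-knowledge scores
know_2024 = rng.integers(0, 7, size=848)

# Welch's t-test (unequal variances, fractional df) for the year-on-year change
t, p = stats.ttest_ind(know_2024, know_2023, equal_var=False)

# One-way ANOVA across science seekers, other users, and non-users
seekers, others, non_users = (rng.integers(0, 7, size=n) for n in (848, 510, 3091))
f, p_anova = stats.f_oneway(seekers, others, non_users)
print(f"Welch t = {t:.2f}, p = {p:.3f}; F(2, 4446) = {f:.2f}, p = {p_anova:.3f}")
```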

RQ3: What is the level of trust in GenAI among ChatGPT users who engage the model for science-related information searches compared to non-users and users who utilize ChatGPT for other purposes in 2024 compared with 2023?

The analysis of trust in GenAI among ChatGPT users seeking science-related information revealed notable differences across countries. In 2024, mean trust scores ranged from M = 3.2 (SD = 0.6) in Denmark to M = 4.0 (SD = 0.4) in Taiwan (see the supplementary material A, Table S5). Overall, science information seekers reported significantly higher trust in GenAI compared to both non-users (mean difference = 0.5) and users who employ ChatGPT for other purposes (mean difference = 0.3; F(2, 3422) = 137.75, p < .001). However, at the country level, significant differences between the two groups of ChatGPT users were observed only in the U.S.A., while in Israel, trust levels did not vary significantly across any of the three (non-)user groups.

These patterns were consistent with those observed in 2023: respondents from Denmark and Taiwan still reported the lowest and highest trust ratings, respectively, with science information seekers showing consistently higher trust than non-users (except for those in Israel). Additionally, as in 2023, a substantial portion of respondents in 2024 were uncertain about their trust in GenAI, often selecting “I don’t know”. At the country level, among ChatGPT users employing the model for science-related searches, a comparison of the two points of data collection revealed that trust increased significantly among respondents in Israel (mean difference = 0.3) but decreased significantly among respondents in South Korea (mean difference = -0.2).

9 Discussion

This study contributes to the ongoing discourse about the implications of GenAI for science communication by providing empirical insights on the use of ChatGPT for science-related content across Australia, Denmark, Germany, Israel, South Korea, Taiwan, and the U.S.A. Our descriptive analyses show that from 2023 to 2024, ChatGPT use increased significantly, reflecting growing public interest and adoption [Liu & Wang, 2024]. A substantial majority of these users report relying solely on the free versions of GenAI technology. This is particularly notable, as Volk et al. [2024] identified variations in the quality of science-related outputs across different GPT models.

Although trends in the proportion of ChatGPT users seeking science-related information have varied across the seven countries studied, overall, in 2024, a substantial 62% of ChatGPT users in our study report relying on the application for science-related inquiries. While GenAI is a global phenomenon, the adoption of ChatGPT for science-related information searches reveals distinct regional patterns. Although the cross-country differences identified in our study should be interpreted cautiously due to small sample sizes and associated sampling variability, the findings provide insights into how such applications are integrated into the science communication practices of affluent, technologically advanced nations. Respondents from Taiwan, Israel, and South Korea, for instance, reported considerable use of ChatGPT for science-related inquiries, consistent with trends observed in 2023. These countries share a strong cultural emphasis on science and technology as critical drivers of national prosperity, alongside favorable conditions for AI adoption [Getz et al., 2020; Johnson & Tyson, 2020]. However, their science communication infrastructures remain at a developmental stage [Baram-Tsabari et al., 2020; Huang et al., 2020; Kim, 2020]. This combination of technological receptivity and evolving science communication systems may create a promising context for applications like ChatGPT to act as intermediaries, bridging gaps in access to science-related information. Notably, respondents from Denmark also report increased use of ChatGPT for science-related information searches in 2024 — a trend that has also been observed in educational contexts [Thomsen, 2024]. At the same time, population surveys document that Denmark maintains a rather critical stance toward new technologies, reflecting a balanced approach to their adoption and use [European Commission, 2021].

For science communication research, these findings highlight a need for reflection on GenAI as a new information intermediary in order to understand the impact of this emerging technology on individual and collective epistemic practices. Following Neuberger et al. [2023], the rise of non-human intermediaries complicates traditional roles in the information ecosystem, blurring distinctions between producers, gatekeepers, and consumers. Unlike conventional media, GenAI not only disseminates knowledge but also actively contributes to its construction, potentially challenging established paradigms of epistemic authority [Bartsch et al., 2025]. Accordingly, future studies should move beyond specific applications like ChatGPT to examine the broader category of non-human actors [Guzman & Lewis, 2019], addressing issues such as hybrid authorship and the diminishing boundaries between human and machine sources.

Our descriptive analyses aimed to characterize ChatGPT users who seek science-related information, providing insights into this user group’s trust in and knowledge about (Gen)AI compared to non-users and those who use ChatGPT for other purposes. Across all seven countries studied, science-information seekers report higher levels of trust in GenAI compared to non-users. They also demonstrate better understanding of how (Gen)AI functions and — except in Germany, the U.S.A., and Australia — greater awareness of its epistemic limitations. Additionally, this user group tends to be younger than the population average. These trends, consistent with 2023 data, highlight the existence of a tech-savvy subpopulation [proficient and comfortable with technology; Spica, 2022] that is particularly inclined toward AI-driven solutions for science-related inquiries. Supporting this observation, half of these users report utilizing multiple GenAI technologies alongside ChatGPT, reflecting patterns characteristic of early technology adoption stages [Rogers, 2003]. Looking ahead, it remains to be seen how deeply GenAI technologies will integrate into science communication practices and whether they will effectively bridge the gap for less tech-savvy and less science-oriented audiences. A further analysis of our 2024 data reveals that, across all countries, individuals who use ChatGPT for science-related information searches report encountering news stories about science and technology more frequently than non-users (see the supplementary material C, Table S10). This suggests that ChatGPT currently serves as an additional source of information rather than attracting individuals who do not already access science-related content through traditional intermediaries. However, it is important to acknowledge the possibility that our sample may include individuals working in science-related industries who utilize ChatGPT as part of their professional activities. Also, this study focuses on ChatGPT, which, at the time of data collection, operated as a standalone GenAI application. As GenAI becomes increasingly embedded in traditional tools for science-related information searches, the dynamics of adoption and perception are likely to evolve further.

A closer examination of the adoption patterns, user experience, trust, and knowledge about GenAI among those using ChatGPT for science-related information searches, however, reveals no consistent trends across the selected countries or between the two points of data collection. This suggests that ChatGPT’s role as an intermediary for science-related information may be influenced by country-specific dynamics rather than overarching global trends. Future research should address this by exploring the contextual factors shaping adoption, trust, and knowledge about GenAI more in-depth. However, it is important to critically acknowledge the limitations of our study, particularly the small sample sizes among ChatGPT users.

Our study shows that in 2024, science-related inquiries indeed appear to be an important purpose for using GenAI applications like ChatGPT, alongside writing assistance and general information retrieval. In addition, user satisfaction with ChatGPT is rather high regarding the science-related information obtained, and users express moderate confidence in their ability to find what they need. However, particularly among respondents from Denmark and Israel, differences in user experience arise when ChatGPT is compared to Google Search. This discrepancy may point to inherent challenges GenAI models face, including concerns over the accuracy, verifiability, and reliability of information [Skjuve et al., 2023], which foster cautious optimism among users [Wissenschaft im Dialog, 2023].

These findings must be interpreted in light of several limitations. First, the country selection was non-systematic, and there are relatively small sample sizes for regular ChatGPT users in each country, precluding robust (multi-level) analyses. Additionally, the lack of panel data means comparisons across time points rely on independent samples, limiting insights into longitudinal trends. Furthermore, perspectives from less affluent and technologically developed regions are notably absent, leaving gaps in understanding how GenAI applications like ChatGPT interact with their science communication ecosystems. This omission is underscored by data showing that low-income economies account for less than 1% of ChatGPT traffic [Liu & Wang, 2024]. Here, future research is needed.

Despite its limitations, this study highlights sustained public interest in ChatGPT for science-related information in the seven countries under study, albeit primarily among a distinct demographic. Given the rapid advancements in this technology, scholarly efforts are needed to explore whether and how GenAI will emerge as a new type of information intermediary. Science communication research should accompany this process with both empirical and theoretical efforts, also given the inherent limitations associated with the current versions of GenAI models accessible to the public. This study represents one step in that direction.

Acknowledgments

This research was supported by the Niedersächsisches Vorab, Research Cooperation Lower Saxony – Israel, Lower Saxony Ministry for Science and Culture (MWK), Germany [Grant No. 11-76251-2345/2021 (ZN 3854)]; Aarhus University Research Foundation [Grant No. AUFF-E-2019-9-13]; Morgridge Institute for Research & Wisconsin Alumni Research Foundation (WARF); National Science and Technology Council, Taiwan [Grant No. MOST 113-2410-H-194-018-MY4 and Grant No. NSTC 112-2628-H-128-001-MY3]; and Yonsei University Signature Research Cluster Initiative [Grant No. 2024-22-0168].

References

Alvarez, A., Caliskan, A., Crockett, M. J., Ho, S. S., Messeri, L., & West, J. (2024). Science communication with generative AI. Nature Human Behaviour, 8, 625–627. https://doi.org/10.1038/s41562-024-01846-3

Anderl, C., Klein, S. H., Sarigül, B., Schneider, F. M., Han, J., Fiedler, P. L., & Utz, S. (2024). Conversational presentation mode increases credibility judgements during information search with ChatGPT. Scientific Reports, 14. https://doi.org/10.1038/s41598-024-67829-6

Baram-Tsabari, A., Orr, D., Baer, A., Garty, E., Golumbic, Y., Halevy, M., Krein, E., Levi, A., Leviatan, N., Lipman, N., Mir, R., & Nevo, E. (2020). Israel: developed science, developing science communication. In T. Gascoigne, B. Schiele, J. Leach, M. Riedlinger, B. V. Lewenstein, L. Massarani & P. Broks (Eds.), Communicating science: a global perspective (pp. 443–468). ANU Press. https://doi.org/10.22459/cs.2020.19

Baram-Tsabari, A., & Segev, E. (2015). The half-life of a “teachable moment”: the case of Nobel laureates. Public Understanding of Science, 24, 326–337. https://doi.org/10.1177/0963662513491369

Bartsch, A., Neuberger, C., Stark, B., Karnowski, V., Maurer, M., Pentzold, C., Quandt, T., Quiring, O., & Schemer, C. (2025). Epistemic authority in the digital public sphere. An integrative conceptual framework and research agenda. Communication Theory, 35, 37–50. https://doi.org/10.1093/ct/qtae020

Biyela, S., Dihal, K., Gero, K. I., Ippolito, D., Menczer, F., Schäfer, M. S., & Yokoyama, H. M. (2024). Generative AI and science communication in the physical sciences. Nature Reviews Physics, 6, 162–165. https://doi.org/10.1038/s42254-024-00691-7

Brossard, D. (2013). New media landscapes and the science information consumer. Proceedings of the National Academy of Sciences, 110, 14096–14101. https://doi.org/10.1073/pnas.1212744110

Calice, M. N., Bao, L., Newman, T., Scheufele, D. A., Brossard, D., & Xenos, M. A. (2022). U.S. public attitudes on artificial intelligence. https://doi.org/10.17605/OSF.IO/K82D6

Casino, F., Dasaklis, T. K., & Patsakis, C. (2019). A systematic literature review of blockchain-based applications: current status, classification and open issues. Telematics and Informatics, 36, 55–81. https://doi.org/10.1016/j.tele.2018.11.006

Chen, K., Shao, A., Burapacheep, J., & Li, Y. (2024). Conversational AI and equity through assessing GPT-3’s communication with diverse social groups on contentious topics. Scientific Reports, 14. https://doi.org/10.1038/s41598-024-51969-w

Choudhury, A., & Shamszare, H. (2023). Investigating the impact of user trust on the adoption and use of ChatGPT: survey analysis. Journal of Medical Internet Research, 25, e47184. https://doi.org/10.2196/47184

Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human-Computer Interaction, 39, 1727–1739. https://doi.org/10.1080/10447318.2022.2050543

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13, 319. https://doi.org/10.2307/249008

Dijkstra, A. M., de Jong, A., & Boscolo, M. (2024). Quality of science journalism in the age of Artificial Intelligence explored with a mixed methodology (A. Gesser-Edelsburg, Ed.). PLOS ONE, 19, e0303367. https://doi.org/10.1371/journal.pone.0303367

Dogruel, L., Facciorusso, D., & Stark, B. (2022). ‘I’m still the master of the machine.’ Internet users’ awareness of algorithmic decision-making and their perception of its effect on their autonomy. Information, Communication & Society, 25, 1311–1332. https://doi.org/10.1080/1369118x.2020.1863999

Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10, eadn5290. https://doi.org/10.1126/sciadv.adn5290

European Commission. (2021). European citizens’ knowledge and attitudes towards science and technology: special eurobarometer 516. http://data.europa.eu/88u/dataset/S2237_95_2_516_ENG

Fletcher, R., & Nielsen, R. K. (2024, May 28). What does the public in six countries think of generative AI in news? Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-4zb8-cg87

Gascoigne, T., Schiele, B., Leach, J., Riedlinger, M., Lewenstein, B. V., Massarani, L., & Broks, P. (Eds.). (2020). Communicating science: a global perspective. ANU Press. https://doi.org/10.22459/cs.2020

Getz, D., Buchnik, T., & Zatcovetsky, I. (2020). Science, technology and innovation indicators in Israel: an international comparison — 2019 — Part A — key figures. https://www.neaman.org.il/science-technology-and-innovation-indicators-in-israel-an-international-comparison-2019-part-a-key-figures/

Gillespie, N., Lockey, S., & Curtis, C. (2021). Trust in artificial intelligence: a five country study. https://doi.org/10.14264/e34bfa3

Gong, Z., & Su, L. Y.-F. (2024). Exploring the influence of interactive and empathetic chatbots on health misinformation correction and vaccination intentions. Science Communication, 47, 276–308. https://doi.org/10.1177/10755470241280986

Greussing, E., Guenther, L., Baram-Tsabari, A., Dabran-Zivan, S., Jonas, E., Klein-Avraham, I., Taddicken, M., Agergaard, T. E., Beets, B., Brossard, D., Chakraborty, A., Fage-Butler, A., Huang, C.-J., Kankaria, S., Lo, Y.-Y., Nielsen, K. H., Riedlinger, M., & Song, H. (2025). The perception and use of generative AI for science-related information search: insights from a cross-national study. Public Understanding of Science. https://doi.org/10.1177/09636625241308493

Guzman, A. L., & Lewis, S. C. (2019). Artificial intelligence and communication: a human-machine communication research agenda. New Media & Society, 22, 70–86. https://doi.org/10.1177/1461444819858691

Huang, C.-J., Li, Y.-Y., & Lo, Y.-Y. (2020). Taiwan: from nationalising science to democratising science. In T. Gascoigne, B. Schiele, J. Leach, M. Riedlinger, B. V. Lewenstein, L. Massarani & P. Broks (Eds.), Communicating science: a global perspective (pp. 849–864). ANU Press. https://doi.org/10.22459/cs.2020.35

Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55, 1–38. https://doi.org/10.1145/3571730

Johnson, C., & Tyson, A. (2020). People globally offer mixed views of the impact of artificial intelligence, job automation on society. Pew Research Center. https://www.pewresearch.org/short-reads/2020/12/15/people-globally-offer-mixed-views-of-the-impact-of-artificial-intelligence-job-automation-on-society/

Jonas, E., Greussing, E., & Taddicken, M. (2024). How do laypeople assess their trust in LLM-based chatbots when they seek science-related information? Results from a qualitative interview study using a hybrid trust approach [Presentation]. Annual Meeting of the Science Communication Division of the German Communication Association (DGPuK).

Karell, D., Shu, M., Okura, K., & Davidson, T. (2024). Artificial intelligence summaries of historical events improve knowledge compared to human-written summaries. https://doi.org/10.31235/osf.io/3gsqw

Kelly, S., Kaye, S.-A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925. https://doi.org/10.1016/j.tele.2022.101925

Kim, H.-S. (2020). South Korea: a different exemplar. In T. Gascoigne, B. Schiele, J. Leach, M. Riedlinger, B. V. Lewenstein, L. Massarani & P. Broks (Eds.), Communicating science (pp. 801–824). ANU Press. https://doi.org/10.2307/j.ctv1bvnctz.37

Klein-Avraham, I., Greussing, E., Taddicken, M., Dabran-Zivan, S., Jonas, E., & Baram-Tsabari, A. (2024). How to make sense of generative AI as a science communication researcher? A conceptual framework in the context of critical engagement with scientific information. JCOM, 23, A05. https://doi.org/10.22323/2.23060205

Krauss, A., & Colombo, M. (2020). Explaining public understanding of the concepts of climate change, nutrition, poverty and effective medical drugs: an international experimental survey (I. Novo-Cortí, Ed.). PLOS ONE, 15, e0234036. https://doi.org/10.1371/journal.pone.0234036

Lee, J. D., & See, K. A. (2004). Trust in automation: designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46, 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

Lermann Henestrosa, A., Greving, H., & Kimmerle, J. (2023). Automated journalism: the effects of AI authorship and evaluative information on the perception of a science journalism article. Computers in Human Behavior, 138, 107445. https://doi.org/10.1016/j.chb.2022.107445

Lermann Henestrosa, A., & Kimmerle, J. (2024). Understanding and perception of automated text generation among the public: two surveys with representative samples in Germany. Behavioral Sciences, 14, 353. https://doi.org/10.3390/bs14050353

Liao, W., Weisman, W., & Thakur, A. (2024). On the motivations to seek information from artificial intelligence agents versus humans: a risk information seeking and processing perspective. Science Communication, 46, 458–486. https://doi.org/10.1177/10755470241232993

Link, E., & Beckmann, S. (2024). AI at everyone’s fingertips? Identifying the predictors of health information seeking intentions using AI. Communication Research Reports, 42, 1–11. https://doi.org/10.1080/08824096.2024.2427609

Liu, Y., & Wang, H. (2024). Who on earth is using generative AI? Policy research working paper 10870. World Bank Group. https://hdl.handle.net/10986/42071

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In R. Bernhaupt (Ed.), Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). ACM. https://doi.org/10.1145/3313831.3376727

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20, 709. https://doi.org/10.2307/258792

Myrick, J. G., Willoughby, J. F., & Verghese, R. S. (2015). How and why young adults do and do not search for health information: cognitive and affective factors. Health Education Journal, 75, 208–219. https://doi.org/10.1177/0017896915571764

Neuberger, C., Bartsch, A., Fröhlich, R., Hanitzsch, T., Reinemann, C., & Schindler, J. (2023). The digital transformation of knowledge order: a model for the analysis of the epistemic crisis. Annals of the International Communication Association, 47, 180–201. https://doi.org/10.1080/23808985.2023.2169950

Neudert, L.-M., Knuutila, A., & Howard, P. N. (2020). Global attitudes towards AI, machine learning & automated decision making. Oxford Commission on AI & Good Governance. https://oxcaigg.oii.ox.ac.uk

Nguyen, H., Nguyen, V., Ludovise, S., & Santagata, R. (2024). Misrepresentation or inclusion: promises of generative artificial intelligence in climate change education. Learning, Media and Technology, 1–17. https://doi.org/10.1080/17439884.2024.2435834

Ou, M., Zheng, H., Zeng, Y., & Hansen, P. (2024). Trust it or not: understanding users’ motivations and strategies for assessing the credibility of AI-generated information. New Media & Society. https://doi.org/10.1177/14614448241293154

Pforr, K., & Dannwolf, T. (2017). What do we lose with online-only surveys? Estimating the bias in selected political variables due to online mode restriction. Statistics, Politics and Policy, 8, 105–120. https://doi.org/10.1515/spp-2016-0004

Reeves, C., & Sylvia, J. J. (2024). Generative AI in technical communication: a review of research from 2023 to 2024. Journal of Technical Writing and Communication, 54, 439–462. https://doi.org/10.1177/00472816241260043

Reif, A., Taddicken, M., Guenther, L., Schröder, J. T., & Weingart, P. (2024). The public trust in science scale: a multilevel and multidimensional approach. Science Communication. https://doi.org/10.1177/10755470241302758

Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2021). Systematic review: trust-building factors and implications for conversational agent design. International Journal of Human–Computer Interaction, 37, 81–96. https://doi.org/10.1080/10447318.2020.1807710

Roe, J., & Perkins, M. (2023). ‘What they’re not telling you about ChatGPT’: exploring the discourse of AI in UK news media headlines. Humanities and Social Sciences Communications, 10. https://doi.org/10.1057/s41599-023-02282-w

Rogers, E. M. (2003). Diffusion of innovations. Free Press.

Schäfer, M. S. (2023). The notorious GPT: science communication in the age of artificial intelligence. JCOM, 22, Y02. https://doi.org/10.22323/2.22020402

Schäfer, M. S., Kremer, B., Mede, N. G., & Fischer, L. (2024). Trust in science, trust in ChatGPT? How Germans think about generative AI as a source in science communication. JCOM, 23, A04. https://doi.org/10.22323/2.23090204

Segev, E., & Baram-Tsabari, A. (2012). Seeking science information online: data mining Google to better understand the roles of the media and the education system. Public Understanding of Science, 21, 813–829. https://doi.org/10.1177/0963662510387560

Segev, E., & Sharon, A. J. (2017). Temporal patterns of scientific information-seeking on Google and Wikipedia. Public Understanding of Science, 26, 969–985. https://doi.org/10.1177/0963662516648565

Selwyn, N., & Gallo Cordoba, B. (2022). Australian public understandings of artificial intelligence. AI & SOCIETY, 37, 1645–1662. https://doi.org/10.1007/s00146-021-01268-z

Shin, D., Koerber, A., & Lim, J. S. (2024). Impact of misinformation from generative AI on user information processing: how people understand misinformation from generative AI. New Media & Society. https://doi.org/10.1177/14614448241234040

Skjuve, M., Følstad, A., & Brandtzaeg, P. B. (2023). The user experience of ChatGPT: findings from a questionnaire study of early users. In M. Lee, C. Munteanu, M. Porcheron, J. Trippas & S. T. Völkel (Eds.), Proceedings of the 5th International Conference on Conversational User Interfaces (pp. 1–10). ACM. https://doi.org/10.1145/3571884.3597144

Spica, E. (2022). The influence of technological savviness and home internet access on student preferences for print or digital course materials. 34, 81–96. https://eric.ed.gov/?id=ej1363726

Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3 (dis)informs us better than humans. Science Advances, 9. https://doi.org/10.1126/sciadv.adh1850

Thomsen, C. S. (2024). AI has exploded among students: University of Copenhagen to now change the rules. Uniavisen. https://uniavisen.dk/en/ai-has-exploded-among-students-university-of-copenhagen-to-now-change-the-rules/

van Dijk, J. (2020). The digital divide. Polity Press.

van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature, 614, 224–226. https://doi.org/10.1038/d41586-023-00288-7

Volk, S. C., Schäfer, M. S., Lombardi, D., Mahl, D., & Yan, X. (2024). How generative artificial intelligence portrays science: interviewing ChatGPT from the perspective of different audience segments. Public Understanding of Science, 34, 132–153. https://doi.org/10.1177/09636625241268910

Walmsley, J. (2021). Artificial intelligence and the value of transparency. AI & SOCIETY, 36, 585–595. https://doi.org/10.1007/s00146-020-01066-z

Wang, J., & Peng, L. (2023). Striking an emotional chord: effects of emotional appeals and chatbot anthropomorphism on persuasive science communication. Science Communication, 45, 485–511. https://doi.org/10.1177/10755470231194583

Weidmüller, L. (2022). Human, hybrid, or machine? Exploring the trustworthiness of voice-based assistants. Human-Machine Communication, 4, 85–110. https://doi.org/10.30658/hmc.4.5

Wissenschaft im Dialog. (2023). Wissenschaftsbarometer 2023 [Science barometer 2023]. https://wissenschaft-im-dialog.de/documents/47/WiD-Wissenschaftsbarometer2023_Broschuere_web.pdf

World Intellectual Property Organization. (2024). Global innovation index 2024: unlocking the promise of social entrepreneurship. https://doi.org/10.34667/tind.50062

Zarouali, B., Boerman, S. C., & de Vreese, C. H. (2021). Is this recommended by an algorithm? The development and validation of the algorithmic media content awareness scale (AMCA-scale). Telematics and Informatics, 62, 101607. https://doi.org/10.1016/j.tele.2021.101607

Zhao, Y., Wang, N., Li, Y., Zhou, R., & Li, S. (2020). Do cultural differences affect users’ e-learning adoption? A meta-analysis. British Journal of Educational Technology, 52, 20–41. https://doi.org/10.1111/bjet.13002

Notes

1. In Denmark, the data were weighted accordingly. In Taiwan, the final sample includes an overrepresentation of younger individuals and those with higher educational attainment.

About the authors

Esther Greussing is a postdoctoral researcher at the Institute for Communication Science at Technische Universität Braunschweig in Germany. Her research focuses on the digitalization of science communication, with a particular emphasis on the role of non-human agents in the communication process. She explores the conditions and impact of the use of these agents within the context of the knowledge society.

E-mail: e.greussing@tu-braunschweig.de

Lars Guenther (Ph.D., 2015, Friedrich Schiller University Jena, Germany) is Professor of Communication Science at LMU Munich’s Department of Media and Communication in Germany, and Extraordinary Associate Professor at the Centre for Research on Evaluation, Science and Technology (CREST) at Stellenbosch University in South Africa. He is interested in public perceptions of (controversial) science, science and health journalism, trust in science, as well as the public communication of risks and scientific (un)certainty.

E-mail: lars.guenther@ifkw.lmu.de

Ayelet Baram-Tsabari is a professor of science education and communication at the Faculty of Education in Science and Technology at the Technion — Israel Institute of Technology. Her research program focuses on the relevance of science education to public engagement with science and on training scientists for effective science communication.

E-mail: ayelet@technion.ac.il

Shakked Dabran-Zivan is a Ph.D. student at the Faculty of Education in Science and Technology at the Technion — Israel Institute of Technology.

E-mail: shakkeda@gmail.com

Evelyn Jonas is a research assistant and Ph.D. candidate at the Institute for Communication Science at Technische Universität Braunschweig, Germany. Her research focuses on user perceptions of trustworthiness and the use of (Gen)AI as an intermediary for complex and science-related information.

E-mail: evelyn.jonas@tu-braunschweig.de

Inbal Klein-Avraham is a postdoctoral fellow at the Faculty of Education in Science and Technology, Technion — Israel Institute of Technology. Her current research focuses on publics’ engagement with science-related information via generative AI. Her previous studies have been published in, inter alia, New Media & Society and Journalism Studies.

E-mail: inbal.klein@campus.technion.ac.il

Monika Taddicken is professor and chair in Communication Science at the Technische Universität Braunschweig (Germany). Her research interests include science communication with a special focus on new media environments and user engagement.

E-mail: m.taddicken@tu-braunschweig.de

Torben E. Agergaard is a Ph.D. student at Aarhus University in the field of science studies. His project concerns ethical and epistemic aspects of explainable artificial intelligence.

E-mail: ta@css.au.dk

Becca Beets is an assistant professor in the Department of Communication at the University of Maryland, College Park. Her research examines emerging areas of science and technology, with a focus on public opinion, engagement, and science communication.

E-mail: beets@umd.edu Bluesky: @beccabeets

Dominique Brossard is professor and chair of the Department of Life Sciences Communication at the University of Wisconsin-Madison and principal investigator with the Morgridge Institute for Research. Her research is situated at the intersection of science, media and policy as related to new technologies.

E-mail: dbrossard@wisc.edu Bluesky: @brossard

Anwesha Chakraborty is a postdoctoral fellow at the Department of Communications, Humanities and International Studies (DISCUI) at the University of Urbino Carlo Bo in Italy. Her research focuses broadly on the study of technology as a social good. She has contributed to a wide range of projects related to digital governance, responsible innovation, and, more recently, the use of generative AI in combating disinformation on social media.

E-mail: anwesha.chakraborty@uniurb.it Bluesky: @informativeicts

Antoinette Fage-Butler is an Associate Professor in the School of Communication and Culture at Aarhus University whose research interests centre on the communication of science, risk and trust.

E-mail: fage-butler@cc.au.dk Bluesky: @afagebutler

Chun-Ju Huang is a distinguished professor at the General Education Center of National Chung Cheng University. His research primarily focuses on science communication, public understanding of science, and general education in the science domain. Currently, his major focus is on scientific uncertainty communication in the post-truth era.

E-mail: cjhuang@ccu.edu.tw Bluesky: @subaru419

Siddharth Kankaria works at the intersection of science communication research, practice and teaching in India. He currently serves on the Scientific Committee of the PCST Network and focuses on developing decolonised and multicultural science engagement practices for the Global South.

E-mail: siddharth.kankaria@gmail.com

Yin-Yueh Lo is an associate professor at the Department of Communications Management, Shih Hsin University, Taiwan. Her research interests center on the public communication of science, with a particular emphasis on cross-cultural variations in scientists’ communication attitudes and practices. Currently, her research includes studying the public relations activities of universities and research institutes in Taiwan.

E-mail: yylo@shu.edu.tw Bluesky: @y-ylo

Lindsey Middleton is a Ph.D. candidate in the Department of Life Sciences Communication at the University of Wisconsin-Madison in the United States. Her research interests lie in health communication around chronic illness and under-studied health conditions, as well as communication about emerging technologies such as AI and CRISPR. Her previous work includes engagement research at a climate science center through an interpersonal communication lens.

E-mail: lmiddleton3@wisc.edu

Kristian H. Nielsen is an Associate Professor in science communication at Aarhus University with a research interest in trust in science, citizen science and science history.

E-mail: khn@css.au.dk

Michelle Riedlinger is a Chief Investigator in the Digital Media Research Centre at the Queensland University of Technology. Her research interests include emerging environmental, agricultural and health research communication practices, roles for “alternative” science communicators, online fact checking, platformised engagement with scientific research, and, most recently, generative authenticity.

E-mail: michelle.riedlinger@qut.edu.au

Hyunjin Song is an Associate Professor in the Department of Communication at Yonsei University, Seoul, South Korea. His research centers on statistical modeling of social networks and explores how algorithmically driven information environments influence individuals’ reception of political and scientific messages, as well as the broader consequences of these processes.

E-mail: hyunjinsong@yonsei.ac.kr

Supplementary material

Available at https://doi.org/10.22323/2.24020205