1 Introduction

Climate change and COVID-19 are prominent examples of how modern societies are characterized by increasing complexity and multiple crises. In such contexts, there is a heightened need for scientific information. Public trust in science is among the most important variables for the public to reduce complexity and engage with this type of information [e.g., Plohl & Musil, 2021; Saffran et al., 2020; Wintterlin et al., 2022]; at the same time, a supposed decline of public trust in science and its implications have recently been discussed [e.g., Neuberger, 2014; Reif & Guenther, 2021; Weingart & Guenther, 2016; for institutional trust: Verboord, 2023]. These researchers name digital media environments, along with their diversity regarding actors, topics, and content, as a potential reason for this decline.1 In many countries, including Germany, large parts of the population that traditionally used journalistic media for information about science now obtain this information online [e.g., European Commission, 2021; Guenther et al., 2022; National Science Board, 2018] — while this includes journalistic online media, the relevance of non-journalistic social and fringe/populist sources should not be underestimated [e.g., Wissenschaft im Dialog, 2023]. In digital media environments, journalistic and non-journalistic actors with various (sometimes vested) interests access digital public spheres and can, therefore, publicly discuss scientific issues [e.g., Huber et al., 2019; Taddicken & Krämer, 2021; Weingart, 2017].

When audiences use various (digital)2 sources to inform themselves about science, these media act as intermediaries of trust in science. Information mediated by (digital) media affects the development of public trust in specific systems, organizations, and individuals [Bentele, 1994; see also Verboord, 2023]. That is why (digital) media are not only objects of trust themselves but also provide cues for trust in objects such as science via content indicators [e.g., see also Kohring, 2016; Reif, 2021; Schäfer, 2016] that provide reasons why (or why not) to trust science and presented scientific information. In this paper, we call these indicators trust cues. While they have already been identified in content about science [see Schröder et al., 2023] and their importance has been shown in audience studies [e.g., studies on how cues in media content affect audiences’ trust in science; Reif et al., 2020; Rosman et al., 2022], research on public trust in science has focused less on intermediaries and has yet to examine if and how the frequency of trust cues in content affects public trust in science. Answering this question can provide insights into which sources of information potentially affect public trust in science (positively or negatively) via their use of trust cues. Studies to date have either focused on the effects of the frequency of media use on public trust in science [e.g., Huber et al., 2019; Wintterlin et al., 2022], often pointing to small or no effects, or on the effects of very specific content [e.g., Hendriks et al., 2020]. We argue that there are two main reasons why science-related information use has not emerged as a (significant or strong) predictor of trust in science: firstly, researchers did not establish the connection to specific content, and secondly, aggregated analyses were performed for overall samples.

Audience segmentation avoids the drawbacks of analyzing overall samples; it can acknowledge the diversity of audiences [e.g., Guenther & Weingart, 2018; Klinger et al., 2022; Schäfer et al., 2018] and group-specific (changes in) trust in science [Reif et al., 2023]. Thus, in the present study, we propose to combine two novel approaches — trust cues in content and audience segmentation in a panel survey — to not only examine if exposure to trust cues in media content about science affects public trust in science but also to assess if this varies across population groups. In contrast to previous research, we will put intermediaries of trust in science in the limelight.

2 Public trust in science, trust cues, and audience groups

2.1 Defining public trust in science

Although this varies across definitions and research traditions (e.g., in sociology, psychology, and communication research), trust is often described as a mechanism to reduce complexity; trust comes into play in situations characterized by (uncertainty and) risk [e.g., Giddens, 1990; Luhmann, 2014]. In addition, public trust in science is an important variable when engaging with scientific information. It plays a crucial role in the relationship between science and its publics [for an overview: Reif & Guenther, 2021]. Seen as a relational variable, trust in science requires at least one subject of trust (i.e., who trusts), which, in this paper, are publics (and public audiences, respectively), and one object of trust (i.e., who is trusted), which is science [see also Giddens, 1990; Luhmann, 2014; Mayer et al., 1995]. More specifically, and with a focus on trust in science, the concept of epistemic trust seems appropriate. This refers to science as a producer of valid knowledge and thus includes aspects of the validity of scientific knowledge and the assessment of science as a secure source of information [e.g., Origgi, 2012; Sperber et al., 2010; see also Wintterlin et al., 2022]. This concept does include the risk of not being informed correctly.

Furthermore, based on established definitions, we define trust in science as a multilevel [e.g., Giddens, 1990; Grünberg, 2014; Luhmann, 2014; Schäfer, 2016] and multidimensional construct [e.g., Bentele, 1994; Besley et al., 2021; Fiske & Dupree, 2014; Hendriks et al., 2015, 2016; Mayer et al., 1995; Reif & Guenther, 2021]. Multilevel means that there is a distinction between science as a system (i.e., macro-level), scientific organizations (i.e., meso-level; e.g., universities or research departments of companies), and scientists [i.e., micro-level; see also Mayer et al., 1995]. Trust assessments may differ in terms of whether science is considered a system or refers to its organizations or scientists. Multidimensional means that when referring to epistemic trust in science, we refer to several established dimensions underlying this construct; some describe them as reasons to trust science. In detail, these are expertise, integrity, benevolence, transparency, and dialogue orientation [Hendriks et al., 2015, 2016; Reif & Guenther, 2021; see also Besley et al., 2021; Resnick et al., 2015; Wintterlin et al., 2022]. According to Reif et al. [2023], expertise can be defined as science’s capacity to recognize, evaluate, and solve problems by applying specialized knowledge acquired through education, experience, and qualifications in the respective research domain. Integrity indicates science’s objectivity, validity, and reliability, achieved through adherence to scientific standards and processes. This includes highlighting science’s methodological approach, focusing on quality control, and emphasizing its independence from external influences. Benevolence means that science has the ultimate goal of improving people’s lives and promoting the advancement of societal welfare. This definition includes referring to the social responsibility of science and the representation of scientific research as adhering to ethical norms and moral values.
Transparency means that scientific research and knowledge are accessible to public audiences and comprehensible to all. Lastly, dialogue orientation refers to how science actively engages with and encourages interaction with the public through activities such as public lectures or citizen science projects.

With these definitional steps in mind, we will examine if exposure to trust cues in media content about science affects public trust in science (across population groups).

2.2 Trust cues in content about science

Deriving from the role of (digital) media as intermediaries of scientific information, we rely on Bentele’s [1994] theory of public trust, which considers intermediaries a significant factor in trust relationships. Following this theoretical approach, the formation of trust in any object of trust, such as science, is strongly influenced by information presented by the media; this includes the media’s representation of facts and events. That is why we have developed the concept of trust cues, defined as indicators, i.e., specific language characteristics or linguistic markers, in content about science [see also Bentele, 1994; Kohring, 2016; Reif, 2021; Schäfer, 2016], which may give hints as to how much to trust science and scientific information, respectively. Trust cues can be assigned to the different levels represented (science as a system, scientific organizations, or scientists) and, because of their focus on reasons to trust science, to different dimensions, thus forming expertise, integrity, benevolence, transparency, and dialogue cues. So far, few studies have focused on such cues [see Welzenbach-Vogel et al., 2021] or provided insights that can be interpreted as trust cues. Indeed, trust cues have been found for all established trust dimensions. For instance, expertise cues could link to information about the organizational background of presented research(ers) or publications [e.g., Hijmans et al., 2003], while integrity cues could refer to funding sources, relevant methodological criteria, as well as scientific uncertainties [e.g., Cook et al., 2007; Guenther et al., 2019].

Based on the theory of public trust [Bentele, 1994] and a lack of research on this issue, Schröder et al. [2023] focused on identifying trust cues provided by intermediaries of public trust in science. They used qualitative content analysis on a comprehensive and representative sample of 158 media pieces (drawn from journalistic, non-journalistic online, social, and right-wing populist media). With a working definition in mind of what expertise, integrity, benevolence, transparency, and dialogue orientation mean (see above), two coders used a mainly inductive approach to openly collect all trust-relevant criteria mentioned in these pieces, along with the respective levels (macro, meso, or micro3). Multiple dimensions were counted within a single source. A list of n = 1,329 cues was then condensed in several iterative steps and linked to the established dimensions of trust [for more information, see Schröder et al., 2023]. In that way, 35 trust cues were identified and summarized, and the dimensions of trust were specified further: For expertise, trust cues pertain to academic education (e.g., where scientists studied, obtained their PhD), professional experience (e.g., how long scientists have been working in a field), and qualification (e.g., degrees, positions, criteria indicating reputation such as prizes). For integrity, the cues focus on independence (e.g., from clients, funders), scientific quality assurance (e.g., peer review, uncertainty), and scientific standards and processes (e.g., collaborations, publications, descriptions of the research (process)). Benevolence cues address ethical norms (e.g., misconduct), social responsibility (e.g., predictions and assessments of current affairs), and societal benefits (e.g., breakthroughs, discoveries, applicability). 
Lastly, for transparency, trust cues refer to the accessibility of results (e.g., making them publicly available) and comprehensible language, while for dialogue orientation, trust cues refer to the participation at public events, media presence (e.g., interviews, talk shows, or presence on social media), and public engagement in research.

The identified trust cues also varied across (digital) media sources [Schröder & Guenther, 2024]. This variation seems crucial because digital media environments include many voices, actors, and contents representing various interests — some journalistic, some not, and not all communicators count as experts [see also Taddicken & Krämer, 2021; Weingart & Guenther, 2016]. Thus, the kind of (digital) media used likely affects public trust in science. For instance, “although not all content in alternative counter-news is fake news, these outlets do attract a specific […] audience” [Frischlich et al., 2023, p. 80], and their representations of science may affect trust in science.

However, studies so far have not tested if exposure to trust cues affects public trust in science; instead, studies have investigated the connection between the frequency of (science-related) media use and trust in science. The findings in this area are mixed. Wintterlin et al. [2022] did not detect a relationship between media use and trust in science. At the same time, there is some indication that social media use can positively predict trust in science [Huber et al., 2019]. For online sources more broadly, Takahashi and Tandoc [2015] present evidence of a negative connection. Trust in science also shows a negative connection to social media use in Schäfer et al. [2022] [for similar findings related to institutional trust, see Reinemann et al., 2022]. The mixed evidence is reminiscent of research in political communication, where various distinctive and yet incompatible hypotheses about the relationship between media use and trust exist [e.g., media use either positively (“virtuous circle” hypothesis) or negatively (“media malaise” hypothesis) predicts political trust; Verboord, 2023].

Consequently, we see at least two obstacles in research to date: the first is that media use, rather than exposure to trust cues, has been tested, and the second is that the evidence of an effect of media use is mixed at best. This calls for further research, which is why our first research question (RQ1) is: To what extent does exposure to trust cues in (digital) content about science affect trust in science?

2.3 Audience groups of public trust in science

Given that segmentation analyses have developed into a popular and useful approach in science communication research [e.g., Guenther & Weingart, 2018; Klinger et al., 2022; Schäfer et al., 2018], for instance, to arrive at more targeted communication, Reif et al. [2024] have proposed to use this technique to identify so-called trust groups.4 In such a setting, variables referencing public trust in science are used to identify groups within a population.

Reif et al. [2024] used measures relating to the five dimensions of trust introduced earlier (i.e., expertise, integrity, benevolence, transparency, and dialogue orientation), each with three items formulated as reasons to trust in scientists (i.e., “Scientists can be trusted because they…”). These items were presented in a randomized order with five response options (from “strongly disagree” to “strongly agree”). Later, Reif et al. [2024] computed mean indices of these five dimensions and performed a Latent Profile Analysis (using the tidyLPA package in R) to identify groups of trust in science. Following both scree plots and fit indices, they identified five distinct groups and validated this finding with an additional discriminant analysis (96% correctly classified cases).
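The group-identification logic described above can be illustrated in code. The original analysis used the tidyLPA package in R; the following is only a rough Python analogue, using Gaussian mixture models (the model family underlying LPA) on simulated data, with the number of profiles chosen by BIC. All data and the three-profile setup are invented for illustration.

```python
# Hypothetical sketch of Latent-Profile-style group identification.
# The actual study used tidyLPA in R; this Python analogue fits Gaussian
# mixtures to simulated five-dimension trust indices and picks the
# profile count with the lowest BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated mean indices for the five trust dimensions (1-5 scale),
# drawn around three artificial "profiles" for illustration only.
centers = np.array([[4.5, 4.4, 4.3, 4.0, 3.9],
                    [3.0, 3.1, 3.0, 2.9, 2.8],
                    [1.8, 1.9, 2.0, 1.7, 1.6]])
X = np.vstack([c + rng.normal(0, 0.3, size=(200, 5)) for c in centers])

# Fit 1- to 6-profile solutions and compare BIC (lower is better).
bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=5, random_state=0).fit(X)
    bics[k] = gm.bic(X)
best_k = min(bics, key=bics.get)
print(best_k)  # with this simulated data, the 3-profile solution should win
```

In the study itself, scree plots and additional fit indices complemented BIC, and the chosen solution was validated with a discriminant analysis.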

Hence, the trust groups considered here [see Reif et al., 2024] are, in descending order regarding their trust in science: “Fully trusting,” who show complete trust in science and only slightly more agreement to science’s expertise than integrity and benevolence. “Highly trusting” and “Moderately trusting” give special importance to science’s expertise, though the score for “Moderately trusting” was slightly below the mean value. “Rather untrusting” are not trusting as much (and, e.g., only moderately in science’s expertise), and a fifth group is “Untrusting” towards science.

Notably, these groups also showed variance concerning the frequency with which they used media sources to inform themselves about science. “Fully trusting” used all sources the most frequently, especially public television (TV), but social media less often. “Highly trusting” and “Moderately trusting” also often used public TV. “Rather untrusting” used the sources less frequently, and “Untrusting” were, in comparison, more frequent users of populist media.

This is reminiscent, to some degree, of studies focusing on media repertoires, with outcomes ranging from minimalists who use media infrequently to omnivores who use them very frequently [see Verboord, 2023]. In total, it seems likely that the way population groups use different media sources, combined with the fact that these sources may differ with respect to how they represent trust cues, affects trust in science. From this perspective, this can potentially explain why some groups may not experience changes in their trust in science while others may experience increases or decreases regarding their trust in science. However, research on this needs to be expanded upon. Consequently, RQ2 reads: To what extent does exposure to trust cues in (digital) content about science affect trust in science across trust groups?

3 Methods

Answering the RQs requires a mixed-method design. This linkage study combines content analysis with panel survey data in two waves. Our focus is on Germany — the biggest economy in Europe, with a tradition of public surveys on perceptions of science and technology, which show variations with respect to trust in science [e.g., Wissenschaft im Dialog, 2023].

3.1 Content analysis to identify trust cues in content about science

For the whole year between the two waves of the panel survey (March 2022–March 2023), in seven constructed weeks, we collected data from the most important sources that public audiences in Germany use to inform themselves about science [e.g., European Commission, 2021; Wissenschaft im Dialog, 2023].5 The sources include relevant journalistic media, incorporating TV newscasts and special science TV programs, print and online newspapers, weekly news magazines/newspapers, and special science magazines. We also included right-wing populist, non-mainstream media sources. For broader inclusion of digital contexts, we chose popular science blogs and online news aggregators as further (in many cases non-journalistic) online media. Lastly, we selected several popular social media and non-journalistic accounts (see Table 1 for all media sources). For data collection, we relied on various databases and approaches and, where possible, used established search strings [Guenther et al., 2019; for more details, see also Schröder et al., 2023].

In total, n = 10,244 pieces of information were collected and then manually checked for relevance (n = 1,812); relevance was determined by two requirements: the presence of both a scientific object of trust and a trust cue. For the present study, we used data from a quantitative content analysis applied to half of the relevant pieces identified (random selection; n = 906). Based on the qualitative study [Schröder et al., 2023], a standardized codebook, which included the 35 trust cues mentioned earlier, was developed [see also Schröder & Guenther, 2024]. Four coders were trained and conducted the coding after their reliability was tested successfully (ranges are α = .74–.99; CR = .85–1). This paper will use the average number of trust cues per media source, as shown in Table 1. The table shows that the highest number of trust cues was found in science magazines and the lowest across social media. In the 906 pieces, 5,932 trust cues were coded — which yields an average of 6.55 trust cues per piece for the whole sample.

Table 1: Average number of trust cues and media use per media source.

3.2 Panel survey in two waves

For the panel data, we made use of YouGov’s online access panel, representative of Germans over the age of 18 at t1, achieved via quota plans [t1: March/April 2022, n = 4,824; t2: March/April 2023, n = 1,030; for more information, see Reif et al., 2023, 2024].6 The survey at t1 was our baseline measurement of trust in science; the survey at t2 was our follow-up one year after data collection to measure changes regarding trust in science. The questionnaires were fairly similar between t1 and t2.

The surveys contained the five dimensions of trust (expertise, integrity, benevolence, transparency, and dialogue orientation), each represented by three items measuring reasons to trust scientists [see also Reif et al., 2023]. As mentioned before, these items were used to identify groups of trust in science. In the following, we base our study on the 1,030 individuals who participated in both waves. The five groups, for t1, reached the following frequencies: “Fully trusting” (n = 163; 16%), “Highly trusting” (n = 230; 22%), “Moderately trusting” (n = 250; 24%), “Rather untrusting” (n = 207; 20%), and “Untrusting” (n = 180; 18%).

Further central variables for this paper are four additional direct measurements of trust in science, which — for both t1 (M = 3.16; SD = 1.02; α = .92) and t2 (M = 3.10; SD = .96; α = .92)7 — were used to create an index8 based on the items capturing trust in science (macro), scientists at universities and research institutes as well as scientists in private companies/industry (2 items, meso), and scientists (micro), to represent all levels (scale from 1 “do not trust at all” to 5 “trust a great deal”). At both points in time, respondents were also asked about their science information-specific media use — of which the 13 items/categories represented in Table 1 corresponded between content analysis and panel survey (scale from 1 “never” to 5 “very often”). In a first step, we created mean variables between t1 and t2 to mirror self-reported media use across the whole year of data collection (see Table 1). The data showed that survey respondents most frequently encountered science via public TV and least frequently through tabloid newspapers and microblogging services like X (formerly Twitter).
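The two preprocessing steps just described (averaging the four level items into a trust index, and averaging media use across the two waves) can be sketched as follows. The respondent values are invented for illustration; the real data comprise 1,030 panelists.

```python
# Illustrative sketch with hypothetical respondent data: building the
# direct trust-in-science index from the four level items, and averaging
# science-related media use across the two panel waves.
import numpy as np

# Four direct trust items per respondent: macro, meso (universities),
# meso (industry), micro -- each on a 1 ("do not trust at all") to 5 scale.
trust_items_t2 = np.array([
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
])
trust_index_t2 = trust_items_t2.mean(axis=1)  # one index value per respondent

# Media use for one source, measured at both waves (1 "never" - 5 "very often");
# the analysis uses the t1/t2 mean as use across the year of data collection.
use_t1 = np.array([5, 1, 4])
use_t2 = np.array([4, 1, 5])
use_mean = (use_t1 + use_t2) / 2

print(trust_index_t2)  # [3.75 2.25 4.75]
print(use_mean)        # [4.5 1.  4.5]
```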

3.3 Linking content analysis and panel survey data

We created linkage variables to link the content analysis with the panel survey data [for an overview, see De Vreese et al., 2017]. We refer to them as “trust cue exposure variables” (TCE) and created one for each of the 13 media sources that corresponded between content analysis and survey. As the following formula shows, each exposure variable (TCE_source) is the product of two factors: source-specific media use (u_source ∈ {0; .5; 1}, the average of media use frequencies between t1 and t2, weighted as no (0, for M = 1.0), moderate (.5, for M = 1.1–3.0), or high (1, for M = 3.6–5.0) use [see Wirz et al., 2018]) and the relative frequency of trust cues (the average number of trust cues per source, trust cues_source / n_source, divided by the average number of trust cues across all sources, trust cues_all / n_all, which is 6.55). Finally, all values were z-standardized. Higher values of the TCE variables indicate an overrepresented exposure to trust cues. The formula is as follows:

TCE_source = u_source × ((trust cues_source / n_source) / (trust cues_all / n_all))
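A minimal sketch of this computation, with variable names of our own choosing and the use-weighting thresholds as reported in the text; the per-source cue counts in the example are invented, while the overall figures (5,932 cues in 906 pieces, i.e., 6.55 per piece) come from the content analysis:

```python
# Sketch of the linkage ("trust cue exposure") variable following the
# formula above. Function and variable names are hypothetical.
import numpy as np

def use_weight(mean_use):
    """Weight averaged media use: no (0), moderate (.5), or high (1) use."""
    if mean_use <= 1.0:
        return 0.0
    if mean_use <= 3.0:
        return 0.5
    return 1.0  # high use (M = 3.6-5.0 in the reported scheme)

def tce(mean_use, cues_source, n_source, cues_all, n_all):
    """TCE_source = u_source * (cues per piece in source / cues per piece overall)."""
    relative_cues = (cues_source / n_source) / (cues_all / n_all)
    return use_weight(mean_use) * relative_cues

# Example: a frequently used source whose pieces carry trust cues at
# roughly twice the overall rate, a moderately used one, and an unused one.
scores = np.array([tce(4.2, 655, 50, 5932, 906),
                   tce(2.0, 200, 60, 5932, 906),
                   tce(1.0, 120, 40, 5932, 906)])
z = (scores - scores.mean()) / scores.std()  # z-standardize, as in the study
print(np.round(scores, 2))
```

Positive z-values then indicate overrepresented exposure to trust cues relative to the other source/respondent combinations.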

Notably, our linkage variable is one option among many and uses proxies for both the frequency of trust cues across media sources and for science information-specific media use. For the statistical analysis, we determined trust in science at t2 to be the dependent variable. In hierarchical regression analyses, we tested the same indicator of trust in science at t1,9 the 13 trust cue exposure variables, and sociodemographic information as predictors in three models — for the whole sample (RQ1) and for the groups based on dimensions of trust (RQ2).
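The blockwise logic of such hierarchical regressions can be sketched with simulated data: Model 1 enters trust at t1, Model 2 adds a trust cue exposure variable, Model 3 adds a sociodemographic variable, and one tracks the gain in explained variance at each step. This is a generic illustration, not a reproduction of the study's models; all coefficients and data are invented.

```python
# Hedged sketch of hierarchical (blockwise) OLS regression with simulated
# data, tracking R^2 across nested models (plain numpy, no stats package).
import numpy as np

rng = np.random.default_rng(1)
n = 500
trust_t1 = rng.normal(3.2, 1.0, n)   # baseline trust (hypothetical)
tce_tv = rng.normal(0, 1, n)         # z-standardized exposure variable
education = rng.integers(1, 6, n)    # sociodemographic control
# Simulated outcome: mostly stable trust plus a small exposure effect.
trust_t2 = 0.7 * trust_t1 + 0.1 * tce_tv + rng.normal(0, 0.5, n)

def r2(predictors, y):
    """R^2 of an OLS fit with intercept, via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_m1 = r2([trust_t1], trust_t2)                      # Model 1: trust at t1
r2_m2 = r2([trust_t1, tce_tv], trust_t2)              # Model 2: + exposure
r2_m3 = r2([trust_t1, tce_tv, education], trust_t2)   # Model 3: + sociodemographics
print(round(r2_m2 - r2_m1, 3))  # small increment from adding exposure
```

As in the study's findings, adding the exposure block to a strong autoregressive predictor typically yields only a small increment in explained variance.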

4 Results

The findings, with respect to RQ1 (effects of trust cue exposure on trust in science), revealed that trust in science at t2 was significantly predicted by trust in science at t1 (see Table 2); across all models (also concerning RQ2), this variable was always the strongest predictor. This finding and the means reported show that trust in science was a stable construct for our respondents and that their trust assessments did not change much over the year between the survey waves.

In addition, in Model 2, trust cue exposure on public TV was a positive but weak predictor of trust in science at t2, while trust cue exposure in populist media was a negative and weak predictor. This means that the more respondents were exposed to trust cues on public TV, the higher their trust in science was at t2. At the same time, their trust in science at t2 was higher the less they were exposed to trust cues in populist media.

Including trust cue exposure variables in Model 2 only slightly increased the explained variance of the analysis, which underlines the stability of trust in science assessments over time.

Table 2: Predicting trust in science (t2) via linear regression models.

The effects of trust cue exposure on the whole sample were rather limited in total, but this may not be the case for specific trust groups. This is the focus of RQ2 (effects of trust cue exposure on trust in science per group); Table 3 presents the final models for all groups.

Table 3: Predicting trust in science (t2) via linear regression models across trust groups.

For “Fully trusting,” only trust in science at t1 was a significant predictor of trust in science at t2, probably again indicating stable and high trust, especially for those respondents with the highest trust scores.

In the case of “Highly trusting”, despite trust in science at t1 being the strongest predictor (i.e., the more they trusted science at t1, the higher their trust in science was at t2), trust in science at t2 was also negatively influenced by trust cue exposure in science blogs. This means that the less respondents were exposed to trust cues in science blogs, the higher their reported trust in science was at t2.

Exposure to trust cues on public TV and in science magazines was a positive predictor of trust in science at t2 for “Moderately trusting,” alongside trust in science at t1, which was again the strongest predictor. Hence, the higher the trust respondents reported at t1, and the more they were exposed to trust cues on public TV and in science magazines, the higher their trust in science at t2. For “Moderately trusting,” one of the sociodemographic variables also had a significant effect: education. Hence, the more educated respondents were, the higher their trust in science at t2.

For “Rather untrusting”, none of the TCE variables had an effect; however, trust in science at t1 predicted trust in science at t2 positively.

Lastly, for “Untrusting,” trust in science at t2 was negatively predicted by trust cue exposure in populist media and positively predicted by trust in science at t1. This means that the higher the trust respondents stated at t1, the higher their trust in science was at t2. In addition, the less they were exposed to trust cues in populist media, the higher their trust in science was at t2.

5 Discussion

Certainly, the need for scientific information is a crucial characteristic of modern societies. We have defined public trust in science as a central variable in any relationship between science and its publics [e.g., Reif et al., 2024; Wintterlin et al., 2022]. Derived from the supposed decline of public trust in science and its assumed connection to digital media environments, we explored the role of intermediaries on public trust in science. More specifically, in a linkage study, we combined the content and audience perspectives when examining if exposure to trust cues in (digital) media content about science affects public trust in science and whether this varies across population groups.

We emphasized that answering this question would provide insights into what sources of information potentially affect public trust in science (positively or negatively) via their representation of trust cues. The findings indicate that trust in science seems to be a relatively stable construct within the present study’s sample — at least in the year between the two data collection points. Thus, they seem to reflect the asymmetry in trust formation described by Slovic [1993]. According to this, it takes a long time to build trust, for instance, through several confirmations that this trust is well placed or through positive events, but only a moment to break it, for instance through negative events, which often receive higher (media) visibility and have more impact (“Trust is fragile”; p. 677). Hence, perhaps no major event in the year between the two data collections supported building more trust in science or eroded the already existing trust. At the same time, at t1, COVID-19 and potential negative events associated with it may have been more present when respondents made their judgments.

Findings, however, also point to the nuanced nature of trust-assessing processes in (fragmented) digital media environments. For the whole sample (RQ1), trust cue exposure on public TV was a weak positive predictor of trust in science — which may be explained by the content analysis revealing that public TV uses trust cues at a moderate frequency in its content; at the same time, the panel survey revealed that it is the source through which most German audiences are exposed to scientific information. Reports on public TV are often longer, and there are specific science TV programs offered by German public TV channels [see Ruhrmann et al., 2015]. Naturally, there may be specific audience characteristics to consider here: not everyone watches public TV and specialized programs. It could be that only people with a special interest in science watch these programs and probably pay close attention to the content. At the same time, only specific audiences would likely pay attention to content in populist media [see also Frischlich et al., 2023], which showed a weak but negative effect on trust in science at t2. The content analysis showed that populist media, indeed, also use trust cues [although differently, see Schröder & Guenther, 2024], and the panel survey showed that they are not used by many respondents but still by some. We think that this finding should motivate researchers to pay closer attention to how populist media mediate trust in science.

The overall findings outlined were not replicated for all trust groups. The segmentation approach generally showed more variety and stronger effects — accounting for the diversity of science communication audiences [see Schäfer et al., 2018; Schäfer et al., 2022]. “Fully trusting” and “Rather untrusting” were unaffected by trust cue exposure to any of the 13 media sources we tested — at the same time, for both groups, the explained variance was the highest across all groups. For “Fully trusting,” this may be explained by the fact that they already reached the highest trust scores and may experience a particularly stable trust in science [see also Reif et al., 2024]. This assumption, as well as an explanation for the “Rather untrusting,” should be investigated further. The findings may also be linked to the null effects found in some studies testing the effect of media use on trust in science [e.g., Wintterlin et al., 2022]. If trust in science is indeed a stable construct, showing little variation over a whole year, then this may explain why previous studies did not show (strong) effects or why their effects did not point in a clear direction.

Regarding the trust groups, trust in science at t2 of “Highly trusting” was negatively affected by trust cue exposure in science blogs. In this case, it would be worthwhile to know more about the specific blogs respondents used or had in mind when making their assessment. Although the current analysis is novel in its approach, it is not fine-grained enough to shed more light on this and provide explanations. This should remind researchers that in future studies, we need to find ways to ask for media use in more detail than we often do today, which would provide further explanations.

At the same time, the “Moderately trusting” (the largest trust group) were positively affected by trust cue exposure on public TV (as was the overall sample) and in science magazines. This was the only group in which one of the sociodemographic variables, education, had a significant effect. Furthermore, the nuanced nature of trust-assessment processes in digital media environments was also apparent for the “Untrusting”: for them, trust cue exposure in populist media was a negative predictor (as in the overall sample). This reflects, to some degree, the potential negative effects of online media on trust in science stated in previous research [e.g., Takahashi & Tandoc, 2015]. Reif et al. [2024] had already established that the “Untrusting” are indeed more frequent users of populist media. In total, the present study thus found (unique) effects of some journalistic and populist (online) media for some population groups; surprisingly, however, we found no effects for journalistic media such as (online) newspapers, or for social media and other online media, although many of them contained quite a number of trust cues and are used frequently by audiences.

Nevertheless, the findings give some direction on how to reach specific audiences of science communication when the goal is to affect their trust in science; such targeted communication is often defined as the central goal of audience segmentation [see also Guenther & Weingart, 2018; Klinger et al., 2022]. It is worth noting that where genuine journalistic sources had an effect on trust in science, it was always positive; for further online media, such as blogs and populist media, it was always negative. Hence, our study may be placed between the “media malaise” and “virtuous circle” hypotheses stated for political communication [see Verboord, 2023].

While the present study pursued the central goal of putting intermediaries in the limelight, it has limitations. Since this is a linkage study, both methodologies (content analysis and panel survey) and their connection have limitations; we focus on the most important ones here. For the content analysis, these concern the selected sample, the use of constructed weeks, and the selection of subsamples for particular analytical steps. Our findings on the frequency of trust cues can therefore only be seen as a proxy. The selected media sources also differ considerably, for instance, in the length of their content. Furthermore, we treated all identified trust cues as equally weighted, although their importance may vary among audiences; that is why we propose testing the identified trust cues in interviews or focus groups. For the panel survey, limitations relate to the representativeness of the data (which was only assured for t1), the newly developed measures [see Reif et al., 2023], including those on science information-specific media use, which are also only a proxy and do not fully capture the diversity of actors and content in digital media, and the use of interpretative techniques such as Latent Profile Analysis to identify groups of trust in science. Further, although developed after thorough consideration of the data, our linkage variables are only one potential way of linking content analytical and survey data [see De Vreese et al., 2017]. We also did not include covariates in our regressions, as this was beyond the scope of the RQs, but this should be tested in future studies.
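For readers less familiar with Latent Profile Analysis, its grouping logic can be sketched with a closely related Gaussian mixture model. The following is an illustrative sketch on synthetic data only; the number of profiles, the items, and all values are hypothetical and do not reproduce the study's actual specification:

```python
# Illustrative sketch only: Latent Profile Analysis assigns respondents to
# latent groups based on their response profiles. A Gaussian mixture model
# (scikit-learn) implements the same core idea for continuous indicators.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two hypothetical latent groups on three trust-dimension items (1-5 scales):
high_trust = rng.normal([4.5, 4.3, 4.4], 0.3, size=(100, 3))
low_trust = rng.normal([2.0, 2.2, 1.9], 0.3, size=(100, 3))
items = np.vstack([high_trust, low_trust])

# Fit a two-profile model and assign each respondent to a profile
gmm = GaussianMixture(n_components=2, random_state=0).fit(items)
profiles = gmm.predict(items)
print(np.bincount(profiles))  # sizes of the two recovered profiles
```

In the actual study, model selection (e.g., the number of profiles) and interpretation follow the procedures reported by Reif et al. [2023].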

Hence, this study is a small but important step toward exploring the connection between a potential decline of public trust in science and digital media environments. A central advancement of our study is that it analyzes trust in science as a multilevel and multidimensional construct, combining novel approaches in content analysis (i.e., the measurement of trust cues) and audience segmentation (i.e., the identification of trust groups). In many ways, our approach was more detailed than previous work, for instance, in not working with frequencies of media use alone and in not performing analyses with aggregated data only. We have laid out quantitative connections; in a next step, these deserve explanation and enrichment, which we propose to achieve through qualitative interviews with members of the trust groups.

Acknowledgments

This research is part of the project ‘The trust relationship between science and digitized publics’ (TruSDi), funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 456602133. Grant applicants are Lars Guenther (GU 1674/3-1) and Monika Taddicken (TA 712/4-1). The project is coordinated by Anne Reif and supported by Peter Weingart in an advisory capacity. Further members of the research group are Justin T. Schröder, Evelyn Jonas, and Janise Brück. We would like to thank Kajal Premnath for proofreading this article.

References

Bentele, G. (1994). Öffentliches Vertrauen — normative und soziale Grundlage für Public Relations. In Normative Aspekte der Public Relations (pp. 131–158). VS Verlag für Sozialwissenschaften. https://doi.org/10.1007/978-3-322-97043-5_7

Besley, J. C., Lee, N. M., & Pressgrove, G. (2021). Reassessing the variables used to measure public perceptions of scientists. Science Communication, 43, 3–32. https://doi.org/10.1177/1075547020949547

Cook, D. M., Boyd, E. A., Grossmann, C., & Bero, L. A. (2007). Reporting science and conflicts of interest in the lay press (P. Middleton, Ed.). PLoS ONE, 2, e1266. https://doi.org/10.1371/journal.pone.0001266

De Vreese, C. H., Boukes, M., Schuck, A., Vliegenthart, R., Bos, L., & Lelkes, Y. (2017). Linking survey and media content data: opportunities, considerations and pitfalls. Communication Methods and Measures, 11, 221–244. https://doi.org/10.1080/19312458.2017.1380175

European Commission. (2021). European citizens’ knowledge and attitudes towards science and technology (Special Eurobarometer Nr. 516). https://doi.org/10.2775/071577

Fiske, S. T., & Dupree, C. (2014). Gaining trust as well as respect in communicating to motivated audiences about science topics. Proceedings of the National Academy of Sciences, 111, 13593–13597. https://doi.org/10.1073/pnas.1317505111

Frischlich, L., Kuhfeldt, L., Schatto-Eckrodt, T., & Clever, L. (2023). Alternative counter-news use and fake news recall during the COVID-19 crisis. Digital Journalism, 11, 80–102. https://doi.org/10.1080/21670811.2022.2106259

Giddens, A. (1990). The consequences of modernity. Polity Press.

Grünberg, P. (2014). Vertrauen in das Gesundheitssystem. Springer.

Guenther, L., Bischoff, J., Löwe, A., Marzinkowski, H., & Voigt, M. (2019). Scientific evidence and science journalism: analysing the representation of (un)certainty in German print and online media. Journalism Studies, 20, 40–59. https://doi.org/10.1080/1461670x.2017.1353432

Guenther, L., Reif, A., Taddicken, M., & Weingart, P. (2022). Positive but not uncritical: perceptions of science and technology amongst South African online users. South African Journal of Science, 118. https://doi.org/10.17159/sajs.2022/11102

Guenther, L., & Weingart, P. (2018). Promises and reservations towards science and technology among South African publics: a culture-sensitive approach. Public Understanding of Science, 27, 47–58. https://doi.org/10.1177/0963662517693453

Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring laypeople’s trust in experts in a digital age: the Muenster Epistemic Trustworthiness Inventory (METI) (J. M. Wicherts, Ed.). PLOS ONE, 10, e0139309. https://doi.org/10.1371/journal.pone.0139309

Hendriks, F., Kienhues, D., & Bromme, R. (2016). Evoking vigilance: would you (dis)trust a scientist who discusses ethical implications of research in a science blog? Public Understanding of Science, 25, 992–1008. https://doi.org/10.1177/0963662516646048

Hendriks, F., Kienhues, D., & Bromme, R. (2020). Replication crisis = trust crisis? The effect of successful vs failed replications on laypeople’s trust in researchers and research. Public Understanding of Science, 29, 270–288. https://doi.org/10.1177/0963662520902383

Hijmans, E., Pleijter, A., & Wester, F. (2003). Covering scientific research in Dutch newspapers. Science Communication, 25, 153–176. https://doi.org/10.1177/1075547003259559

Huber, B., Barnidge, M., Gil de Zúñiga, H., & Liu, J. (2019). Fostering public trust in science: the role of social media. Public Understanding of Science, 28, 759–777. https://doi.org/10.1177/0963662519869097

Klinger, K., Metag, J., Schäfer, M. S., Füchslin, T., & Mede, N. (2022). Are science communication audiences becoming more critical? Reconstructing migration between audience segments based on Swiss panel data. Public Understanding of Science, 31, 553–562. https://doi.org/10.1177/09636625211057379

Kohring, M. (2016). Misunderstanding trust in science: a critique of the traditional discourse on science communication. JCOM, 15, C04. https://doi.org/10.22323/2.15050304

Luhmann, N. (2014). Vertrauen: Ein Mechanismus der Reduktion sozialer Komplexität. UVK.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20, 709. https://doi.org/10.2307/258792

National Science Board. (2018). Science & engineering indicators 2018. National Science Foundation. https://www.nsf.gov/statistics/2018/nsb20181/assets/nsb20181.pdf

Neuberger, C. (2014). Konflikt, Konkurrenz und Kooperation: Interaktionsmodi in einer Theorie der dynamischen Netzwerköffentlichkeit. Medien & Kommunikationswissenschaft, 62, 567–587. https://doi.org/10.5771/1615-634x-2014-4-567

Origgi, G. (2012). Epistemic injustice and epistemic trust. Social Epistemology, 26, 221–235. https://doi.org/10.1080/02691728.2011.652213

Plohl, N., & Musil, B. (2021). Modeling compliance with COVID-19 prevention guidelines: the critical role of trust in science. Psychology, Health & Medicine, 26, 1–12. https://doi.org/10.1080/13548506.2020.1772988

Reif, A. (2021). Mehr Raum für Vertrauen? Potenzielle Veränderungen des Vertrauens in Wissenschaft durch partizipative Onlineumgebungen. In T. Döbler, C. Pentzold & C. Katzenbach (Eds.), Neue Schriften zur Online-Forschung: volume 16. Räume digitaler Kommunikation: Lokalität — Imagination — Virtualisierung (pp. 210–243). Herbert von Halem.

Reif, A., & Guenther, L. (2021). How representative surveys measure public (dis)trust in science: a systematisation and analysis of survey items and open-ended questions. Journal of Trust Research, 11, 94–118. https://doi.org/10.1080/21515581.2022.2075373

Reif, A., Kneisel, T., Schäfer, M., & Taddicken, M. (2020). Why are scientific experts perceived as trustworthy? Emotional assessment within TV and YouTube videos. Media and Communication, 8, 191–205. https://doi.org/10.17645/mac.v8i1.2536

Reif, A., Taddicken, M., Guenther, L., Schröder, J. T., & Weingart, P. (2023). The public trust in science scale: a multilevel and multidimensional approach [Preprint]. https://doi.org/10.31219/osf.io/bp8s6

Reif, A., Taddicken, M., Guenther, L., Schröder, J. T., & Weingart, P. (2024). Back to a moderate level of trust after the pandemic? Results from a two-wave panel study on trust in science among digitised publics in Germany. 74th Annual Conference of the International Communication Association (ICA).

Reinemann, C., Haas, A., & Rieger, D. (2022). “I don’t care, ’cause I don’t trust them!” The impact of information sources, institutional trust and right-wing populist attitudes on the perception of the COVID-19 pandemic during the first lockdown in Germany. Studies in Communication and Media, 11, 132–168. https://doi.org/10.5771/2192-4007-2022-1-132

Resnick, H. E., Huddleston, N., & Sawyer, K. (2015). Trust and confidence at the interfaces of the life sciences and society: does the public trust science? A workshop summary. The National Academies Press. https://doi.org/10.17226/21798

Rosman, T., Bosnjak, M., Silber, H., Koßmann, J., & Heycke, T. (2022). Open science and public trust in science: results from two studies. Public Understanding of Science, 31, 1046–1062. https://doi.org/10.1177/09636625221100686

Ruhrmann, G., Guenther, L., Kessler, S. H., & Milde, J. (2015). Frames of scientific evidence: how journalists represent the (un)certainty of molecular medicine in science television programs. Public Understanding of Science, 24, 681–696. https://doi.org/10.1177/0963662513510643

Saffran, L., Hu, S., Hinnant, A., Scherer, L. D., & Nagel, S. C. (2020). Constructing and influencing perceived authenticity in science communication: experimenting with narrative (D. R. J. O’Neale, Ed.). PLOS ONE, 15, e0226711. https://doi.org/10.1371/journal.pone.0226711

Schäfer, M. S., Mahl, D., Füchslin, T., Metag, J., & Zeng, J. (2022). From hype cynics to extreme believers: typologizing the Swiss population’s COVID-19-related conspiracy beliefs, their corresponding information behavior and social media use. International Journal of Communication, 16, 2885–2910. https://ijoc.org/index.php/ijoc/article/view/18863/0

Schäfer, M. S. (2016). Mediated trust in science: concept, measurement and perspectives for the ‘science of science communication’. JCOM, 15, C02. https://doi.org/10.22323/2.15050302

Schäfer, M. S., Füchslin, T., Metag, J., Kristiansen, S., & Rauchfleisch, A. (2018). The different audiences of science communication: a segmentation analysis of the Swiss population’s perceptions of science and their information and media use patterns. Public Understanding of Science, 27, 836–856. https://doi.org/10.1177/0963662517752886

Schröder, J. T., Brück, J., & Guenther, L. (2023). Identifying trust cues: how trust between science and publics is mediated through content about science. 73rd Annual Conference of the International Communication Association (ICA).

Schröder, J. T., & Guenther, L. (2024). Mediating trust in content about science: comparing trust cues across different media. 74th Annual Conference of the International Communication Association (ICA).

Slovic, P. (1993). Perceived risk, trust and democracy. Risk Analysis, 13, 675–682. https://doi.org/10.1111/j.1539-6924.1993.tb01329.x

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25, 359–393. https://doi.org/10.1111/j.1468-0017.2010.01394.x

Taddicken, M., & Krämer, N. (2021). Public online engagement with science information: on the road to a theoretical framework and a future research agenda. JCOM, 20, A05. https://doi.org/10.22323/2.20030205

Takahashi, B., & Tandoc, E. C. (2015). Media sources, credibility, and perceptions of science: Learning about how people learn about science. Public Understanding of Science, 25, 674–690. https://doi.org/10.1177/0963662515574986

Verboord, M. (2023). Bundles of trust? Examining the relationships between media repertoires, institutional trust and social contexts. Communications, 49, 243–262. https://doi.org/10.1515/commun-2022-0013

Weingart, P. (2017). Wissenschaftskommunikation unter digitalen Bedingungen. Funktionen, Akteure und Probleme des Vertrauens. In P. Weingart, H. Wormer, A. Wenninger & R. F. Hüttl (Eds.), Perspektiven der Wissenschaftskommunikation im digitalen Zeitalter (pp. 31–59). Velbrück Wissenschaft.

Weingart, P., & Guenther, L. (2016). Science communication and the issue of trust. JCOM, 15, C01. https://doi.org/10.22323/2.15050301

Welzenbach-Vogel, I. C., Milde, J., Stengel, K., & Dern, M. (2021). Vertrauen in das Gelingen der Energiewende durch Medienberichterstattung? Eine Inhaltsanalyse von journalistischen Beiträgen auf deutschen Online-Nachrichtenportalen unter Berücksichtigung von vertrauensrelevanten Aussagen zu an der Energiewende beteiligten Akteuren. In J. Milde, I. C. Welzenbach-Vogel & M. Dern (Eds.), Intention und Rezeption von Wissenschaftskommunikation (pp. 65–86). Herbert von Halem Verlag.

Wintterlin, F., Hendriks, F., Mede, N. G., Bromme, R., Metag, J., & Schäfer, M. S. (2022). Predicting public trust in science: the role of basic orientations toward science, perceived trustworthiness of scientists and experiences with science. Frontiers in Communication, 6. https://doi.org/10.3389/fcomm.2021.822757

Wirz, D. S., Wettstein, M., Schulz, A., Müller, P., Schemer, C., Ernst, N., Esser, F., & Wirth, W. (2018). The effects of right-wing populist communication on emotions and cognitions toward immigrants. The International Journal of Press/Politics, 23, 496–516. https://doi.org/10.1177/1940161218788956

Wissenschaft im Dialog. (2023). Wissenschaftsbarometer 2023. https://wissenschaft-im-dialog.de/documents/47/WiD-Wissenschaftsbarometer2023_Broschuere_web.pdf

Notes

1. Although negative connections are often emphasized, the diversity of actors, topics, and contents in digital media environments also has positive implications.

2. We use parentheses here and in the following to indicate that we mean both digital and non-digital media.

3. Hence, science as a system (macro-level), scientific organizations (meso-level), and scientists (micro-level), as outlined before.

4. We refer to groups instead of segments to account for the fact that we used a segmentation approach with a limited number of very specific items (i.e., only related to dimensions of trust in science).

5. “Most important,” in this case, means that usage frequencies guided our selection of media sources: for each media source, we identified, compared, and selected media with a large user reach. This search was based on the questionnaire items.

6. The quota plans considered gender, age (18 and older), and federal state to ensure representativeness of the sample with regard to German census data. For t2, respondents from t1 were re-invited, along with renewed quota sampling; this nevertheless resulted in a slightly skewed sample with more older and well-educated respondents [see Reif et al., 2023].

7. Our additional trust in science variables are multilevel but not multidimensional, as the variables referring to the trust dimensions were used to identify the trust groups. It is noteworthy that the small decrease in public trust in science between t1 and t2 differed across the five groups, indicating a trend towards the middle of the scale [see Reif et al., 2024].

8. A sum index, normalized by the number of scale points (i.e., divided by 5). The specific wording of the question was “How much do you trust in…”.
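As a minimal illustration of this computation (the item scores below are hypothetical; the actual items and wording are documented in Reif et al. [2023]), the index can be sketched as:

```python
# Illustrative sketch of the sum index described in note 8 (hypothetical data).
# Respondents rate several trust objects on a 5-point scale; the ratings are
# summed and the sum is divided by the number of scale points (5).

def trust_index(item_scores):
    """Sum index, normalized by the number of scale points (divided by 5)."""
    return sum(item_scores) / 5

# A hypothetical respondent rating four trust objects on 1-5 scales:
print(trust_index([4, 5, 3, 4]))  # 16 / 5 = 3.2
```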

9. Theoretically, we assumed trust in science at t1 to be the strongest predictor of trust in science at t2. Hence, we tested whether further variables, such as the TCE variables, provide additional explanatory power.

About the authors

Lars Guenther is Professor of Communication Science at LMU Munich’s Department of Media and Communication.

E-mail: lars.guenther@ifkw.lmu.de

Justin T. Schröder is Research Associate at LMU Munich’s Department of Media and Communication and University of Hamburg.

E-mail: justin.schroeder@ifkw.lmu.de X: @JTSchr

Anne Reif is PostDoc at the University of Hamburg.

E-mail: anne.reif@uni-hamburg.de X: @reif_anne

Janise Brück is Research Assistant at LMU Munich’s Department of Media and Communication.

E-mail: janise.brueck@ifkw.lmu.de X: @janisebrueck

Monika Taddicken is Professor of Communication Science at Technische Universität Braunschweig.

E-mail: m.taddicken@tu-braunschweig.de X: @m_taddicken

Peter Weingart is Professor Emeritus at the University of Bielefeld.

E-mail: peter.weingart@uni-bielefeld.de

Evelyn Jonas is Research Assistant at Technische Universität Braunschweig.

E-mail: evelyn.jonas@tu-braunschweig.de X: @evelynjonas_237