1 Introduction

Although public trust in science remains relatively stable in most countries [Cologna et al., 2024], there is growing concern regarding the potential damage that certain hostile forms of public discourse about science could inflict [e.g. Egelhofer, 2023; Nölleke et al., 2023]. These include accusations of fraud and of conspiracies between scientists and political elites [Hameleers & van der Meer, 2021], but increasingly also worrisome types of harassment such as uncivil credibility attacks, insults, and threats of sexual and physical violence [Nogrady, 2021]. Naturally, harassment of scientists is not a new phenomenon. However, the digitization of communication, especially through the rise of social networks, has made it easier to send hate comments and has made these attacks more visible [Celuch et al., 2022]. In addition, the rise of populism and the anti-elitism associated with it have fueled mistrust in science among parts of the population [Mede & Schäfer, 2020; Zapp, 2022], with many populist politicians leveraging social media to attack experts, potentially normalizing harassment among their followers [Väliverronen & Saikkonen, 2021].

Research on the harassment of scientists is still in its infancy. However, the few existing studies suggest that it is “not a niche problem” [Nölleke et al., 2023, p. 4] and carries severe consequences for the psychological well-being of the affected academics [Global Witness, 2023; Gosse et al., 2021]. Furthermore, there is anecdotal evidence suggesting that harassment disproportionately affects women [e.g. Global Witness, 2023].

However, online harassment of scientists likely affects more than the targeted scholars. Given that much of this harassment occurs online, especially on social media [Celuch et al., 2022], a broad audience is exposed to it. Building on substantial evidence that uncivil and hostile online comments harm public perceptions of news media — known as the ‘nasty effect’ [Anderson et al., 2018] — we propose that harassment comments targeting scientists could similarly undermine public trust in the scientific community. Yet, to the best of our knowledge, no study to date has investigated the consequences of witnessing harassment against scientists for public perceptions of science.

To address this gap, we conducted a preregistered 2×2 between-subjects experiment with a representative sample of German citizens (N = 1,246), testing whether exposure to harassment comments (vs. no comments) targeting female or male scientists negatively affects citizens’ trust in scientists and the information provided by them. Moreover, we investigate whether the effects differ based on the gender of the attacked scientist and the observer. In addition, considering the lingering assumption that populist politics are a driver of hostile online discourse toward science [Zapp, 2022], we examine the role of science-related populist attitudes.

By bringing attention to the consequences of online harassment on public perceptions of science, this study contributes a crucial perspective to the growing literature on harassment of scientists, deepening our understanding of its wider societal ramifications.

2 Harassment against scientists on social media

Social media has been argued to facilitate the harassment of scientists. As scholars are increasingly expected to proactively engage with the public through social media, they are becoming public figures who are easy to contact [Celuch et al., 2022; Gosse et al., 2021; Nölleke et al., 2023]. Additionally, the visibility that harassment can gain in comment sections allows anyone to reach a wide audience, rendering these platforms more appealing to motivated harassers [Celuch et al., 2022]. In line with this, research suggests that social media platforms are the most frequently reported venue for harassment of scientists [Global Witness, 2023; Gosse et al., 2021]. In the following, we thus focus on online harassment occurring as user comments on social media.

There is no coherent definition of online harassment; rather, the term is used to describe a variety of verbal attacks, varying in severity and ranging from rude credibility attacks to insults targeting scientists’ personal characteristics, such as their physical appearance, and in more extreme cases to threats of physical or sexual violence and even death threats [Global Witness, 2023; Gosse et al., 2021; Nogrady, 2021]. These verbal attacks can be understood as forms of incivility, defined as an “unnecessarily disrespectful tone” [Coe et al., 2014, pp. 660–661] that violates social norms. Some authors distinguish between two forms of incivility. Person-level incivility violates norms of politeness and encompasses rude, vulgar, or disrespectful language [e.g. Kümpel & Unkel, 2023; Muddiman, 2017]. Verbal attacks targeting scientists’ credibility or personal characteristics fall under this category. Public-level incivility violates democratic norms and includes rhetoric that stereotypes and discriminates against (marginalized) groups, thereby jeopardizing democratic inclusion and participation [e.g. Kümpel & Unkel, 2023; Muddiman, 2017]. Verbal attacks that discriminate against scientists as members of marginalized groups or target scientists collectively, as well as threats of violence or death, fall under this type.

It is important to note that not every negative comment can be considered a form of harassment. Criticism and disagreement are inherent parts of the scientific pursuit and serve to improve the scientific process. However, uncivil attacks or even criminally relevant forms of digital violence are unrelated to constructive debate. Therefore, some authors argue that harassment can be distinguished from criticism by its aim to suppress and silence scientists, or to undermine their voice in public discourse [Branford et al., 2019; Väliverronen & Saikkonen, 2021]. We thus define online harassment as a range of uncivil verbal attacks that are not intended to initiate a critical debate but to discredit or silence scientists, acknowledging that intentionality cannot always be discerned externally. In this study, we focus on two commonly reported forms that may be characterized as person-level incivility: verbal attacks targeting scientists’ credibility and their physical attractiveness.

Crucially, there are indications that harassment can indeed silence scientists, as some tend to resort to self-censorship after being attacked online [Global Witness, 2023; Gosse et al., 2021; Nogrady, 2021]. Whether harassment also has the likely intended negative effects on public perceptions of scientists and their research has not yet been investigated.

2.1 Consequences for public trust in scientists

Given the extensive literature demonstrating how online incivility can undermine trust and credibility perceptions of sources and their messages [e.g. Prochazka et al., 2018; Weber et al., 2019] — also referred to as the ‘nasty effect’ [Anderson et al., 2018] — we propose that online harassment of scientists can similarly erode the public’s trust in scientists and the information they provide.

Trust in science and scientific claims is vital in functioning democracies and for the well-being of society [e.g. Weingart & Guenther, 2016], for instance, when it comes to collective action in combating diseases or addressing climate change. Trust, in general, can be defined as the willingness of a person “to be vulnerable to a trustee based on past experiences” [Fawzi et al., 2021, p. 155; see also Reif & Guenther, 2021]. In the context of scientific institutions and actors, trust entails the readiness to depend on their knowledge while accepting the risk of being misinformed. To assess the trustworthiness of a scientific source, individuals consider its expertise, integrity, and benevolence [Hendriks et al., 2015]. However, on social media, individuals have limited means to evaluate these characteristics. Generally, assessing a message’s credibility or its source’s trustworthiness is a cognitively demanding task, requiring knowledge of both content and context, which individuals often lack. Consequently, they frequently rely on noticeable and easily interpreted pieces of information — heuristic cues — which help them to quickly form judgments with minimal cognitive effort [Prochazka et al., 2018; Weber et al., 2019]. Especially on social media, where users encounter numerous messages from diverse (often unknown) sources, individuals heavily rely on such cues when evaluating the trustworthiness and credibility of those messages and their sources [Metzger et al., 2010; Sterrett et al., 2019]. When messages are accompanied by negative, uncivil credibility attacks, this might trigger the heuristic judgment that only somewhat “problematic” or untrustworthy messages or authors would provoke such uncivil and accusatory comments [Weber et al., 2019].

In the context of science communication, uncivil credibility attacks could signal to the audience that the targeted scientist lacks expertise, integrity, or benevolence. While research on the effects of user comments on public perceptions of science is limited, there is evidence that exposure to news stories containing incivility and credibility attacks against scientists decreases trust in both the targeted scientists [Chinn & Hart, 2022] and scientists in general [Hameleers & van der Meer, 2021]. Furthermore, as scientific claims are often very complex and require knowledge that most lay people do not possess [Bromme & Goldman, 2014], it seems particularly likely that individuals will rely on heuristic cues such as uncivil comments when forming perceptions of these claims.

Taken together, there is reason to expect that exposure to harassment comments, i.e., uncivil attacks on scientists’ credibility and physical appearance, has a negative effect on the perceived trustworthiness of the targeted scientists (H1), the acceptance of a scientific claim made by the attacked scientist (H2), and the perceived trustworthiness of scientists in general (H3).

In addition, the individual scientist’s trustworthiness may form a critical nexus for the adverse effects on claim acceptance and general trust in scientists. Studies in the context of persuasive communication have shown that source credibility is linked to message credibility, meaning that information presented by less credible sources is judged as less credible than information presented by credible sources [for an overview, see Pornpitakpan, 2004]. Consequently, when harassment comments undermine the perceived trustworthiness of a scientist, this is likely to affect how their scientific claims are received. Furthermore, as stated above, trust is formed based on past experiences [Fawzi et al., 2021]. If people have a negative experience with one scientist, it may thus impact their assessment of the trustworthiness of (other) scientists in the future. In other words, an individual scientist may be seen as an example of a generally untrustworthy community, resulting in a spill-over effect that erodes the trustworthiness of scientists in general. Thus, we propose that the effects specified in H2 and H3 are mediated by the perceived trustworthiness of the attacked scientist (H4).

2.2 The role of science-related populism

The effects of harassment of scientists are likely most pronounced among individuals who already harbor skepticism toward them. When people hold antagonistic attitudes toward experts and academics, they may be more inclined to believe that these actors deserve to be targeted with such hostility. One set of such antagonistic attitudes is described within the framework of science-related populism, a variant of populism characterized by a perceived antagonism between the ordinary people and the academic elite, which is accused of ignoring the people’s truth and conspiring with the political elite [Mede & Schäfer, 2020]. Individuals endorsing science-related populist views often perceive scientists as immoral, ideologically biased, and self-serving [Mede et al., 2020], and these attitudes are associated with lower levels of trust in science [Eberl et al., 2023].

Moreover, it appears plausible that those holding such attitudes are inclined to align with politicians who express antagonism towards scientific elites and to follow them on social media. Extant research has consistently demonstrated that populist politicians leverage social media platforms to propagate severe criticism of elite actors, for example by frequently attacking journalists and news media [Engesser et al., 2017]. A growing line of research highlights that populists also increasingly engage in hostile or even aggressive discourse targeting scientists on social media, possibly normalizing or even encouraging expressions of harassment among their followers [Väliverronen & Saikkonen, 2021].

In sum, people with strong science-related populist attitudes are already more distrustful of science and have likely come to view hostile attacks on scientists as normal. Thus, we expect that the negative effects of exposure to harassment comments against scientists outlined in H1-3 will be stronger for individuals with stronger science-related populist attitudes (H5).

2.3 Gender differences

Lastly, we are interested in whether gender differences exist in the perception of harassment of scientists. First, effects may differ regarding the gender of the attacked scientists. So far, systematic investigations into the prevalence of harassment among male and female scientists remain limited, and one of the few surveys on harassment against scientists finds no significant difference in the volume of violent threats received by male and female scientists [Nogrady, 2021]. However, anecdotal evidence suggests that women may experience harassment differently compared to their male counterparts. For example, research indicates that many female scientists perceive their gender as the explaining factor in being targeted, with harassment often taking gender-specific forms [Gosse et al., 2021; Global Witness, 2023]. Furthermore, studies report that women face a disproportionately higher number of threats of sexual violence than men [Global Witness, 2023]. In addition, research focusing on other publicly visible professions, such as journalism and politics, consistently indicates that women are subjected to different types of attacks, including more frequent sexist harassment, and are targeted more often overall [e.g. Lewis et al., 2020; Southern & Harmer, 2019]. This broader evidence underlines the nuanced nature of gender-based harassment, highlighting a pattern in which women, regardless of their profession, experience harassment more frequently and often endure more severe forms of it. As a result, worries are increasing that the frequency of such incidents is normalizing the harassment of women [e.g. Mong, 2019].

Second, there may also be differences in effects relating to the gender of the observers of harassment. Studies suggest that men and women differ in the empathy they feel toward victims. For example, Kim and Grabe [2022] find that women show more empathy and willingness to help individuals affected by discrimination. However, the level of empathy might also vary depending on whether one shares the gender of the victim, with women being more empathetic toward same-sex victims than toward other-sex victims, while men experience more empathy towards women than towards other men [Stuijfzand et al., 2016].

Overall, there are several indications that both the gender of the targeted scientists and the gender of the observers of harassment may have an influence, which is why we ask: Do the effects proposed in H1-3 differ a) with respect to the gender of the attacked scientist and b) with respect to the participants’ gender? (RQ1)

3 Methods

Our study was preregistered1 and approved by the University of Vienna institutional review board (IRB).

3.1 Country context

This study is situated in Germany, a country with relatively stable levels of trust in science [Wissenschaft im Dialog/Kantar, 2023], yet increasing incidents of harassment of scientists and science communicators [Schneider, 2023; Peter et al., 2023]. The growing number of attacks has even necessitated the establishment of “Scicomm-Support,” a platform designed to assist harassed experts [Wandt, 2023]. Additionally, Germany is home to a notably robust right-wing populist party, the Alternative for Germany (AfD), which has a history of discrediting several research fields [e.g. Krämer & Klingler, 2020].

3.2 Design and procedure

We conducted an online survey experiment with a 2 (harassment comments vs. no harassment comments) × 2 (female scientist vs. male scientist) between-subjects design. Participants were randomly assigned to one of the four groups. Upon providing informed consent, participants answered questions about their socio-demographics and science-related populist attitudes. Following this, participants were exposed to two social media posts authored by fictitious scientists. In the experimental conditions, these posts were accompanied by two harassment comments each. After each post, participants responded to questions measuring the trustworthiness of the scientist whose post they had just seen. The remainder of the survey included measures for dependent variables, attention and manipulation checks, and an extensive debrief, including contact points for victims of harassment.

3.3 Stimulus

All participants were exposed to two social media posts by two fictitious scientists. In two conditions, the scientists are male, indicated by German male names (Manuel Bieger, Dr. Tobias Freystetter). In the other two conditions, the scientists are female, indicated by German female names (Melanie Bieger, Dr. Tabea Freystetter). In each of the posts, the scientists shared a link to an interview they gave about their research, e.g., “Science explained — in this interview, I talk about my new study and what these results mean for society. Check it out! https://bit.ly/4120ds.” In the two experimental conditions, the posts are accompanied by two harassment comments from anonymous social media users. As stated above, the harassment comments consisted of attacks on the scientists’ credibility and physical appearance. For example, two comments read “I haven’t read anything that stupid in a long time! Which fake university did you graduate from?” (i.e., credibility attack) and “Something as ugly as you should not be allowed to have a social media account!” (i.e., attack on physical appearance).2,3

3.4 Pre-test

Before our study, we conducted an extensive pre-test of several variants of social media postings and harassment comments to ensure the realism and relevance of our stimulus material. We tested four different generic postings (i.e., no area of expertise was mentioned) that pointed either to an interview or to a new publication by the authors. In addition, each post was authored by a person with either a male or a female connotated name (leading to eight posts in total). For each post, we asked the pre-test participants (N = 141, convenience sample) to assess the perceived realism of the post (four items, e.g., “You often see posts like this on social media.”). For the main study, we chose the two postings that received the highest realism ratings, which were similar for both the male and female versions.

In addition, we pre-tested eight harassment comments related to the appearance or scientific aptitude of the post’s author and asked participants to assess how abusive (two items, “to me the comment was…”: “insulting,” “aggressive”) and threatening (“frightening,” “menacing”) each comment was. Again, we chose the four comments (two for each post) for the main study that scored highest on both dimensions while receiving similar ratings for the male and female versions.

3.5 Sample

A sample of German internet users (aged 16 and older; M = 49.74, SD = 17.43; 51.6% female, 47.7% male, 0.6% diverse, 0.1% no answer), representative in terms of age, gender, and education (low: 22.9%, medium: 50.6%, high: 26.6%), was recruited by the panel agency Bilendi/Respondi. We conducted a Monte Carlo power analysis for indirect effects following Schoemann et al. [2017] to determine the necessary sample size for our study. Our model, with mediation and correlations of 0.20 as the smallest effect size of interest, required a minimum of 540 participants at a power of 0.80 and α = 0.05. Considering that the analysis of interaction effects requires much larger sample sizes, we doubled the minimum number and added 10% oversampling, setting our target sample size at N = 1,200 participants.
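
For readers who want to follow the logic of this power analysis, the sketch below illustrates a simulation-based approach in Python. It is not the tool we used (Schoemann et al. [2017] provide their own application), and it simplifies the design to a simple mediation with standardized paths of .20 and no moderators, so it will not reproduce our exact figures; all names and parameter values are illustrative.

```python
# Illustrative only: simplified Monte Carlo power analysis for the indirect
# effect in a simple X -> M -> Y mediation, following the general logic of
# Schoemann et al. [2017]. The authors' actual model is more complex.
import numpy as np

def power_indirect(n, a=0.20, b=0.20, n_reps=1000, n_mc=5000, seed=42):
    """Share of replications in which the Monte Carlo CI for a*b excludes 0."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(scale=np.sqrt(1 - a**2), size=n)
        y = b * m + rng.normal(scale=np.sqrt(1 - b**2), size=n)

        # a-path: regress M on X
        X1 = np.column_stack([np.ones(n), x])
        b1, rss1 = np.linalg.lstsq(X1, m, rcond=None)[:2]
        cov1 = rss1[0] / (n - 2) * np.linalg.inv(X1.T @ X1)
        a_hat, se_a = b1[1], np.sqrt(cov1[1, 1])

        # b-path: regress Y on M, controlling for X
        X2 = np.column_stack([np.ones(n), m, x])
        b2, rss2 = np.linalg.lstsq(X2, y, rcond=None)[:2]
        cov2 = rss2[0] / (n - 3) * np.linalg.inv(X2.T @ X2)
        b_hat, se_b = b2[1], np.sqrt(cov2[1, 1])

        # Monte Carlo confidence interval for the indirect effect a*b
        ab = rng.normal(a_hat, se_a, n_mc) * rng.normal(b_hat, se_b, n_mc)
        lo, hi = np.percentile(ab, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)
    return hits / n_reps

print(power_indirect(n=540))
```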

Two attention checks were included: (1) an instructed-response item inserted in the item battery on science-related populist attitudes, asking respondents to “please select ‘5 - Agree completely’” [see, e.g. Kung et al., 2018], and (2) a multiple-choice question asking about the content of the social media posts. Participants who failed one of these attention checks were excluded from the final sample (n = 1,645). Furthermore, we excluded 58 participants who indicated that they were currently employed in a scientific occupation (e.g., working at a university or scientific institute), as these individuals are likely to be differently affected by harassment comments targeting scientists. Lastly, we excluded 17 speeders, i.e., participants who completed the survey in less than one-third of the median completion time. This resulted in a final sample of N = 1,246 for data analysis. However, for ethical reasons, we did not force responses in the survey [see Sischka et al., 2020, for a discussion] and therefore had several missing values, as indicated in Tables 1 and 2.
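
As an illustration of how these exclusion criteria translate into an analysis script, the following Python sketch applies them to a raw data file; the file name and all column names are hypothetical placeholders, not our actual variable names.

```python
# Hypothetical sketch of the exclusion steps described above; file and column
# names are placeholders.
import pandas as pd

df = pd.read_csv("survey_raw.csv")

# (1) instructed-response item: keep only respondents who selected "5"
df = df[df["attention_irt"] == 5]
# (2) multiple-choice check on the content of the social media posts
df = df[df["attention_content_correct"] == 1]
# (3) exclude respondents currently employed in a scientific occupation
df = df[df["works_in_science"] == 0]
# (4) exclude speeders: less than one third of the median completion time
df = df[df["duration_sec"] >= df["duration_sec"].median() / 3]

print(f"Final sample size: N = {len(df)}")
```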

Randomization checks revealed successful randomization of gender (female, male, diverse, no indication; χ2(9, 1246) = 10.923, p = .281, Cramer’s V = .05) and education (three groups: low, medium, high; χ2(6, 1246) = 2.809, p = .832, Cramer’s V = .03). A one-way ANOVA was conducted to assess the randomization of age. The test yielded a significant result (F(3, 1246) = 2.653, p = .047, η² = .006, CI [.00; .02]). However, subsequent Bonferroni post hoc comparisons did not reveal any statistically significant age differences between the respective groups, indicating that any age differences across the conditions are minor and unlikely to affect the results of the experiment substantially.
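
These randomization checks can be reproduced along the following lines (a sketch continuing the hypothetical data frame from above, where "condition" denotes the four experimental groups):

```python
# Sketch of the randomization checks: chi-square tests (with Cramer's V) for
# gender and education, and a one-way ANOVA for age across the four conditions.
import numpy as np
import pandas as pd
from scipy import stats

def cramers_v(crosstab: pd.DataFrame) -> float:
    chi2 = stats.chi2_contingency(crosstab)[0]
    n = crosstab.to_numpy().sum()
    k = min(crosstab.shape) - 1
    return np.sqrt(chi2 / (n * k))

for var in ["gender", "education"]:
    ct = pd.crosstab(df["condition"], df[var])
    chi2, p, dof, _ = stats.chi2_contingency(ct)
    print(f"{var}: chi2({dof}) = {chi2:.3f}, p = {p:.3f}, V = {cramers_v(ct):.2f}")

age_groups = [g["age"].dropna() for _, g in df.groupby("condition")]
f_val, p_val = stats.f_oneway(*age_groups)
print(f"age: F = {f_val:.3f}, p = {p_val:.3f}")
```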

3.6 Measures

If not stated otherwise, all items were measured on 5-point scales.

Trust in scientists (attacked scientists and scientists in general) is based on a selection of six items from the three-dimensional scale by Hendriks et al. [2015]. The two items with the highest factor loadings were chosen for each of the three dimensions, expertise, integrity, and benevolence (trust in attacked scientists: M = 3.52, SD = 0.83, Cronbach’s α = 0.95; trust in scientists in general: M = 3.70, SD = 0.72, Cronbach’s α = 0.88).
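
For illustration, the index construction and reliability estimate could be computed as follows (hypothetical item names; the same logic applies to the general-trust items):

```python
# Sketch: mean index and Cronbach's alpha for the six trust items
# (hypothetical column names).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix without missing values."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

trust_cols = [f"trust_attacked_{i}" for i in range(1, 7)]
df["trust_attacked"] = df[trust_cols].mean(axis=1)          # mean index
alpha = cronbach_alpha(df[trust_cols].dropna().to_numpy())
print(f"M = {df['trust_attacked'].mean():.2f}, "
      f"SD = {df['trust_attacked'].std():.2f}, alpha = {alpha:.2f}")
```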

Acceptance of a scientific claim is measured by asking respondents, “The scientist, Dr. Tobias / Tabea Freystetter, whose posting you just read, has made the following claim in a media interview: ‘Rising CO2 levels threaten human nutrition.’ To what extent do you agree with this statement?” (M = 3.31, SD = 1.24). This statement corresponds to the title of a publication by Myers et al. [2014]. We chose this statement because it is supported by broad scientific evidence. At the same time, we expected it not to be self-evident to the public and, therefore, potentially open to influence from the communicative cues accompanying it.

Science-related populist attitudes were measured with the four-dimensional scale by Mede et al. [2020]. Following Wuttke et al. [2020], we created a non-compensatory measure of science-related populist attitudes based on the Goertzian approach. This measure uses the minimum value of the four concept subdimensions (i.e., “Conceptions of the ordinary people,” “Conceptions of the academic elite,” “Demands for decision-making sovereignty,” and “Demands for truth-speaking sovereignty” [Mede et al., 2020, p. 15]; M = 2.14, SD = 0.85).
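
A minimal sketch of this non-compensatory scoring, assuming hypothetical item names and two items per subdimension, looks as follows:

```python
# Sketch: Goertzian (non-compensatory) SciPop score as the minimum of the
# four subdimension means; item names and items per dimension are hypothetical.
subdimensions = {
    "people":   ["scipop_peo_1", "scipop_peo_2"],
    "elite":    ["scipop_eli_1", "scipop_eli_2"],
    "decision": ["scipop_dec_1", "scipop_dec_2"],
    "truth":    ["scipop_tru_1", "scipop_tru_2"],
}
for name, cols in subdimensions.items():
    df[f"scipop_{name}"] = df[cols].mean(axis=1)

# a respondent scores high only if they score high on every subdimension
df["scipop"] = df[[f"scipop_{n}" for n in subdimensions]].min(axis=1)
```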

Gender was measured by asking participants to indicate with which gender they identify most; options: “female,” “male,” “diverse,” “don’t want to answer this question.” For our analyses, we created a binary gender dummy (0 = male, 1 = female).

3.7 Manipulation check

To test whether our manipulation worked, we asked participants to agree or disagree with statements about the social media content: specifically, whether there were harassment comments under the postings or not (manipulation check harassment), and whether the social media postings were from two female scientists, two male scientists, or one female and one male scientist (manipulation check gender). Chi-square tests indicated a successful manipulation of both the presence of harassment comments and the perceived gender of the scientists. In the harassment conditions, 90.3% of participants (n = 547) correctly identified the presence of harassment comments, compared to only 11.2% (n = 71) in the control conditions (χ2(1, 1244) = 778.563, p < .001, Cramer’s V = .79). The manipulation of the perceived gender of the scientists was less strong but also significant. Participants who were exposed to content from a female scientist were more likely to agree that the content was by a female scientist (67.6%, n = 416) than participants in the male scientist condition (7.2%, n = 45; χ2(1, 1241) = 485.593, p < .001, Cramer’s V = .63). Similarly, significantly more participants in the male scientist condition (54.6%, n = 342) affirmed seeing content by a male scientist, whereas only 7.8% (n = 48) in the female scientist condition mistakenly agreed with this statement (χ2(1, 1240) = 315.102, p < .001, Cramer’s V = .50).

4 Results

4.1 Effects on public trust in scientists

To test our hypotheses and research questions, we conducted a total of four moderated mediation analyses using the PROCESS macro for SPSS by Hayes [2013]. Figure 1 gives an overview of all hypotheses and research questions and the models in which they are tested. Table 1 shows the results of all four models. Furthermore, we use t-tests to compare the mean scores of all dependent variables across conditions.

We first expected that exposure to harassment comments has a negative effect on the perceived trustworthiness of the attacked scientists (H1). As can be seen from all models, exposure to harassment had a significant negative effect on trust in the attacked scientists (b = -0.27, SE = 0.05, p < .001). These results support H1: Individuals in the harassment conditions reported significantly lower levels of trust in the attacked scientists (M = 3.39, SE = 0.04) than individuals in the control conditions (M = 3.65, SE = 0.03; t(1240) = 5.700, p < .001, Cohen’s d = 0.32, 95% CI [.21; .44]).
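
The group comparison reported here corresponds to a standard independent-samples t-test with Cohen's d, sketched below under the assumption that a variable "harassment" codes the comment factor (1 = harassment comments, 0 = no comments); all names are hypothetical.

```python
# Sketch: independent-samples t-test and Cohen's d for trust in the attacked
# scientists, comparing harassment vs. control conditions (hypothetical names).
import numpy as np
from scipy import stats

treat = df.loc[df["harassment"] == 1, "trust_attacked"].dropna()
ctrl = df.loc[df["harassment"] == 0, "trust_attacked"].dropna()

t, p = stats.ttest_ind(ctrl, treat)   # positive t = control mean is higher
pooled_sd = np.sqrt(((len(treat) - 1) * treat.var(ddof=1) +
                     (len(ctrl) - 1) * ctrl.var(ddof=1)) /
                    (len(treat) + len(ctrl) - 2))
d = (ctrl.mean() - treat.mean()) / pooled_sd
print(f"t({len(treat) + len(ctrl) - 2}) = {t:.3f}, p = {p:.3f}, d = {d:.2f}")
```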

Furthermore, we expected harassment comments to have a negative effect on the acceptance of a claim made by the attacked scientists (H2) and that this effect would be mediated by the perceived trustworthiness of the scientists (H4). As can be seen from models 1 and 2 (Table 1), contrary to our assumption, we find a positive direct effect of harassment on the acceptance of the claim (b = 0.14, SE = 0.07, p = .041). This direct effect is countered by the expected negative indirect effect through the perceived trustworthiness of the attacked scientists, which significantly affects the acceptance of the claim (b = 0.27, SE = 0.04, p < .001). A separate mediation analysis (PROCESS, Model 4) showed that the indirect effect is significant (b = -0.09, BootSE = 0.02, 95% BootCI [-0.130, -0.057]), supporting H4. The negative indirect effect leads to an overall null effect of harassment on the acceptance of the scientists’ claim: participants in the experimental conditions (M = 3.35, SE = 0.05) did not report significantly different levels of claim acceptance than individuals in the control condition (M = 3.27, SE = 0.05; t(1240) = -1.125, p = .261, Cohen’s d = -.06, 95% CI [-.18, .05]), lending no support to H2.
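
To make the logic of the indirect effect transparent, the following Python sketch bootstraps a simple mediation (harassment, via trust in the attacked scientist, to claim acceptance), analogous in spirit to PROCESS Model 4; it omits the moderators included in the reported models and uses hypothetical column names, so its estimates are illustrative rather than a replication.

```python
# Sketch: bootstrapped indirect effect of harassment on claim acceptance via
# trust in the attacked scientist (simple mediation, hypothetical column names).
import numpy as np
import statsmodels.formula.api as smf

data = df[["harassment", "trust_attacked", "claim_acceptance"]].dropna()

def indirect_effect(d):
    a = smf.ols("trust_attacked ~ harassment", data=d).fit().params["harassment"]
    b = smf.ols("claim_acceptance ~ trust_attacked + harassment",
                data=d).fit().params["trust_attacked"]
    return a * b

rng = np.random.default_rng(1)
n_boot = 5000                      # reduce for a quicker run
boot = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(data), size=len(data))
    boot[i] = indirect_effect(data.iloc[idx].reset_index(drop=True))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(data):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```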

Similarly, we hypothesized that harassment comments negatively affect trust in scientists in general (H3) and that this effect would be mediated by the perceived trustworthiness of the attacked scientists (H4). Here, we find the same pattern as for claim acceptance (Model 2, Figure 1): while there is a direct positive effect of harassment on trust in scientists in general (b = 0.12, SE = 0.03, p < .001), we also find a negative indirect effect through the perceived trustworthiness of the attacked scientists, which significantly and positively predicts trust in scientists in general (b = 0.40, SE = 0.02, p < .001). A separate mediation analysis (PROCESS, Model 4) showed that the indirect effect is significant (b = -0.12, BootSE = 0.02, 95% BootCI [-0.164, -0.082]), supporting H4. Again, this negative indirect effect leads to an overall null effect: individuals in the harassment conditions (M = 3.71, SE = 0.03) did not report significantly different levels of general trust in scientists than those in the control condition (M = 3.69, SE = 0.03; t(1242) = -0.523, p = .601, Cohen’s d = -.03, 95% CI [-.14, .08]), lending no support to H3.

Figure 1: Overview of hypotheses, research questions, and models.

Table 1: Mediated moderation analyses (PROCESS) predicting citizens’ trust in attacked scientists, claim acceptance and trust in scientists in general.

4.2 Moderation through science-related populist attitudes

Next, we expected that the effects in H1-3 would be more pronounced for individuals with stronger science-related populist attitudes (H5). Models 1 and 2 (Table 1) show a significant interaction effect of exposure to harassment comments and science-related populist attitudes on trust in the attacked scientists (b = -0.13, SE = 0.05, p = .014). Conditional effects at different values of the moderator reveal that the impact of harassment comments on trust in the attacked scientists was negative and significant for all levels of science-related populist attitudes but became stronger among those scoring higher (-1SD: b = -0.17, SE = 0.06, p = .010; M: b = -0.27, SE = 0.05, p < .001; +1SD: b = -0.38, SE = 0.06, p < .001), which can also be seen in Figure 2. However, we do not find interaction effects of exposure to harassment comments and science-related populist attitudes for claim acceptance (b = -0.05, SE = 0.08, p = .573; Model 1) or trust in scientists in general (b = -0.02, SE = 0.04, p = .552; Model 2). Thus, we only find partial support for H5.
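
Probing such an interaction amounts to estimating the conditional effect of the harassment factor at selected moderator values; a minimal Python sketch of this simple-slopes logic (again with hypothetical column names and without the additional covariates of the full PROCESS models) is shown below.

```python
# Sketch: conditional (simple-slope) effects of harassment on trust in the
# attacked scientists at -1 SD, the mean, and +1 SD of science-related
# populist attitudes (hypothetical column names).
import numpy as np
import statsmodels.formula.api as smf

model = smf.ols("trust_attacked ~ harassment * scipop", data=df).fit()
cov = model.cov_params()
mean, sd = df["scipop"].mean(), df["scipop"].std()

for label, w in [("-1SD", mean - sd), ("M", mean), ("+1SD", mean + sd)]:
    # conditional effect b_harassment + b_interaction * w and its standard error
    eff = model.params["harassment"] + model.params["harassment:scipop"] * w
    se = np.sqrt(cov.loc["harassment", "harassment"]
                 + w**2 * cov.loc["harassment:scipop", "harassment:scipop"]
                 + 2 * w * cov.loc["harassment", "harassment:scipop"])
    print(f"{label}: b = {eff:.2f}, SE = {se:.2f}")
```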

Figure 2: The moderating effect of science-related populist attitudes on the relationship between exposure to harassment comments and trust in the attacked scientists.

4.3 Gender differences

Lastly, we asked whether the gender of the attacked scientist (RQ1a) or the participants’ gender (RQ1b) moderated any of the hypothesized effects. As can be seen in Model 3, there is no significant interaction effect of harassment comments and scientist gender on trust in the attacked scientists (b = 0.03, SE = 0.09, p = .745) or claim acceptance (b < -0.01, SE = 0.14, p = .982). However, we find a significant interaction effect for trust in scientists in general (Model 4, b = 0.15, SE = 0.07, p = .022): an additional moderation analysis reveals that the direct positive effect of harassment on general trust in scientists only holds for participants who saw stimuli featuring a female scientist (b = 0.20, BootSE = 0.05, 95% BootCI [0.110, 0.293]), but not for those who saw a male scientist (b = 0.05, BootSE = 0.05, 95% BootCI [-0.045, 0.137]), indicating an increase in trust in scientists after exposure to harassment towards female scientists.

Furthermore, we find a significant interaction effect of harassment comments and participants’ gender for trust in the attacked scientists (b = 0.24, SE = 0.09, p = .007; Model 3). Additional moderation analysis reveals that the negative effect of harassment on the perceived trustworthiness of the attacked scientists is significantly more pronounced for male participants (b = -0.40, BootSE = 0.07, 95% BootCI [-0.528, -0.273]) than for female participants (b = -0.16, BootSE = 0.06, 95% BootCI [-0.279, -0.032]). Similarly, there is a significant interaction effect on trust in scientists in general (b = 0.13, SE = 0.07, p = .046, Model 4). In this case, an additional moderation analysis shows that while the direct effect of harassment comments on trust in scientists in general is negative and marginally significant for male participants (b = -0.10, SE = 0.05, p = .058), it is positive and significant for female participants (b = 0.12, SE = 0.05, p = .017), indicating a small increase in general trust in scientists for women when faced with harassment comments. However, there is no significant interaction effect of harassment comments and participants’ gender on claim acceptance (b = 0.12, SE = 0.14, p = .371; Model 5).

In addition, we analyzed whether effects differ when the participant and the attacked scientist share the same gender. Table 2 shows that this is not the case: there is no significant interaction effect of harassment comments and shared gender on trust in the attacked scientist (b = 0.05, SE = 0.09, p = .571; Model 5), claim acceptance (b = -0.22, SE = 0.15, p = .103; Model 5), or trust in scientists in general (b = -0.03, SE = 0.07, p = .690; Model 6).

Table 2: Mediated moderation analyses (PROCESS) with shared gender as moderator.

5 Discussion and conclusion

Online harassment of scientists is on the rise, and concerns about its potentially detrimental consequences are increasing [Gosse et al., 2021; Global Witness, 2023; Nogrady, 2021]. A growing body of literature suggests that harassment may be intended not only to silence scientists but also to publicly discredit them [Branford et al., 2019; Celuch et al., 2022; Väliverronen & Saikkonen, 2021]. However, thus far, no research has considered the influence of harassment on public perceptions of scientists and the information provided by them.

Our results show that when citizens witness scientists being harassed on social media, it negatively affects how trustworthy they perceive the targeted scientists to be (H1), indicating that the so-called ‘nasty effect’ [Anderson et al., 2018] of uncivil user comments also extends to public perceptions of individual scientists. This corresponds to the worries of scientists who have experienced harassment and fear that such publicly visible discrediting will affect how they are perceived by others [Global Witness, 2023; Gosse et al., 2021]. However, it is important to note that the effect is quite small. Further research is needed to fully understand the implications of online harassment for public perceptions of scientists. Furthermore, we find that this negative effect is more pronounced among individuals with strong science-related populist attitudes (H5). This finding adds to the literature on science-related populism [Mede & Schäfer, 2020], highlighting that individuals with these attitudes are more susceptible to uncivil comments targeting scientists. This may be because uncivil credibility attacks might confirm existing beliefs about academic elites being “immoral” and striving to mislead the public [Hameleers & van der Meer, 2021; Mede & Schäfer, 2020], and therefore deserving of harassment. Moreover, anecdotal evidence suggests that much harassment of scientists comes from populist political actors and their followers [Väliverronen & Saikkonen, 2021]. Thus, exposure to such uncivil comments might more easily cue these attitudes.

However, contrary to our expectations, exposure to harassment comments did not affect general trust in scientists (H3), independent of one’s science-related populist attitudes (H5), the gender of attacked scientists, or participants (RQ1). A possible explanation may be that general trust in scientists is likely a relatively stable attitude [Funk & Kennedy, 2020], not easily affected by a single exposure to harassment comments. However, trust in individual actors is more malleable, with people often forming quick judgments about public figures based on very little information [Akin & Scheufele, 2017]. Consequently, citizens’ trust in individual scientists is more easily affected by online harassment, while their general attitudes towards science remain unaffected [see also Egelhofer, 2023].

Moreover, there is no effect on citizens’ acceptance of a scientific claim made by the attacked scientist (H2), regardless of the strength of science-related populist attitudes (H5). This finding does not align with research showing negative effects of uncivil user comments on quality or accuracy perceptions of online information [e.g. Prochazka et al., 2018]. A possible explanation for this null finding may lie in the methodological design, specifically how claim acceptance was measured. In our experimental setup, the scientific claim was not presented alongside the harassment within the stimulus material but was introduced later in the subsequent questionnaire. Hence, participants evaluated their belief in the scientific claim separately from and subsequent to exposure to the harassment comments. Another possibility may lie in the chosen claim itself. First, the connection between rising CO2 emissions and nutrition might have been unfamiliar to participants, leading them to perceive it as not credible, regardless of the presence of harassment. Furthermore, citizens likely hold already established, firm attitudes towards environmental and health issues, and such attitudes are not readily influenced by the fact that an advocate of this claim is subject to online harassment. Future research should explore how online harassment influences perceptions of scientific claims when both are presented concurrently and consider the impact of pre-existing issue attitudes to provide a more comprehensive understanding of these dynamics.

Lastly, our analysis of gender differences shows a complex picture (RQ1). First, for trust in the attacked scientists and the acceptance of a claim made by them, we find no differences in effects depending on whether a female or a male scientist is targeted with harassment. However, our results suggest that exposure to harassment against female scientists has a positive effect on general trust in scientists. As discussed, there is evidence that both men and women are more empathetic towards women than towards men [Stuijfzand et al., 2016]. Consequently, when exposed to harassment of female scientists, individuals might feel empathy, leading to a backfire effect that increases general trust in scientists as a protective response against perceived attacks on the profession. However, the individual scientist, being the focal point of negativity, might not benefit from this protective response, which is why there is no positive backfire effect on trust in the attacked scientists. Furthermore, we find significant interaction effects of the participants’ gender and exposure to harassment comments for trust in scientists (both attacked and general). While both genders experience a decrease in trust in the attacked scientists, this decrease is more pronounced for men than for women. Regarding general trust in scientists, the impact of harassment comments is negative for male participants but positive for female participants, suggesting that women might either rebound from or resist the negative impact of harassment on general trust more effectively than men. These findings highlight a complex dynamic, with men being more sensitive to the effects of harassment comments, while women resist these effects and even show an inverse reaction when it comes to trust in the scientific community as a whole. A possible reason for these gender differences may be that women react with more empathy towards discriminated individuals [Kim & Grabe, 2022], and this empathy counteracts the negative effect of uncivil comments. However, these are only tentative explanations for why these differences regarding both participants’ and scientists’ gender occur. Overall, we want to highlight that the manipulation of the scientists’ gender was very subtle in our design, with gender being signaled only through the names (e.g., Tabea vs. Tobias). Our manipulation check shows that, while significant, differences in perceived gender were not that strong. Future research is needed to replicate these effects and shed light on the underlying mechanisms. Such research could employ more pronounced manipulations of gender, such as through visual representations of scientists.

Beyond the already discussed limitations, we want to highlight a few more shortcomings of our design. Firstly, the stimuli were generic in nature; the scientists did not specify their field of research or the topics on which they were commenting. While this was an intentional choice to prevent pre-existing issue attitudes from influencing the effects, it renders our stimuli somewhat unrealistic. Given anecdotal evidence that researchers who speak out about topics prone to controversial debate (e.g., climate change, gender, migration) are especially likely to be subjected to harassment [Väliverronen & Saikkonen, 2021], future research should test the effects of harassment comments in relation to scientists’ communication about specific issues. Secondly, our design compared harassment comments against no comments at all. This decision was made to isolate the effect of online harassment. Nonetheless, future investigations should explore the nuanced effects of various types of negative comments, differing in levels of criticism and incivility, to better understand the spectrum of responses to negative online comments. Lastly, we conducted our study in only one country, i.e., Germany, where academic freedom is relatively high [Kinzelbach et al., 2024] and trust in science is about average compared to global levels [Cologna et al., 2024]. As harassment of scientists occurs around the globe [e.g. Nogrady, 2021], there is an urgent need to understand its effects on public perceptions of science in other countries, as effects might differ with variations in cultural attitudes, levels of academic freedom, and trust in science.

Despite these limitations, our study has important implications. It is a first indication that harassment of scientists may not only affect targeted individuals but also public perceptions of them, highlighting the broader societal consequences of harassment against scientists. Given that user comments are considered a critical part of democratic debate, read by many [Stroud et al., 2016], their influence should not be underestimated. In line with this, scholars worry that large-scale harassment could generate an atmosphere where scientific evidence is disregarded, undermining the societal value of scientific inquiry [Branford et al., 2019; Celuch et al., 2022; Väliverronen & Saikkonen, 2021]. Practically, our study thus underscores the need to sensitize the public to harassment, which, unlike valid criticism, is often a strategic attempt to undermine scientists rather than engage in critical debate. Hence, addressing online harassment of scientists is crucial for maintaining public trust in scientific inquiry and ensuring the integrity of democratic discourse.

Acknowledgments

This research was funded by a research grant from the research council of the University of Klagenfurt.

References

Akin, H., & Scheufele, D. A. (2017). Overview of the science of science communication. In K. H. Jamieson, D. M. Kahan & D. A. Scheufele (Eds.), The Oxford handbook of the science of science communication (pp. 25–33). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190497620.013.3

Anderson, A. A., Yeo, S. K., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2018). Toxic talk: how online incivility can undermine perceptions of media. International Journal of Public Opinion Research, 30, 156–168. https://doi.org/10.1093/ijpor/edw022

Branford, J., Grahle, A., Heilinger, J.-C., Kalde, D., Muth, M., Parisi, E. M., Villa, P.-I., & Wild, V. (2019). Cyberhate against academics. In Responsibility for refugee and migrant integration (pp. 205–226). De Gruyter. https://doi.org/10.1515/9783110628746-015

Bromme, R., & Goldman, S. R. (2014). The public’s bounded understanding of science. Educational Psychologist, 49, 59–69. https://doi.org/10.1080/00461520.2014.921572

Celuch, M., Savela, N., Oksa, R., Latikka, R., & Oksanen, A. (2022). Individual factors predicting reactions to online harassment among Finnish professionals. Computers in Human Behavior, 127, 107022. https://doi.org/10.1016/j.chb.2021.107022

Chinn, S., & Hart, P. S. (2022). Can’t you all just get along? Effects of scientific disagreement and incivility on attention to and trust in science. Science Communication, 44, 108–129. https://doi.org/10.1177/10755470211054446

Coe, K., Kenski, K., & Rains, S. A. (2014). Online and uncivil? Patterns and determinants of incivility in newspaper website comments. Journal of Communication, 64, 658–679. https://doi.org/10.1111/jcom.12104

Cologna, V., Mede, N. G., Berger, S., Besley, J. C., Brick, C., Joubert, M., Maibach, E., Mihelj, S., Oreskes, N., Schäfer, M. S., & van der Linden, S. (2024). Trust in scientists and their role in society across 68 countries [Preprint]. Open Science Framework. https://doi.org/10.31219/osf.io/6ay7s

Eberl, J.-M., Huber, R. A., Mede, N. G., & Greussing, E. (2023). Populist attitudes towards politics and science: how do they differ? Political Research Exchange, 5, 2159847. https://doi.org/10.1080/2474736x.2022.2159847

Egelhofer, J. L. (2023). How politicians’ attacks on science communication influence public perceptions of journalists and scientists. Media and Communication, 11, 361–373. https://doi.org/10.17645/mac.v11i1.6098

Engesser, S., Ernst, N., Esser, F., & Büchel, F. (2017). Populism and social media: how politicians spread a fragmented ideology. Information, Communication & Society, 20, 1109–1126. https://doi.org/10.1080/1369118x.2016.1207697

Fawzi, N., Steindl, N., Obermaier, M., Prochazka, F., Arlt, D., Blöbaum, B., Dohle, M., Engelke, K. M., Hanitzsch, T., Jackob, N., Jakobs, I., Klawier, T., Post, S., Reinemann, C., Schweiger, W., & Ziegele, M. (2021). Concepts, causes and consequences of trust in news media — a literature review and framework. Annals of the International Communication Association, 45, 154–174. https://doi.org/10.1080/23808985.2021.1960181

Funk, C., & Kennedy, B. (2020). Public confidence in scientists has remained stable for decades. Pew Research Center. https://www.pewresearch.org/short-reads/2020/08/27/public-confidence-in-scientists-has-remained-stable-for-decades/

Global Witness. (2023). Global hating. How online abuse of climate scientists harms climate action. https://www.globalwitness.org/en/campaigns/digital-threats/global-hating/

Gosse, C., Veletsianos, G., Hodson, J., Houlden, S., Dousay, T. A., Lowenthal, P. R., & Hall, N. (2021). The hidden costs of connectivity: nature and effects of scholars’ online harassment. Learning, Media and Technology, 46, 264–280. https://doi.org/10.1080/17439884.2021.1878218

Hameleers, M., & van der Meer, T. G. L. A. (2021). The scientists have betrayed us! The effects of anti-science communication on negative perceptions toward the scientific community. International Journal of Communication, 15, 4709–4733. https://ijoc.org/index.php/ijoc/article/view/17179

Hayes, A. F. (2013). Introduction to mediation, moderation and conditional process analysis: a regression-based approach. Guilford Press.

Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring laypeople’s trust in experts in a digital age: the Muenster Epistemic Trustworthiness Inventory (METI). PLOS ONE, 10, e0139309. https://doi.org/10.1371/journal.pone.0139309

Kim, M., & Grabe, M. E. (2022). The influence of news brand cues and story content on citizen perceptions of news bias. The International Journal of Press/Politics, 27, 76–95. https://doi.org/10.1177/1940161220963580

Kinzelbach, K., Lindberg, S. I., & Lott, L. (2024). Academic freedom index — 2024 update. https://doi.org/10.25593/OPEN-FAU-405

Krämer, B., & Klingler, M. (2020). A bad political climate for climate research and trouble for gender studies: right-wing populism as a challenge to science communication. In B. Krämer & C. Holtz-Bacha (Eds.), Perspectives on populism and the media (pp. 253–272). Nomos Verlagsgesellschaft mbH & Co. KG. https://doi.org/10.5771/9783845297392-253

Kümpel, A. S., & Unkel, J. (2023). Differential perceptions of and reactions to incivil and intolerant user comments (C. Shen, Ed.). Journal of Computer-Mediated Communication, 28. https://doi.org/10.1093/jcmc/zmad018

Kung, F. Y. H., Kwok, N., & Brown, D. J. (2018). Are attention check questions a threat to scale validity? Applied Psychology, 67, 264–283. https://doi.org/10.1111/apps.12108

Lewis, S. C., Zamith, R., & Coddington, M. (2020). Online harassment and its implications for the journalist-audience relationship. Digital Journalism, 8, 1047–1067. https://doi.org/10.1080/21670811.2020.1811743

Mede, N. G., & Schäfer, M. S. (2020). Science-related populism: conceptualizing populist demands toward science. Public Understanding of Science, 29, 473–491. https://doi.org/10.1177/0963662520924259

Mede, N. G., Schäfer, M. S., & Füchslin, T. (2020). The SciPop scale for measuring science-related populist attitudes in surveys: development, test and validation. International Journal of Public Opinion Research, 33, 273–293. https://doi.org/10.1093/ijpor/edaa026

Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60, 413–439. https://doi.org/10.1111/j.1460-2466.2010.01488.x

Mong, A. (2019). “It should not be accepted as normal”: Female journalists on harassment, intimidation in the Netherlands. Committee to Protect Journalists. https://cpj.org/2019/07/netherlands-female-journalist-harassed-attacked/

Muddiman, A. (2017). Personal and public levels of political incivility. International Journal of Communication, 11, 3182–3202. https://ijoc.org/index.php/ijoc/article/view/6137

Myers, S. S., Zanobetti, A., Kloog, I., Huybers, P., Leakey, A. D. B., Bloom, A. J., Carlisle, E., Dietterich, L. H., Fitzgerald, G., Hasegawa, T., Holbrook, N. M., Nelson, R. L., Ottman, M. J., Raboy, V., Sakai, H., Sartor, K. A., Schwartz, J., Seneweera, S., Tausz, M., & Usui, Y. (2014). Increasing CO2 threatens human nutrition. Nature, 510, 139–142. https://doi.org/10.1038/nature13179

Nogrady, B. (2021). ‘I hope you die’: how the COVID pandemic unleashed attacks on scientists. Nature, 598, 250–253. https://doi.org/10.1038/d41586-021-02741-x

Nölleke, D., Leonhardt, B. M., & Hanusch, F. (2023). “The chilling effect”: Medical scientists’ responses to audience feedback on their media appearances during the COVID-19 pandemic. Public Understanding of Science, 32, 546–560. https://doi.org/10.1177/09636625221146749

Peter, C., Frischlich, L., Obermaier, M., Schmid, U. K., Riesmeyer, C., & Menke, M. (2023). Die dunkle Seite der Wissenschaftskommunikation — Erfahrungen von Kommunikationswissenschaftler:innen mit inzivilen Angriffen [The dark side of science communication — experiences of communication scientists with uncivil attacks]. 68th Annual Conference of the German Society for Journalism and Communication Studies.

Pornpitakpan, C. (2004). The persuasiveness of source credibility: a critical review of five decades’ evidence. Journal of Applied Social Psychology, 34, 243–281. https://doi.org/10.1111/j.1559-1816.2004.tb02547.x

Prochazka, F., Weber, P., & Schweiger, W. (2018). Effects of civility and reasoning in user comments on perceived journalistic quality. Journalism Studies, 19, 62–78. https://doi.org/10.1080/1461670x.2016.1161497

Reif, A., & Guenther, L. (2021). How representative surveys measure public (dis)trust in science: a systematisation and analysis of survey items and open-ended questions. Journal of Trust Research, 11, 94–118. https://doi.org/10.1080/21515581.2022.2075373

Schneider, I. (2023). Anfeindungen von Klimaleugnern: Wettermoderatoren als neue Zielscheibe [Hostility from climate deniers: weather presenters as a new target]. tagesschau.de. https://www.tagesschau.de/inland/gesellschaft/angriffe-wettermoderatoren-100.html

Schoemann, A. M., Boulton, A. J., & Short, S. D. (2017). Determining power and sample size for simple and complex mediation models. Social Psychological and Personality Science, 8, 379–386. https://doi.org/10.1177/1948550617715068

Sischka, P. E., Décieux, J. P., Mergener, A., Neufang, K. M., & Schmidt, A. F. (2020). The impact of forced answering and reactance on answering behavior in online surveys. Social Science Computer Review, 40, 405–425. https://doi.org/10.1177/0894439320907067

Southern, R., & Harmer, E. (2019). Twitter, incivility and “everyday” gendered othering: an analysis of tweets sent to U.K. members of parliament. Social Science Computer Review, 39, 259–275. https://doi.org/10.1177/0894439319865519

Sterrett, D., Malato, D., Benz, J., Kantor, L., Tompson, T., Rosenstiel, T., Sonderman, J., & Loker, K. (2019). Who shared it? Deciding what news to trust on social media. Digital Journalism, 7, 783–801. https://doi.org/10.1080/21670811.2019.1623702

Stroud, N. J., Duyn, E. V., & Peacock, C. (2016). News commenters and news comment readers (Engaging News Project). https://mediaengagement.org/research/survey-of-commenters-and-comment-readers/

Stuijfzand, S., De Wied, M., Kempes, M., Van de Graaff, J., Branje, S., & Meeus, W. (2016). Gender differences in empathic sadness towards persons of the same- versus other-sex during adolescence. Sex Roles, 75, 434–446. https://doi.org/10.1007/s11199-016-0649-3

Väliverronen, E., & Saikkonen, S. (2021). Science communicators intimidated: researchers’ freedom of expression and the rise of authoritarian populism. JCOM, 20, A08. https://doi.org/10.22323/2.20040208

Wandt, J. (2023). Scicomm-Support: Neues Unterstützungsangebot bei Angriffen und Konflikten in der Wissenschaftskommunikation (Wissenschaftsfreiheit unter Druck? Das muss auch die KoWi kümmern) [Aviso. informationsdienst der deutschen gesellschaft für publizistik- und kommunikationswissenschaft]. https://www.dgpuk.de/de/publikationen/debatten/wissenschaftsfreiheit-unter-druck/scicomm-support-neues-unter

Weber, P., Prochazka, F., & Schweiger, W. (2019). Why user comments affect the perceived quality of journalistic content: the role of judgment processes. Journal of Media Psychology, 31, 24–34. https://doi.org/10.1027/1864-1105/a000217

Weingart, P., & Guenther, L. (2016). Science communication and the issue of trust. JCOM, 15, C01. https://doi.org/10.22323/2.15050301

Wissenschaft im Dialog/Kantar. (2023). Wissenschaftsbarometer 2023. https://wissenschaft-im-dialog.de/documents/47/WiD-Wissenschaftsbarometer2023_Broschuere_web.pdf

Wuttke, A., Schimpf, C., & Schoen, H. (2020). When the whole is greater than the sum of its parts: on the conceptualization and measurement of populist attitudes and other multidimensional constructs. American Political Science Review, 114, 356–374. https://doi.org/10.1017/s0003055419000807

Zapp, M. (2022). The legitimacy of science and the populist backlash: cross-national and longitudinal trends and determinants of attitudes toward science. Public Understanding of Science, 31, 885–902. https://doi.org/10.1177/09636625221093897

Notes

1. We deviate from the pre-registration in two ways. First, we switched the numbering of two hypotheses (H4 and H5) for better reading flow. Second, we pre-registered an additional hypothesis (H6), which we do not report here due to space limitations. The results of this analysis are available on the related project page on OSF: https://osf.io/uhvmk/?view_only=5dc0faa7155c4595bb2911136e49f9a1.

2. Note that “something” instead of “someone” is a deliberate formulation, expressing a dehumanization of the scientist.

3. The used stimulus material and an English translation can be found here: https://osf.io/dv7n6/?view_only=9cd4cff7dc874951b09183ef74396a7f.

About the authors

Jana Laura Egelhofer (Ph.D., 2021, University of Vienna) is a postdoctoral researcher at the Department of Media and Communication, LMU Munich. Her research focuses on science communication, political communication, and disinformation studies.

E-mail: jana.egelhofer@ifkw.lmu.de X: @JL_Egelhofer

Christina Seeger (Ph.D., 2014, LMU Munich) is a full professor of media and communication at the Department of Media and Communications at the University of Klagenfurt. She studied communication science, political science, and psychology at the LMU Munich. Her research focuses on media usage and effects, digital and political communication, and persuasion.

E-mail: christina.seeger@aau.at X: @grissib

Alice Binder (Ph.D., 2020, University of Vienna) is a Senior Scientist in the Department of Communication at the University of Vienna. Her research interests include persuasive communication, health communication, food placement effects on children, and effects of (political) targeted advertising.

E-mail: alice.binder@univie.ac.at X: @_AliceBinder