1 Introduction
Social media platforms have become dynamic spaces for scientific discourse, providing unfiltered access to scientific information and public opinion. During the COVID-19 pandemic, these platforms, particularly Twitter (now X), played a pivotal role in shaping public understanding of the crisis. Driven by the strong public demand for reliable information, experts emerged as visible public figures. In Germany, Christian Drosten became a particularly prominent example, using social media to offer direct, evidence-based updates to the public [Joubert et al., 2023; Szczuka et al., 2024]. This approach garnered more engagement than official public health institutions’ communications via social media [Drescher et al., 2021], enhancing public understanding and establishing such experts as visible sources of reliable information online. However, this increased visibility came with a downside: scientists involved in public debates often face negative feedback or even harassment [e.g., Nölleke et al., 2023; Royan et al., 2023]. Beyond the potentially harmful consequences for the scientists themselves, negative comments are known to change the credibility and influence of the original message as well as trust in the communicators [Winter et al., 2015; Ross Arguedas et al., 2024].
While scientists cannot control the audience’s reactions to their work, they can control how they frame their messages — for instance, by presenting curated evidence to support a claim. Although previous research has examined the differential effects of evidence types in traditional communication outlets, such as newspapers [Hoeken, 2001; Wojcieszak & Kim, 2016], there is a lack of studies systematically investigating how the type of evidence used by scientists impacts audience reactions in social media environments, which are characterised by immediacy, interactivity, and emotionality. Hence, this study addresses this gap and examines the differential effect of scientific evidence [an approximation of objectivity based on replicable data and analyses; Dahlstrom, 2014] and anecdotal evidence [relying on individual experiences; Moore & Stilgoe, 2009] in online science communication. Unlike laypersons, politicians, or journalists, scientists are uniquely qualified to provide specifically scientific evidence. However, when scientists adopt anecdotal evidence — typically associated with personal experience rather than empirical rigour — they may breach the audience’s expectations of providing accurate and scientific information [Maier et al., 2016]. Drawing on Expectancy Violation Theory [EVT; Burgoon, 1993], we argue that such a breach may shift the audience’s attention from the message to the communicator, prompting evaluations of their personal characteristics. In this context, science communicators may either benefit from the unexpected use of vivid anecdotal evidence [see: exemplification theory; Zillmann, 1999, 2006] or undermine their trustworthiness by arguing “unscientifically”, since they are violating norms and expectations [Metzger et al., 2010; Metzger & Flanagin, 2013].
Beyond the message’s content, prior research has shown that audience reactions, particularly in the form of comments and replies, critically shape how online messages are interpreted [e.g., Lee & Jang, 2010; Winter et al., 2015]. While this effect has been studied in opinion-based contexts, its implications for science communication remain unclear. In particular, it is still underexplored how different types of negative comments influence the perceived trustworthiness of scientists, which depends on attributions of expertise, integrity, and benevolence [Hendriks et al., 2015].
While factual criticism reduces scientists’ perceived trustworthiness [e.g., Gierth & Bromme, 2020], social media debates are often characterised by intense emotion rather than factual objectivity [Nemes & Kiss, 2021; Oyebode et al., 2021]. Furthermore, negative emotionalised content spreads faster than content expressing positive emotions [Fan et al., 2014; Stieglitz & Dang-Xuan, 2013], and posts displaying anger in particular attract more visual attention [Kohout et al., 2023]. These dynamics raise important questions about how negative-factual versus negative-emotional audience responses affect recipients’ evaluations of scientists communicating directly with the public online. By analysing how a scientist’s trustworthiness and the credibility of their messages are affected by a) these comments and b) the use of different evidence types, we aim to provide new perspectives on public engagement with science.
Although the study uses a simplified experimental setup, it reflects common dynamics on social media, where users must judge experts heuristically, assessing their trustworthiness quickly on the basis of minimal information [Kahneman & Frederick, 2005]. To explain how audiences evaluate such brief encounters, we draw on lay epistemic theory and the unimodel [Kruglanski, 1990; Kruglanski & Thompson, 1999], which suggest that people rely on all available cues, such as message content and social feedback, when forming judgments. Within this framework, both the expert’s evidence and the audience’s comments serve as cues that can influence perceived trustworthiness. Hence, this study identifies boundary conditions under which scientists’ messages are most effective in shaping audiences’ perceptions and when they risk being undermined by audience reactions. We specifically focus on epistemic trustworthiness, that is, the extent to which a communicator is perceived as a reliable source of information and knowledge, which hinges on their perceived expertise, integrity, and benevolence [Hendriks et al., 2015]. We assume that these three dimensions are equally susceptible to the influence of both the type of evidence used and the nature of audience responses.
Considering potential future crises and the benefits of direct contact with scientists on social media [Szczuka et al., 2024], this research extends the existing literature on trust in science, offering empirically grounded insights into how scientists’ trustworthiness is negotiated in emotionally charged digital environments and how communicators can navigate these spaces effectively.
2 Literature review
Understanding how recipients form impressions and evaluate new information is crucial to understanding scientific experts’ public perception. This study draws on findings from adjacent fields such as social psychology and media research to better understand how scientific communication functions in digital environments, thereby integrating relevant theoretical insights beyond disciplinary boundaries. We use lay epistemic theory [LET; Kruglanski, 1990; Kruglanski & Thompson, 1999], which originates in cognitive psychology and describes how individuals evaluate information in a social context. It is particularly relevant to science communication, as it demonstrates how personal beliefs and cognitive biases can influence the perception of scientific information. According to the LET, people form judgments depending on their beliefs, individual abilities, and subsequent evaluation of the relevant information; all available information and cues can equally affect this evaluation. These cues include not only message-related content (e.g., the quality and type of evidence presented), contextual feedback such as social validation signals from comments or likes, and who communicates [Kruglanski et al., 2006, 2010; Metzger & Flanagin, 2013; Ross Arguedas et al., 2024], but also other embedded trust cues such as expertise, clarity, or perceived integrity [Schröder et al., 2025]. Importantly, a piece of information is not evaluated in isolation but within its broader contextual frame, and the interaction between these cues plays a central role in shaping judgment [Sundar, 2008; Sundar et al., 2019]. This is especially relevant in digital environments, where scientists’ messages are rarely consumed on their own. Content and context shape the evaluation process, with individual factors, such as prior personal attitudes, affecting which cues are given more weight. Together, these cues influence judgments about the communicator’s perceived trustworthiness and the credibility of their messages.
2.1 Communication style: anecdotal vs. scientific evidence
In this article, we define anecdotal evidence (also known as narrative evidence) as any evidence that does not follow a strict methodology [Allen & Preiss, 1997; Dahlstrom, 2014] and relies on subjectivity and individual experiences to support a claim [Moore & Stilgoe, 2009]. While anecdotal evidence is used to draw inferences from exemplars — often discussed in persuasive communication research — scientific evidence must meet higher standards [Dahlstrom, 2014], as it represents an approximation of objectivity based on replicable and representative data and statistics [Allen & Preiss, 1997; Hoeken, 2001]. Importantly, this conceptualisation of anecdotal evidence excludes examples that illustrate scientific findings. Rather, we refer specifically to subjectively framed experiences that are not subject to any type of scientific rigour.
Previous research suggests that anecdotal evidence, which relies on narrations, may be easier to comprehend than hypothetical instances due to the vividness of concrete examples [see exemplification theory; Zillmann, 1999, 2006]. Although the difference in the persuasive effect of scientific versus anecdotal evidence appears to be minor, anecdotal evidence often influences behavioural intentions more, especially in health-related decisions [see meta-analyses: Freling et al., 2020; Xu, 2023; Zebregs et al., 2015]. For instance, recipients are less likely to verify medical (mis)information that contains anecdotal evidence as opposed to statistical evidence due to reduced cognitive effort and increased fluency [Dudley et al., 2023; Marsh & Yang, 2018; Zhao & Tsang, 2024].
However, it is crucial to consider the role of the communicator in this process. Research by Knobloch-Westerwick et al. [2015] showed no significant difference in persuasive effectiveness between the two types of evidence when presented in a neutral online context, such as a magazine. However, in online science communication, trust cues such as perceived expertise and clarity can strongly influence how messages are received [Schröder et al., 2025]. Thus, we argue that the effectiveness of evidence may differ when the communicator is an expert, especially a scientist. Laypeople may perceive anecdotal evidence presented by scientists as more credible than scientific evidence: the vividness of anecdotes, combined with the scientists’ high epistemic authority, creates a synergy effect. We derive this effect from the previously discussed premise that recipients potentially use multiple cues, such as the content of the message and the communicator’s epistemic authority, when interpreting information [Kruglanski et al., 2006, 2010]. Specifically, when scientists share personal experiences, they may violate audience expectations regarding expert communication, creating an expectancy violation [Burgoon, 1993] that draws attention to the communicator’s characteristics, particularly their epistemic authority. Reporting personal experiences or narrations may signal to the audience that the scientist is not only knowledgeable but also authentic [Saffran et al., 2020], which is closely linked to integrity and benevolence as determinants of their epistemic trustworthiness [Hendriks et al., 2015]. Thus, by combining the fluency and vividness of anecdotal evidence [e.g., Marsh & Yang, 2018] with their epistemic authority, scientists should benefit from the strengths of both, thereby improving their overall trustworthiness and message credibility. Therefore, we conclude:
- Hypothesis 1a: Posts containing anecdotal evidence will be rated as more credible than posts containing scientific evidence.
- Hypothesis 1b: Scientists who use anecdotal evidence in their posts will be rated as more trustworthy than scientists who use scientific evidence in their posts.
2.2 The public’s reactions: negative and emotionalised comments
When science communicators directly address the public, they automatically expose themselves to the audience’s reactions and comments. These comments, especially when other recipients read them, can influence the evaluation of the communicators and their messages. Specifically, negative comments can have detrimental effects: they can reduce the content’s credibility [Naab et al., 2020; Waddell, 2020], increase perceived bias [Anderson et al., 2018], and make messages less convincing — whereas positive comments appear to exert no such effects [Winter et al., 2015]. The unifying mechanism is the bias of attention and processing capacity towards a negative rather than a positive stimulus [Baumeister et al., 2001].
However, social media comments are often more than just negative or opposing. As emotions are known to impact online discussions [e.g., Fan et al., 2014; Stieglitz & Dang-Xuan, 2013], we differentiate between two types of negative comments: factual and emotional expressions, with the latter going beyond mere rejection of the content in the sense of negative valence. To capture this emotional negativity, we specifically focus on anger, as it can increase the depth of processing [Nabi, 2002] and, unlike other negative emotions such as fear, draws more visual attention to comments, an effect that is even more pronounced for recipients who can allocate more cognitive resources [Kohout et al., 2023].
Research suggests that emotional expressions can strongly influence recipients’ impressions through emotional contagion [Hatfield et al., 1993; Kramer et al., 2014], whereby individuals unconsciously align their emotional states with others’ and integrate these emotions into their evaluation of content [Hasford et al., 2015]. Consequently, others’ emotions can play a significant role in the evaluation of content, a mechanism grounded in social psychology but highly relevant to current communication environments, especially when processing is less thorough [see EASI model: Van Kleef, 2009; Van Kleef et al., 2010], and can thus influence recipients’ impressions and attitudes [Van Kleef et al., 2015]. We expect that recipients adapt to the negative emotion displayed in a comment (in this case, anger) and use it as a heuristic signal when evaluating the content [see Kahneman & Frederick, 2005]. Given the attention-grabbing nature of emotional negativity, we expect negative-emotional comments to have a stronger impact on recipients’ evaluations than purely negative but factual comments. Thus, we propose the following hypothesis:
- Hypothesis 2a: The negative effect of negative-factual comments on message credibility will be greater than the effect of neutral comments but smaller than the effect of negative-emotional comments.
Following the LET, an expert’s epistemic authority and their associated trustworthiness can substitute for stand-alone evidence [Kruglanski et al., 2006, 2010]. Given that all types of information are equally assessed when evaluating a message on social media [Kruglanski & Thompson, 1999], we argue that negative comments will undermine the trustworthiness of communicating experts. Previous research indicates that user comments factually questioning a scientist’s motivations or the complexity of their research decrease trust in the expert by challenging the researcher’s core values [Gierth & Bromme, 2020]. However, it is unclear whether negative comments expressing emotions like anger lead to effects comparable to those of a factually formulated rejection. Drawing on the previously described effects of negative comments, we assume that both factual and emotional comments can reduce the trustworthiness of the communicating expert. Accordingly, we assume that negative-factual comments, without emotional expression, generally harm the trustworthiness of the communicators, and that negative-emotional comments reflecting anger amplify the effects of negativity through emotional contagion and the attention directed towards the comment. Therefore, we derive the following hypothesis:
- Hypothesis 2b: The negative effect of negative-factual comments on the communicators’ trustworthiness will be greater than the effect of neutral comments but smaller than the effect of negative-emotional comments.
2.3 Interaction of evidence types and (negative) comments
Given that recipients rely on multiple cues when evaluating whom and whose messages to trust [Kruglanski et al., 2006, 2010], we now focus on how these cues interact, specifically in terms of evidence type and comments. Regarding the main effect of evidence type, we expect that science communicators who use anecdotal evidence will benefit from creating a positive expectancy violation [see EVT: Burgoon, 1993, 2015]. While continuing to benefit from their epistemic authority, they can also benefit from the vividness of anecdotes by tailoring their message to the audience, thereby emphasising their benevolence. However, when negative comments accompany these messages, recipients are more likely to interpret the expectancy violation negatively, as the comments provide additional information that influences their evaluation of the message [e.g., Winter et al., 2015]. In particular, the effect of negative comments is expected to be stronger when they are reinforced by emotional expression, to which recipients adjust their own evaluations [Van Kleef, 2009; Van Kleef et al., 2015]. In this light, recipients who view anger, just like those who feel anger, should engage in a more thorough evaluation and monitor the source more closely [Nabi, 2002]. The negative effects are expected to increase with the intensity of negativity expressed in the comments. In this case, scientific evidence is likely to be more beneficial, as recipients are more inclined to trust science communicators as authoritative figures who communicate in line with this expectation, leading them to perceive the messages as more credible. Following the same pattern, recipients should also perceive communicators using scientific evidence as more trustworthy as comments become more negative and emotional. Thus, we derive the subsequent hypotheses for testing:
- Hypothesis 3a: Posts containing anecdotal evidence will be rated as less credible if the comments are negative-factual compared to neutral comments, and as more credible compared to negative-emotional comments.
- Hypothesis 3b: Posts containing scientific evidence will be rated as more credible if the comments are negative-factual compared to neutral comments, and as less credible compared to negative-emotional comments.
- Hypothesis 4a: Scientists whose posts contain anecdotal evidence will be rated as less trustworthy if the comments are negative-factual compared to neutral comments, and as more trustworthy compared to negative-emotional comments.
- Hypothesis 4b: Scientists whose posts contain scientific evidence will be rated as more trustworthy if the comments are negative-factual compared to neutral comments, and as less trustworthy compared to negative-emotional comments.
2.4 Need for cognition and prior attitudes
To investigate their potential effects, we included two covariates: recipients’ prior attitudes and need for cognition (NFC). A preference for attitude-consistent information is also evident in science communication [Gierth & Bromme, 2020]. Therefore, we included five items to assess participants’ prior attitudes towards COVID-19 and the post-COVID-19 condition at the start of the study. Recipients’ cognitive style is likewise expected to influence how they process evidence types and comments. Therefore, we included NFC, which refers to an individual’s inclination towards deliberate cognitive processing and reflection [Cacioppo et al., 1996], in our analyses. For instance, individuals with low NFC are more likely to be influenced by anecdotal evidence that aligns with their prior attitudes [Hinnant et al., 2016], whereas individuals high in NFC are more easily persuaded by relevant arguments than by subjective comments [Winter & Krämer, 2016].
3 Methods
An online experiment was conducted to investigate how different combinations of evidence types and user comments influence the perception of science communication on social media. The experiment employed a 2 (evidence type: scientific vs. anecdotal) × 3 (comment type: neutral, negative-factual, negative-emotional) between-subjects design. Participants were shown social media profiles of fictional experts, each including one post and one user comment. Materials, information about the pretest conducted to assess the perceived emotional valence and levels of anger in the comments, additional details on the measurements, and data can be accessed via the OSF: https://osf.io/d8kea/?view_only=4a6d1dd842c547bb8c9ace1628510cb3
3.1 Sample
A total of 360 German participants were recruited for this study in the summer of 2022. To ensure data quality, four manipulation checks and one attention check were carried out; the manipulation checks assessed whether participants had read the instructions and materials carefully. In total, n = 59 participants failed at least one check, resulting in a final sample of N = 301. Participants’ ages ranged from 18 to 80 years, M = 30.0, SD = 10.5. In the sample, 54% of participants (n = 162) identified as female, and 1% (n = 3) identified as nonbinary or preferred not to provide information on their gender. Approximately 7% of participants (n = 21) had completed secondary school, while 35% (n = 106) had completed higher education with a university entrance qualification. 33% of participants (n = 99) reported having a bachelor’s degree, and 21% (n = 63) a master’s degree. The remaining 2% (n = 6) of participants held a Ph.D., and 2% (n = 6) did not indicate their highest level of education.
3.2 Procedure
Demographic information was collected from participants after they were informed about the study’s purpose and how their data would be used. Participants were subsequently asked to indicate their general attitudes towards COVID-19. Next, they were randomly assigned to one of six experimental groups, where each participant viewed three X/Twitter profiles of experts, each displaying a post on a different aspect of the post-COVID-19 condition. Each post was accompanied by one user comment, forming the combined stimulus that participants evaluated. All comments were from the same experimental condition to which the participant was assigned. After reviewing each profile, participants were asked to rate the posts’ credibility and the experts’ trustworthiness. They were also asked to identify the post’s topic to confirm that they had read the content. Finally, the study assessed the participants’ NFC and their attitudes towards the post-COVID-19 condition [i.e., ‘Long COVID’, World Health Organization, 2022].
3.3 Materials and measures
The stimulus material consisted of three Twitter/X profiles, each featuring posts from fictional experts discussing a different aspect of the post-COVID-19 condition: duration, symptoms, and diagnosis. The profiles displayed the experts’ profile pictures with a landscape or building as the background banner. The biographies briefly mentioned their expertise (e.g., ‘Virologist; Scientist at #ChariteVirology’). While the expert profiles included realistic indicators of credibility (e.g., institutional affiliation) to establish the expert status of the communicators, these cues were kept consistent across all conditions to control for confounding effects. Thus, only the posts’ content and comment tone varied systematically between conditions (see Figure 1 for an example). The experimentally manipulated posts, which presented either anecdotal or scientific evidence, were displayed below the biography. In line with the previously discussed research on the different evidence types, posts containing anecdotal evidence discussed a patient, a friend, or a family member of the expert. Thus, personal reports without any scientific reference were presented. In contrast, posts containing scientific evidence all referenced scientific study results or analyses.
The neutral comments did not address the content of the tweets but rather the discourse surrounding COVID-19 in general (e.g., “COVID-19 must remain a topic of discussion”). The negative-factual comments disagreed with the posts’ content (e.g., “I don’t believe that COVID-19 has long-term consequences. A large proportion of people probably just tend to misinterpret common everyday symptoms.”). Negative-emotional comments not only rejected the content but also included emotional language in the form of anger (e.g., “What a mess! You really cannot take all this seriously anymore!! NOBODY can tell me that COVID-19 really lasts that long! In the end, every little cold is over-interpreted!”). The authors of the comments were given fictional names and profile pictures. All stimulus variations (original German and English translations) and information on the pretest of the materials are available on OSF.
3.3.1 Messages’ credibility
We adapted a scale developed by Appelman and Sundar [2016] to measure message credibility. Participants rated each post on how accurate, authentic, credible, correct, and reliable it was on a 7-point scale (1 = “does not describe the content well”; 7 = “describes the content well”). Overall, the content was rated as relatively credible, M = 5.35, SD = 1.12, Cronbach’s α = .96, McDonald’s ω = .92.
3.3.2 Senders’ trustworthiness
Perceived trustworthiness was measured using the METI scale, which asked participants to rate the senders on 14 semantic differential word pairs on a scale from 1 to 7 [Hendriks et al., 2015]. The scale includes three dimensions: expertise (e.g., “professional–unprofessional”), M = 6.03, SD = 0.82, Cronbach’s α = .96, McDonald’s ω = .94; integrity (e.g., “just–unjust”), M = 5.59, SD = 0.89, Cronbach’s α = .95, McDonald’s ω = .90; and benevolence (e.g., “considerate–inconsiderate”), M = 5.67, SD = 0.89, Cronbach’s α = .95, McDonald’s ω = .91.
3.3.3 Covariates
Need for Cognition (NFC) was assessed using the German short version of the NFC scale developed by Beißert et al. [2015]. The four items (e.g., “I would prefer complex to simple problems.”) were measured on a scale from 1 (= “completely disagree”) to 7 (= “completely agree”), M = 4.77, SD = 1.05, Cronbach’s α = .69, McDonald’s ω = .70.
Attitude measures. To account for participants’ prior attitudes, five items assessing attitudes towards COVID-19 and the post-COVID-19 condition were administered at the beginning of the study. For instance, participants were asked whether they took the risk of long-term consequences of COVID-19 seriously. Answers were given on a scale ranging from 1 (= “completely disagree”) to 7 (= “completely agree”), M = 5.07, SD = 1.30, Cronbach’s α = .88, McDonald’s ω = .89.
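All scale scores above are item means, with internal consistency reported as Cronbach’s α and McDonald’s ω. As a minimal sketch of how the former is computed (the function and the simulated ratings are illustrative assumptions, not the study’s data or code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale sums
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated example: 301 respondents answering five 7-point attitude items.
rng = np.random.default_rng(0)
latent = rng.normal(5, 1, size=(301, 1))            # shared underlying trait
ratings = np.clip(np.round(latent + rng.normal(0, 0.6, size=(301, 5))), 1, 7)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
print(f"scale score (item mean) M = {ratings.mean(axis=1).mean():.2f}")
```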
4 Results
4.1 Analysis plan
To test the hypotheses, separate general linear models (GLM) were conducted for each dependent variable: post credibility and the three dimensions of senders’ trustworthiness: expertise, integrity, and benevolence. Each model included the evidence type (scientific vs. anecdotal) and comment type (neutral, negative-factual, negative-emotional) as fixed factors, and participants’ prior attitudes towards COVID-19 and NFC as covariates. We report model summaries, fixed effect estimates, and post-hoc comparisons for each dependent variable.
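The paper does not state which analysis software was used, so the following Python/statsmodels sketch is only one plausible translation of this plan; the data file and the column names (evidence, comment, attitude, nfc, and the four scale scores) are our assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per participant with factors, covariates,
# and averaged scale scores.
df = pd.read_csv("experiment_data.csv")  # assumed file name

# Sum-to-zero contrasts make the Type-III omnibus tests below interpretable.
rhs = "C(evidence, Sum) * C(comment, Sum) + attitude + nfc"

for dv in ["credibility", "expertise", "integrity", "benevolence"]:
    model = smf.ols(f"{dv} ~ {rhs}", data=df).fit()
    print(f"--- {dv} ---")
    print(anova_lm(model, typ=3))  # F-tests for factors, interaction, covariates
    print(model.params)            # fixed-effect estimates (b)
```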
4.2 Credibility of the posts
The overall model with credibility as the dependent variable was significant, F(7, 293) = 10.02, p < .001. We predicted that anecdotal evidence would make posts appear more credible than scientific evidence (H1a); however, this was not supported. Although a significant main effect was found, F(1, 301) = 5.15, p = .024, it favoured scientific evidence, b = .239, p = .024. On average, posts containing scientific evidence were rated as more credible, M = 5.47, SD = 1.11, than those containing anecdotal evidence, M = 5.25, SD = 1.13. Thus, H1a must be rejected. H2a predicted a decrease in credibility from neutral to negative-factual and negative-emotional comments; however, this was not supported, F(2, 301) = 0.32, p = .730.
Next, we assumed that posts containing anecdotal evidence with negative-factual and neutral comments would be rated as more credible than those with negative-emotional comments (H3a). Additionally, we assumed that posts containing scientific evidence with negative-emotional comments would be rated as more credible than those with neutral comments (H3b). The analysis showed no significant interaction effect between evidence and comment type, F(2, 301) = 1.71, p = .184. As we were particularly interested in the effects of comment negativity, we conducted follow-up post hoc tests to examine our hypotheses further. The results showed significant differences between posts containing scientific evidence paired with negative-emotional comments, M = 5.70, SD = 1.14, and posts containing anecdotal evidence with negative-emotional comments, M = 5.09, SD = 1.32, t(293) = -2.707, p = .007. Posts containing anecdotal evidence with negative-factual comments, M = 5.16, SD = 1.04, also differed significantly from posts containing scientific evidence with negative-emotional comments, t(293) = -2.414, p = .016 (see Figure 2). As no consistent differences emerged between evidence types depending on comment type, hypotheses 3a and 3b are not supported.
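The pairwise comparisons reported here are model-based t-tests (df = 293). A rough stand-in, reusing the hypothetical df from the sketch above, is to compare the six design cells on credibility directly; Welch t-tests are used below, so the values would not match the paper’s model-based tests exactly:

```python
from itertools import combinations
from scipy import stats

# Six cells: evidence (2 levels) x comment (3 levels).
cells = {name: g.to_numpy()
         for name, g in df.groupby(["evidence", "comment"])["credibility"]}
for a, b in combinations(cells, 2):
    t, p = stats.ttest_ind(cells[a], cells[b], equal_var=False)
    print(f"{a} vs. {b}: t = {t:.2f}, p = {p:.3f}")
```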
4.3 Trustworthiness of the sender
To test the remaining hypotheses, we conducted GLMs examining how evidence type (H1b), comment valence (H2b), and their interaction (H4a, H4b) influenced the three dimensions of epistemic trustworthiness: expertise, integrity, and benevolence. No significant main effects of evidence type or comments on the perceived trustworthiness of the communicators were found (a table providing an overview of all results is available in the OSF). Consequently, H1b and H2b are not supported, as neither comment type nor evidence type significantly affected perceptions of the communicators’ trustworthiness. However, the recipients’ prior attitudes were a significant predictor of each subdimension of epistemic trustworthiness. Participants who initially perceived COVID-19 as a severe and threatening condition rated the communicators higher in expertise, b = .373, p < .001, integrity, b = .363, p < .001, and benevolence, b = .410, p < .001.
H4a predicted that communicators using anecdotal evidence would be rated as less trustworthy when accompanied by negative-factual comments compared to neutral ones and as more trustworthy than those with negative-emotional comments. Conversely, H4b predicted the opposite pattern for scientific evidence. The GLM showed no significant overall interaction between comment and evidence type for the perceived expertise of the communicators, F(2, 301) = 2.00, p = .136. However, a significant interaction contrast between evidence type and comment valence (negative-emotional vs. neutral) was found, b = .523, p = .047. When posts were accompanied by negative-emotional comments, the communicators’ perceived expertise was rated lower if anecdotal evidence was used, M = 5.85, SD = 1.03, than if scientific evidence was used, M = 6.24, SD = 0.73, SE = 0.153, t(293) = -2.34, p = .020 (see Figure 3, left).
The remaining differences were not statistically significant. Therefore, neither H4a nor H4b can be fully supported. However, the main differences in perceived trustworthiness of the communicators tended to favour scientific evidence over anecdotal evidence, particularly when the posts were accompanied by negative-emotional (as opposed to neutral) comments.
Regarding the communicators’ perceived integrity, a significant interaction between comment type and evidence type was observed, F(2, 301) = 3.99, p = .020. Again, there was a significant contrast between scientific and anecdotal evidence for negative-emotional versus neutral comments, b = .737, p = .005. When posts were accompanied by negative-emotional comments, communicators using anecdotal evidence were rated lower on integrity, M = 5.43, SD = 0.95, than those using scientific evidence, M = 5.80, SD = 0.91, SE = 0.164, t(293) = -2.075, p = .039. Interestingly, when scientific evidence was used, integrity ratings were higher under negative-emotional comments, M = 5.80, SD = 0.91, than under neutral ones, M = 5.40, SD = 0.88, SE = 0.168, t(293) = -2.00, p = .046 (see Figure 3, right). Regarding perceived benevolence, no significant interaction emerged between evidence type and comment type, F(2, 301) = 2.62, p = .075. However, a contrast comparable to the patterns observed for expertise and integrity, between neutral and negative-emotional comments across evidence types, was significant, b = .586, p = .023. Unlike for the other dimensions, however, post hoc comparisons did not reveal any significant pairwise differences.
5 Discussion
When science communicators address socio-scientific issues in public discourse, they enter uncertain territory, as public reactions are often unpredictable. Especially with topics of high social relevance, such as the COVID-19 pandemic and its consequences, the discourse is often marked by (negative) emotions. We therefore examined whether science communicators benefit from using specific types of evidence. Our results show that the type of evidence influences both the message’s credibility and the communicator’s perceived trustworthiness. Contrary to initial assumptions, anecdotal evidence is not perceived as more credible than scientific evidence [e.g., Hinnant et al., 2016]. This suggests that science communicators should prioritise evidence-based communication, since anecdotes may trigger negative expectancy violations that reduce perceived credibility [Metzger et al., 2010; Metzger & Flanagin, 2013]. As this finding contrasts with previous research on impersonal or journalistic media posts [Hoeken, 2001; Wojcieszak & Kim, 2016], it highlights the unique role of scientists communicating directly via personal social media accounts — and the contextual nature of how evidence types are perceived.
Moreover, the effects of negative comments reveal important nuances. Based on previous research, we assumed that negative comments would shape the evaluation outcome [e.g., Winter et al., 2015], especially when reinforced by emotions [Van Kleef et al., 2015]. In contrast to previous studies on the harmful effects of negative comments [e.g., Naab et al., 2020; Winter et al., 2015], our study found no such effects of comments in isolation. Although explicitly targeting science communicators’ motives or work can harm their trustworthiness [Gierth & Bromme, 2020], our results suggest that plain or emotional negativity in comments alone does not. This underlines that general negativity may pose less of a challenge for science communicators than previously assumed and points to boundary conditions for negativity biases [Baumeister et al., 2001].
These findings further highlight the importance of taking a nuanced approach to assessing online negativity. Our findings indicate that the impact of negative comments is contingent on the type of evidence used. Previous research proposed that science communicators, like other communicators, could benefit from anecdotal evidence due to its vividness and ease of processing [e.g., Marsh & Yang, 2018], as long as no contradicting cues are present. As discussed above, science communicators did not benefit from using anecdotal evidence, even when the comments were neutral. In line with this reasoning, we expected posts containing scientific evidence to be evaluated more positively when accompanied by negative comments, as they do not violate expectations about scientific communication. Although we did not find large effects, there was evidence of this type of interaction for perceived communicator integrity and a similar, though less pronounced and consistent, pattern for message credibility and perceived expertise, providing some support for this assumption.
Communicators were perceived as more sincere and honest when they shared anecdotal evidence, but only in the presence of neutral comments — an effect consistent with research on authenticity in science communication [Saffran et al., 2020]. In contrast, perceived integrity was lower when scientific evidence was paired with neutral comments. This pattern reversed when the comments were negative. In these cases, communicators using scientific evidence were rated higher in integrity. When comments are negative and emotional, recipients may respond with resistance, rejecting the persuasive appeal of anecdotal evidence. The strong effect of participants’ prior attitudes — reflecting their perception of COVID-19 and post-COVID-19 as serious threats, as portrayed in the stimuli — supports an additional explanation: reactance may arise when negative comments conflict with pre-existing attitudes [Kim et al., 2021; Lu & Liang, 2024].
In terms of communicators’ perceived expertise and message credibility, we observed a less pronounced pattern, as there was no difference in evaluations based on evidence type when neutral comments were presented; however, when comments expressed stronger emotions (e.g., anger) and negativity, recipients rated the messages and senders of posts with scientific evidence more positively. Simultaneously, negative comments weakened the credibility of posts relying on anecdotal evidence and the perceived expertise of their senders. From the perspective of the EVT [Burgoon, 1993, 2015], anecdotal evidence caused a negative expectancy violation, resulting in a less favourable evaluation due to the deviation from the expected use of scientific evidence. As anecdotal evidence does not meet the rigorous standards typically expected of scientific evidence [Dahlstrom, 2014], this violation may have ultimately undermined the perceived credibility of the message and the perceived expertise of the communicators, particularly in the presence of negative comments.
Interestingly, communicators’ perceived benevolence was not affected by the type of evidence and comments present. Although negative-emotional comments could have affected this, communicators’ perceived benevolence remained largely unchanged and relatively high. This finding suggests that benevolence may either be a stable trait of science communicators that negative comments cannot easily damage or that negative comments must specifically affect the communicators themselves to do so [Gierth & Bromme, 2020], as the negativity did not transfer to perceptions of benevolence.
Lastly, the results underline that prior attitudes towards a topic are more important in evaluating communicators than the surrounding discourse and the type of evidence used. The findings indicate that recipients’ prior attitudes influence their evaluation of online content and subsequent opinion formation [Gierth & Bromme, 2020]. Additionally, prior attitudes towards COVID-19 in general consistently emerged as a significant predictor of perceptions about the post-COVID-19 condition. This emphasises the significance of broad mechanisms, such as confirmation bias [Knobloch-Westerwick et al., 2015], in shaping the perception of science communication on social media, which must be taken into account when aiming to promote an informed society.
5.1 Limitations and further research
The main limitation of the present study is the limited generalisability of the findings, as the investigation was restricted to the post-COVID-19 condition. Despite the different aspects (diagnosis, duration, symptoms) covered in the material, further research on the impact of evidence and comments should examine other issues, ideally with varying degrees of surrounding controversy and potential for negativity. This is of utmost importance given the unexpectedly beneficial effect of scientific over anecdotal evidence, which contradicts prior research. We also encourage further research into the different types and forms of evidence used in scientific discourse, including an examination of the relative merits of different types of evidence, such as qualitative, quantitative, or theoretically based evidence. Changing the topic and type of evidence analysed would also allow for a more robust examination and replication of the fine-grained differences observed, particularly given the small differences found in the post-hoc analyses of perceived credibility.
Another limitation lies in the experimental nature of this study. While the design allowed for a controlled manipulation of evidence and comment types, real-world platforms typically feature multiple comments and interactions, which can jointly shape user perceptions. Future studies should aim to include richer comment threads to reflect the complexity of online discourse better. Similarly, while this study provides initial insights into the differential effects of negative-factual and negative-emotional comments on evaluating science communicators online, it is important to note that our investigation was limited to a single emotional state: anger. Therefore, research should investigate other emotions (e.g., fear), which play a pivotal role in situations where trust in scientific expertise is paramount, such as during crises, and can influence information processing in ways that differ from anger [e.g., Nabi, 2002]. Furthermore, given that emotional contagion on social media may be less prevalent for negative emotions than positive ones [Gutierrez et al., 2022], future research should also examine how positive emotions in comments may influence expectation violations and the evaluation of science communicators.
Lastly, although we included participants’ NFC, we did not explicitly consider their situational capabilities and goals. Kruglanski and Thompson [1999] argue that while all information is in principle equally important in evaluating a persuasive message, it is weighted according to the individual’s capabilities, as other theoretical accounts also suggest. A more comprehensive understanding of the processing conditions could be achieved by considering individual motivation and goals, such as the pursuit of accurate information [e.g., Chaiken et al., 1996].
5.2 Conclusion
In today’s media environment, scientists can communicate their findings directly to the public, bypassing the traditional intermediary role of journalists. This shift in communication dynamics presents both opportunities and risks, underscoring the need for carefully planned science communication. Against the backdrop of lay epistemic theory and expectancy violation theory, the results of this study show that scientific evidence can enhance the credibility of messages and protect science communicators against a decrease in trustworthiness, especially when negative emotions characterise online discussions. In particular, when anecdotal evidence was used, perceptions of expertise and integrity suffered from the detrimental effects of negativity. For this reason, we recommend that scientists rely on scientific evidence when engaging in (emotionally charged) debates on social media to avoid harm from such comments and to maintain credibility in communicating with the public. Nevertheless, the impact of simple negative comments appears to be less pronounced in science communication than in other, more opinion-based domains. Ultimately, this study demonstrates that the well-researched effects observed in traditional media contexts cannot be directly transferred to scientists who engage with the public on social media.
Acknowledgments
This research is part of the project ‘Science communication during pandemics: the role of public engagement in social media discussions’, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) — 458609429. Grant applicants are Nicole Krämer, Monika Taddicken, and Stefan Stieglitz.
References
Allen, M., & Preiss, R. W. (1997). Comparing the persuasiveness of narrative and statistical evidence using meta-analysis. Communication Research Reports, 14(2), 125–131. https://doi.org/10.1080/08824099709388654
Anderson, A. A., Yeo, S. K., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2018). Toxic talk: how online incivility can undermine perceptions of media. International Journal of Public Opinion Research, 30(1), 156–168. https://doi.org/10.1093/ijpor/edw022
Appelman, A., & Sundar, S. S. (2016). Measuring message credibility: construction and validation of an exclusive scale. Journalism & Mass Communication Quarterly, 93(1), 59–79. https://doi.org/10.1177/1077699015606057
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370. https://doi.org/10.1037/1089-2680.5.4.323
Beißert, H., Köhler, M., Rempel, M., & Beierlein, C. (2015). Deutschsprachige Kurzskala zur Messung des Konstrukts Need for Cognition NFC-K. Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS). https://doi.org/10.6102/zis230
Burgoon, J. K. (1993). Interpersonal expectations, expectancy violations, and emotional communication. Journal of Language and Social Psychology, 12(1–2), 30–48. https://doi.org/10.1177/0261927x93121003
Burgoon, J. K. (2015). Expectancy violations theory. In C. R. Berger, M. E. Roloff, S. R. Wilson, J. P. Dillard, J. Caughlin & D. Solomon (Eds.), The International Encyclopedia of Interpersonal Communication (pp. 1–9). Wiley. https://doi.org/10.1002/9781118540190.wbeic102
Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: the life and times of individuals varying in need for cognition. Psychological Bulletin, 119(2), 197–253. https://doi.org/10.1037/0033-2909.119.2.197
Chaiken, S., Giner-Sorolla, R., & Chen, S. (1996). Beyond accuracy: defense and impression motives in heuristic and systematic information processing. In P. M. Gollwitzer & J. A. Bargh (Eds.), The psychology of action: linking cognition and motivation to behavior (pp. 553–578). The Guilford Press.
Dahlstrom, M. F. (2014). Using narratives and storytelling to communicate science with nonexpert audiences. Proceedings of the National Academy of Sciences, 111(supplement_4), 13614–13620. https://doi.org/10.1073/pnas.1320645111
Drescher, L. S., Roosen, J., Aue, K., Dressel, K., Schär, W., & Götz, A. (2021). The spread of COVID-19 crisis communication by German public authorities and experts on Twitter: quantitative content analysis. JMIR Public Health and Surveillance, 7(12), e31834. https://doi.org/10.2196/31834
Dudley, M. Z., Squires, G. K., Petroske, T. M., Dawson, S., & Brewer, J. (2023). The use of narrative in science and health communication: a scoping review. Patient Education and Counseling, 112, 107752. https://doi.org/10.1016/j.pec.2023.107752
Fan, R., Zhao, J., Chen, Y., & Xu, K. (2014). Anger is more influential than joy: sentiment correlation in Weibo. PLoS ONE, 9(10), e110184. https://doi.org/10.1371/journal.pone.0110184
Freling, T. H., Yang, Z., Saini, R., Itani, O. S., & Rashad Abualsamh, R. (2020). When poignant stories outweigh cold hard facts: a meta-analysis of the anecdotal bias. Organizational Behavior and Human Decision Processes, 160, 51–67. https://doi.org/10.1016/j.obhdp.2020.01.006
Gierth, L., & Bromme, R. (2020). Attacking science on social media: how user comments affect perceived trustworthiness and credibility. Public Understanding of Science, 29(2), 230–247. https://doi.org/10.1177/0963662519889275
Gutierrez, J. P. G., Sanchez, A. M., Ibunes, A., Dela Cruz, D. H., Vasquez, T. A., & Reyes, J. (2022). Evidence of digital emotion contagion: an experimental study on the emotional social media posts on university students’ attitude. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4109944
Hasford, J., Hardesty, D. M., & Kidwell, B. (2015). More than a feeling: emotional contagion effects in persuasive communication. Journal of Marketing Research, 52(6), 836–847. https://doi.org/10.1509/jmr.13.0081
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1993). Emotional contagion. Current Directions in Psychological Science, 2(3), 96–100. https://doi.org/10.1111/1467-8721.ep10770953
Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring laypeople’s trust in experts in a digital age: the Muenster Epistemic Trustworthiness Inventory (METI): three datasets. Research Data Center at ZPID, Leibniz Institute for Psychology. https://doi.org/10.5160/psychdata.hsfe15mu08
Hinnant, A., Subramanian, R., & Young, R. (2016). User comments on climate stories: impacts of anecdotal vs. scientific evidence. Climatic Change, 138(3–4), 411–424. https://doi.org/10.1007/s10584-016-1759-1
Hoeken, H. (2001). Anecdotal, statistical, and causal evidence: their perceived and actual persuasiveness. Argumentation, 15(4), 425–437. https://doi.org/10.1023/a:1012075630523
Joubert, M., Guenther, L., Metcalfe, J., Riedlinger, M., Chakraborty, A., Gascoigne, T., Schiele, B., Baram-Tsabari, A., Malkov, D., Fattorini, E., Revuelta, G., Barata, G., Riise, J., Schröder, J. T., Horst, M., Kaseje, M., Kirsten, M., Bauer, M. W., Bucchi, M., … Chen, T. (2023). ‘Pandem-icons’ — exploring the characteristics of highly visible scientists during the Covid-19 pandemic. JCOM, 22(01), A04. https://doi.org/10.22323/2.22010204
Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In K. J. Holyoak & R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning (pp. 267–293). Cambridge University Press.
Kim, H., Seo, Y., Yoon, H. J., Han, J. Y., & Ko, Y. (2021). The effects of user comment valence of Facebook health messages on intention to receive the flu vaccine: the role of pre-existing attitude towards the flu vaccine and psychological reactance. International Journal of Advertising, 40(7), 1187–1208. https://doi.org/10.1080/02650487.2020.1863065
Knobloch-Westerwick, S., Johnson, B. K., Silver, N. A., & Westerwick, A. (2015). Science exemplars in the eye of the beholder: how exposure to online science information affects attitudes. Science Communication, 37(5), 575–601. https://doi.org/10.1177/1075547015596367
Kohout, S., Kruikemeier, S., & Bakker, B. N. (2023). May I have your attention, please? An eye tracking study on emotional social media comments. Computers in Human Behavior, 139, 107495. https://doi.org/10.1016/j.chb.2022.107495
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111
Kruglanski, A. W. (1990). Lay epistemic theory in social-cognitive psychology. Psychological Inquiry, 1(3), 181–197. https://doi.org/10.1207/s15327965pli0103_1
Kruglanski, A. W., Chen, X., Pierro, A., Mannetti, L., Erb, H.-P., & Spiegel, S. (2006). Persuasion according to the unimodel: implications for cancer communication. Journal of Communication, 56(suppl_1), S105–S122. https://doi.org/10.1111/j.1460-2466.2006.00285.x
Kruglanski, A. W., Orehek, E., Dechesne, M., & Pierro, A. (2010). Lay epistemic theory: the motivational, cognitive, and social aspects of knowledge formation. Social and Personality Psychology Compass, 4(10), 939–950. https://doi.org/10.1111/j.1751-9004.2010.00308.x
Kruglanski, A. W., & Thompson, E. P. (1999). Persuasion by a single route: a view from the unimodel. Psychological Inquiry, 10(2), 83–109. https://doi.org/10.1207/s15327965pli1002_1
Lee, E.-J., & Jang, Y. J. (2010). What do others’ reactions to news on internet portal sites tell us? Effects of presentation format and readers’ need for cognition on reality perception. Communication Research, 37(6), 825–846. https://doi.org/10.1177/0093650210376189
Lu, S., & Liang, H. (2024). Reactance to uncivil disagreement? The integral effects of disagreement, incivility, and social endorsement. Journal of Media Psychology: Theories, Methods, and Applications, 36(1), 15–26. https://doi.org/10.1027/1864-1105/a000378
Maier, M., Milde, J., Post, S., Günther, L., Ruhrmann, G., & Barkela, B. (2016). Communicating scientific evidence: scientists’, journalists’ and audiences’ expectations and evaluations regarding the representation of scientific uncertainty. Communications, 41(3), 239–264. https://doi.org/10.1515/commun-2016-0010
Marsh, E. J., & Yang, B. W. (2018). Believing things that are not true: a cognitive science perspective on misinformation. In B. G. Southwell, E. A. Thorson & L. Sheble (Eds.), Misinformation and mass audiences (pp. 15–34). University of Texas Press. https://doi.org/10.7560/314555-003
Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: the use of cognitive heuristics. Journal of Pragmatics, 59, 210–220. https://doi.org/10.1016/j.pragma.2013.07.012
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413–439. https://doi.org/10.1111/j.1460-2466.2010.01488.x
Moore, A., & Stilgoe, J. (2009). Experts and anecdotes: the role of “anecdotal evidence” in public scientific controversies. Science, Technology, & Human Values, 34(5), 654–677. https://doi.org/10.1177/0162243908329382
Naab, T. K., Heinbach, D., Ziegele, M., & Grasberger, M.-T. (2020). Comments and credibility: how critical user comments decrease perceived news article credibility. Journalism Studies, 21(6), 783–801. https://doi.org/10.1080/1461670x.2020.1724181
Nabi, R. (2002). Anger, fear, uncertainty, and attitudes: a test of the cognitive-functional model. Communication Monographs, 69(3), 204–216. https://doi.org/10.1080/03637750216541
Nemes, L., & Kiss, A. (2021). Social media sentiment analysis based on COVID-19. Journal of Information and Telecommunication, 5(1), 1–15. https://doi.org/10.1080/24751839.2020.1790793
Nölleke, D., Leonhardt, B. M., & Hanusch, F. (2023). “The chilling effect”: medical scientists’ responses to audience feedback on their media appearances during the COVID-19 pandemic. Public Understanding of Science, 32(5), 546–560. https://doi.org/10.1177/09636625221146749
Oyebode, O., Ndulue, C., Adib, A., Mulchandani, D., Suruliraj, B., Orji, F. A., Chambers, C. T., Meier, S., & Orji, R. (2021). Health, psychosocial, and social issues emanating from the COVID-19 pandemic based on social media comments: text mining and thematic analysis approach. JMIR Medical Informatics, 9(4), e22734. https://doi.org/10.2196/22734
Ross Arguedas, A. A., Badrinathan, S., Mont’Alverne, C., Toff, B., Fletcher, R., & Nielsen, R. K. (2024). Shortcuts to trust: relying on cues to judge online news from unfamiliar sources on digital platforms. Journalism, 25(6), 1207–1229. https://doi.org/10.1177/14648849231194485
Royan, R., Pendergrast, T. R., Woitowich, N. C., Trueger, N. S., Wooten, L., Jain, S., & Arora, V. M. (2023). Physician and biomedical scientist harassment on social media during the COVID-19 pandemic. JAMA Network Open, 6(6), e2318315. https://doi.org/10.1001/jamanetworkopen.2023.18315
Saffran, L., Hu, S., Hinnant, A., Scherer, L. D., & Nagel, S. C. (2020). Constructing and influencing perceived authenticity in science communication: experimenting with narrative. PLoS ONE, 15(1), e0226711. https://doi.org/10.1371/journal.pone.0226711
Schröder, J. T., Brück, J., & Guenther, L. (2025). Identifying trust cues: how trust in science is mediated in content about science. JCOM, 24(01), A06. https://doi.org/10.22323/2.24010206
Stieglitz, S., & Dang-Xuan, L. (2013). Emotions and information diffusion in social media — sentiment of microblogs and sharing behavior. Journal of Management Information Systems, 29(4), 217–248. https://doi.org/10.2753/mis0742-1222290408
Sundar, S. S. (2008). The MAIN model: a heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73–100). The MIT Press.
Sundar, S. S., Xu, Q., & Dou, X. (2019). The role of technology in online persuasion: a MAIN model perspective. In S. Rodgers & E. Thorson (Eds.), Advertising theory (2nd ed., pp. 70–88). Routledge. https://doi.org/10.4324/9781351208314
Szczuka, J. M., Meinert, J., & Krämer, N. C. (2024). Listen to the scientists: effects of exposure to scientists and general media consumption on cognitive, affective, and behavioral mechanisms during the COVID-19 pandemic. Human Behavior and Emerging Technologies, 2024, 8826396. https://doi.org/10.1155/2024/8826396
Van Kleef, G. A. (2009). How emotions regulate social life: the emotions as social information (EASI) model. Current Directions in Psychological Science, 18(3), 184–188. https://doi.org/10.1111/j.1467-8721.2009.01633.x
Van Kleef, G. A., De Dreu, C. K. W., & Manstead, A. S. R. (2010). An interpersonal approach to emotion in social decision making: the emotions as social information model. In M. P. Zanna (Ed.), Advances in Experimental Social Psychology (pp. 45–96, Vol. 42). Elsevier. https://doi.org/10.1016/s0065-2601(10)42002-x
Van Kleef, G. A., van den Berg, H., & Heerdink, M. W. (2015). The persuasive power of emotions: effects of emotional expressions on attitude formation and change. Journal of Applied Psychology, 100(4), 1124–1142. https://doi.org/10.1037/apl0000003
Waddell, T. F. (2020). The authentic (and angry) audience: how comment authenticity and sentiment impact news evaluation. Digital Journalism, 8(2), 249–266. https://doi.org/10.1080/21670811.2018.1490656
Winter, S., Brückner, C., & Krämer, N. C. (2015). They came, they liked, they commented: social influence on Facebook news channels. Cyberpsychology, Behavior, and Social Networking, 18(8), 431–436. https://doi.org/10.1089/cyber.2015.0005
Winter, S., & Krämer, N. C. (2016). Who’s right: the author or the audience? Effects of user comments and ratings on the perception of online science articles. Communications, 41(3), 339–360. https://doi.org/10.1515/commun-2016-0008
Wojcieszak, M., & Kim, N. (2016). How to improve attitudes toward disliked groups: the effects of narrative versus numerical evidence on political persuasion. Communication Research, 43(6), 785–809. https://doi.org/10.1177/0093650215618480
World Health Organization. (2022, December 7). Post COVID-19 condition (Long COVID). https://www.who.int/europe/news-room/fact-sheets/item/post-covid-19-condition
Xu, J. (2023). A meta-analysis comparing the effectiveness of narrative vs. statistical evidence: health vs. non-health contexts. Health Communication, 38(14), 3113–3123. https://doi.org/10.1080/10410236.2022.2137750
Zebregs, S., van den Putte, B., Neijens, P., & de Graaf, A. (2015). The differential impact of statistical and narrative evidence on beliefs, attitude, and intention: a meta-analysis. Health Communication, 30(3), 282–289. https://doi.org/10.1080/10410236.2013.842528
Zhao, X., & Tsang, S. J. (2024). How people process different types of health misinformation: roles of content falsity and evidence type. Health Communication, 39(4), 741–753. https://doi.org/10.1080/10410236.2023.2184452
Zillmann, D. (1999). Exemplification theory: judging the whole by some of its parts. Media Psychology, 1(1), 69–94. https://doi.org/10.1207/s1532785xmep0101_5
Zillmann, D. (2006). Exemplification effects in the promotion of safety and health. Journal of Communication, 56(suppl_1), S221–S237. https://doi.org/10.1111/j.1460-2466.2006.00291.x
About the authors
Bianca Nowak is a post-doctoral researcher at the Research Center for Trustworthy Data Science and Security, University of Duisburg-Essen, Germany. Her research interests include user-centred science communication on social media platforms, (dis-)trust in science, and the role of AI in science communication.
E-mail: bianca.nowak@uni-due.de
Nicole Krämer is a Full Professor of the Department of Social Psychology: Media and Communication at the University of Duisburg-Essen, Germany. Her research interests include computer-mediated communication, especially social media usage as well as social aspects of human-technology interaction. She investigates the processes of information selection, opinion building, as well as science communication.
E-mail: nicole.kraemer@uni-due.de