1 Introduction
1.1 Context
In the age of digital information and its rapid sharing, people increasingly rely on the Internet and social media when consuming science- or health-related information [Diaz et al., 2002; Rosenberg et al., 2020], with the known risk of running into fake news and misinformation [Pershad et al., 2018], the consumption of which can have deleterious effects, especially in times of pandemics [Rocha et al., 2023]. Crucially, information is usually not presented in isolation (i.e., as a simple fact or a collection of data), but discussed or nuanced by several types of commentators with different degrees of expertise [Houtman et al., 2021]. Particularly in the case of medical advances (whether preventive, diagnostic, or therapeutic in nature), the information process often follows specific stages: first, research and related findings are published in specialized peer-reviewed journals [Scott, 2007]; then institutional bodies take up such research and eventually decide to approve or recommend the use of drugs or treatments resulting from those findings [Lipsky & Sharp, 2001]; official guidelines are communicated to professionals so that they can adapt their practices [Regidor et al., 2007]; and finally, various commentators are invited to discuss those advances with the general public through the mainstream media [Shen, 2019], with such invitations being made by journalists on the basis of editorial decisions. For example, during the Covid outbreak, talk shows featured experts of various kinds (directly chosen by the shows’ editors) discussing the usefulness of vaccination [Mihelj et al., 2022], even though such measures had already been approved by the World Health Organization and local institutions well before these expert debates intensified.
The role of intermediaries in discussing advances in medicine is particularly important, since it can directly influence their reception by the public and, consequently, the number of people who decide to undergo a certain procedure, or not. To further frame the role of commentators, it is useful to distinguish two aspects that might affect the public reception of their comments: their expertise in the field in which they are invited to discuss; and their actual opinions about the topic at stake. We further detail these two aspects in the following paragraphs.
Expertise can be loosely defined as the possession of high knowledge in a certain domain as well as a proven ability to perform specific actions within that domain [Goldman, 2018]; in turn, such characteristics make experts perceived as knowledgeable and credible within their domain [Petty & Cacioppo, 1986]. This type of expertise has been termed contributory [Collins et al., 2016], since it directly involves skills embodied in practice: an example of a contributory expert would be a medical doctor, who is expected to have both the knowledge and the practical skills within the health domain. When an expert only possesses the knowledge and/or the specific “language” of a certain domain, their expertise is considered interactional [Collins & Evans, 2015], since it only interacts with the knowledge of the domain, without participating in its production or application. An example of an interactional expert would be a journalist, who often masters the specific language of their domain without actively contributing to it or being able to perform its specific actions. Lastly, expertise can be defined depending on the number of people it refers to: in this sense, individual expertise (e.g., a single medical doctor) can be distinguished from collective expertise [e.g., the community of medical doctors, who are expected to be seen as having a collective and rational epistemic authority; Jäger, 2023]. It is also worth noting that, from a cognitive perspective, experts do not merely possess greater knowledge: at least within their domain of expertise, they also organize that knowledge in a more structured way, better apprehend the “gist” of the information presented, and deploy highly flexible reasoning skills [for a review of these and other findings, see Hoffman, 1998].
Opinions refer to the viewpoints people hold about a specific topic or informational content. They are often expressed through statements and can be positive or negative (i.e., favorable or unfavorable towards the topic at stake). The assertiveness with which such opinions are expressed can also vary: different people holding a positive opinion may show more or less fervor in defending it, ranging from a rather neutral stance to a much more impassioned position. Assertiveness is a highly multidimensional construct [Norton & Warnick, 1976], but, within the scope of our work, we operationally define it as the degree of appreciation/excitement in expressing a certain opinion.
An orthogonal aspect pertains to the extent to which such cues are consciously perceived by the public as factors that should affect their judgments. In other words, there might be a gap between the influence the public believes these factors should have and the influence they actually have on people’s beliefs. Such a gap between the normative, expected role of a given factor and the descriptive role measured in people’s actual behaviors and beliefs has been widely described in several fields. These include behavioral economics [humans often do not behave in line with the normative theory that would optimally maximize their utility; e.g., Morgenstern, 1972; Tversky, 1975; Luce & von Winterfeldt, 1994] and cognitive psychology [humans often do not perceive things in the way they think they should, as if actual perception were not penetrable by rational cognition; e.g., Pylyshyn, 1999; Stokes, 2013; Ciccione et al., 2023]. Whether such a gap also exists within the context of paraverbal and contextual cues in health communication remains to be studied.
The role of paraverbal and contextual cues such as the expertise of the commentator, as well as the possible tension between how these factors should influence people’s beliefs and how they actually influence them, can be considered in light of the Elaboration Likelihood Model proposed by Petty and Cacioppo [1986]. According to this model, when individuals face a certain message, they can elaborate it through two possible routes: one involving a high level of elaboration, the other a low level. Following the first path means engaging in a central route of rational reasoning that considers only the informational content of the message at stake and not its superficial features. Following the second path means using a peripheral route that does not carefully consider the actual content of the message but rather builds on some of its superficial features, such as the tone used to express it or the characteristics of the person transmitting it. In the case of medical advances officially approved by health institutions, an individual might (normatively) be expected to use the central route and to value only the core of the message: if the medical advance has been approved by an official governmental institute, there should not be many reasons to doubt it. On the other hand, people could well be affected by peripheral aspects of the message, such as the expertise or assertiveness of the commentators asked to discuss such advances. Crucially, people might be influenced by these cues even if they are aware that these cues should not necessarily impact them.
1.2 State of research
While commentators can certainly be defined through the prism of many factors (such as displayed emotions, gender, age…), we focus on expertise and opinions (with varying degrees of assertiveness) since they have been well studied in persuasion and communication research [e.g., Petty & Cacioppo, 1986; Ames & Flynn, 2007; Klucharev et al., 2008; Huang et al., 2023; Omura et al., 2017]. While expertise is often associated with trust and epistemic authority [Stewart, 2020], opinions expressed with assertiveness can act as persuasive cues that either strengthen or undermine perceived credibility, depending on context [e.g., Ames, 2009]. Furthermore, experimental and observational studies in social and cognitive psychology have specifically explored the effects of expertise and opinions in the health domain.
Concerning opinions, people seem to be sensitive to the number of individuals sharing a certain viewpoint: for example, agreement between experts (i.e., consensus) in their opinions about climate change increases the public’s belief that the phenomenon is real [van der Linden, Leiserowitz et al., 2015]; similarly, perceived agreement in opinions about the utility and safety of vaccines decreases public concerns about vaccination and improves public support for vaccination policies [van der Linden, Clarke & Maibach, 2015]. The assertiveness with which opinions are expressed also seems to have an influence: for example, assertive communication training has been shown to be effective among healthcare professionals [Omura et al., 2017], and even AI-based agents can change clinicians’ attitudes and behaviors depending on the assertiveness with which they express their stances [Calisto et al., 2023].
Concerning expertise, people seem to be positively influenced by the expertise of their health-care providers: for example, cancer patients place more trust in physicians they perceive as knowledgeable [Blödt et al., 2021]; young adults recognize that physicians should be seen as the top experts in health care [Mendes et al., 2017]; individuals who most trusted medical experts during the Covid pandemic showed greater willingness to vaccinate [Jennings et al., 2021]. Note, however, that these findings are not always replicated: a recent study suggests that people do not always perceive experts as more trustworthy than non-experts in the health domain [Geiger, 2022]. Also, having a higher scientific literacy seems to make people more sensitive to expertise cues across different scientific domains [van Antwerpen et al., 2025].
It is important to note that in most of these studies, the expert was described as the primary source of information. In other words, the information content was directly presented by a scientist, a physician, or a health-care provider. As pointed out in the previous section, however, the general public is often exposed to an external commentator’s opinion on medical measures and advances that have already been accepted and/or deployed by institutional entities. In this context, experts provide an opinion about such measures, rather than transmitting the information per se. Given the large importance attributed to experts when medical advances are discussed in media communication contexts, it is crucial to disentangle the potential influence that these commentators have on lay people’s trust and behaviors when expressing an opinion about such advances on a public platform. Several authors have advocated for considering the role played by these “intermediaries” (i.e., those linking the public with science) in affecting trust towards science [e.g., Reif & Guenther, 2021]. A systematic review by the same authors, however, found that most studies focused on trust in science rather than trust in science’s intermediaries, such as commentators not directly involved in the scientific advance itself. It is worth noting that the platform (e.g., live broadcasts or newspapers) in which intermediaries express their opinions can be seen as an intermediary itself, since it connects the institutional entities with the public. Although the study of the role played by such platforms in science communication is outside the scope of this manuscript, we invite interested readers to consult the following articles, which discuss the topic in detail [Dunwoody, 2021; Brossard, 2013; Kleis Nielsen & Ganter, 2018].
1.3 The current study
Based on the studies reviewed so far, two research questions remain, to our knowledge, open: what is the role of a commentator’s expertise and opinion in shaping people’s trust, beliefs and decisions about medical advances? If such a biasing role is found, are readers aware of this bias?
Concretely, in our work, we wanted to study the combined effects of both expertise (either contributory or interactional, collective or individual) and opinions (against, neutral, in favor or assertively in favor) on the public’s reception of institutional health decisions, recreating a context similar to that encountered in real-life situations. Crucially, we also wanted to investigate whether there is a tension between normative expectations (i.e., the role that those factors should have on people’s attitudes and beliefs) and psychological reality (i.e., the role that those factors have in shaping people’s attitudes and beliefs). Lastly, we also wanted to investigate whether several demographic factors (such as gender, age, level of study, conspiracy beliefs, and political leanings) play a role in shaping public reception of medical advances.
2 Methods
2.1 Hypotheses
We sought to recreate a context as ecological as possible, in which information about a new medical tool was presented as approved and already deployed and, subsequently, discussed by commentators. We pre-registered an experimental design (and the subsequent statistical analyses) in which we manipulated the level of expertise (3 levels: a journalist, a medical doctor, the community of medical doctors) and the opinion (4 levels: against, neutral, moderately in favor, highly in favor) of the commentator discussing the announced newly approved tool. We investigated how these two factors influence the reader’s trust in the commentator, as well as their beliefs and decisions about the medical tool. We also investigated the reader’s perception of how influential the informant’s expertise and assertive opinions are (and should be) in their assessments of medical advances. In particular, we chose to conduct a large-scale online study in order to control as much as possible for confounding factors such as gender or highest level of study. Note that in the pre-registration we referred to “assertiveness” rather than “opinion”; thanks to the useful feedback of an anonymous reviewer of the manuscript, however, we realized that referring only to assertiveness could be misleading, since we varied both the polarity of the commentator’s opinion (whether against or in favor) and its assertiveness (i.e., the level of excitement/endorsement in expressing their opinion). For this reason, we will refer to “opinion” instead of “assertiveness” throughout the manuscript, and we will present and discuss the role of assertiveness whenever appropriate.
We hypothesized that participants’ trust in the communicator, viewpoint on the medical tool and willingness to use such tool, would be influenced by the commentator’s expertise and opinion. This hypothesis was pre-registered prior to data collection, and the full materials and preregistration details — including the questionnaire — are available in the OSF repository (https://osf.io/4uqmr and https://osf.io/bnfyx). No directional hypotheses were made concerning the perception of the commentator’s expertise and opinions in shaping the readers’ beliefs. Statistical analyses were performed using R (version 4.4.2). All packages used and statistical details about the analyses are available in the OSF repository indicated above.
2.2 Study design
This study employed a randomized design. It was conducted online and was approved by the local ethics committee. Participants were randomly assigned to a particular experimental condition (as explained below). 1984 participants (out of 2604) completed the study. Of these, 230 did not complete the whole demographic questionnaire, but we included them in the study since they responded to all other relevant questions. Among the participants who responded to the demographic questionnaire (1754 in total), 1487 declared being women, 253 men, and 14 non-binary. The mean age was 28.9 years (± 10.4 years). In terms of highest academic degree, 19 participants obtained a middle school diploma, 285 a high school diploma, 715 an undergraduate degree, 579 a master’s degree, and 156 a Ph.D.
Participants were recruited through social media and mailing lists, with no monetary compensation offered. The study was open to French-speaking individuals (since French was the language used in the study), aged 18 and above. 1676 participants (i.e., the large majority) declared living in France. Prior to the experiment, each participant was informed that they could withdraw without providing any justification.
2.3 Randomization process and demographic aspects
Participants were randomly assigned to one of two scenarios (“new vaccine against colon cancer” — 768 participants — or “new X-ray prevention test against heart attack” — 1216 participants). The imbalance produced by the randomization process, with more participants assigned to the prevention-test condition, was not expected, since the internet link to the online study was designed to randomly assign each unique user to one of the two conditions. Since only the data from participants who completed the study were recorded (i.e., only those who pressed the “submit” button at the end), a possible explanation of the imbalance is that participants who saw that the study was related to vaccines were less likely to finish it (probably because the topic was considered more controversial). The reason why we included a scenario about vaccines was precisely the well-known and much-debated reluctance of a large portion of the population to accept vaccination policies and, therefore, the importance of studying the factors that might influence such acceptance. We included the other scenario (i.e., the prevention test) in order to increase the potential generalizability of the study’s results.
The key demographics described above (for those who answered the whole demographic questionnaire) were evenly distributed across the two scenarios (see Table 1). The two scenarios did not differ significantly in gender distribution, χ2(2, N = 1754) = 2.93, p = .23, nor in the highest level of education, χ2(4, N = 1754) = 1.49, p = .83. Participants in the prevention-test condition were, however, slightly older (M = 29.4) than those in the vaccine condition (M = 28.1), Welch’s t(1614) = 2.67, p < .01.
Table 1. Demographic distribution across the two scenarios.

| Demographic aspect | Category | Prevention test condition | Vaccine condition |
| --- | --- | --- | --- |
| Gender | Men | 144 | 109 |
|  | Women | 901 | 586 |
|  | Non-binary | 6 | 8 |
| Age | Mean age (years) | 29.4 | 28.1 |
| Highest degree | Middle school | 10 | 9 |
|  | High school | 165 | 120 |
|  | Undergraduate degree | 431 | 284 |
|  | Master’s degree | 347 | 232 |
|  | Ph.D. | 98 | 58 |
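The balance check reported above can be reproduced from the counts in Table 1 with a standard chi-square test of independence. The sketch below recomputes the gender-by-scenario statistic in standard-library Python, shown purely for illustration (the paper’s analyses were run in R):

```python
# Chi-square test of independence on the gender-by-scenario counts
# from Table 1 (prevention-test vs. vaccine condition).
observed = {
    "men":        (144, 109),
    "women":      (901, 586),
    "non_binary": (6, 8),
}

rows = list(observed.values())
col_totals = [sum(r[j] for r in rows) for j in range(2)]
n = sum(col_totals)  # 1754 participants with demographic data

chi2 = 0.0
for prev, vac in rows:
    row_total = prev + vac
    for j, obs in enumerate((prev, vac)):
        expected = row_total * col_totals[j] / n  # e_ij = r_i * c_j / N
        chi2 += (obs - expected) ** 2 / expected

df = (len(rows) - 1) * (2 - 1)  # (rows - 1) * (cols - 1) = 2
```

Running this recovers χ2(2) ≈ 2.93, the value reported in the text.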
2.4 Experimental protocol
First, participants were instructed about the structure of the experiment (consisting of reading a few paragraphs and then answering some questions) and on how to use a -100/+100 scale to provide responses. Then, they were invited to read about a new medical advance (either the vaccine or the prevention test) and asked to consider the provided statement as true (Figure 1, left). Both statements simply described the medical advance, specifying that it was safe and efficient with negligible side effects. After reading the statement, participants were presented with a commentator’s opinion, delivered in a live broadcast. The gender of the commentator(s) was not made explicit, although the grammatical gender was always masculine (e.g., “un médecin”, in French). The commentator could vary in terms of their kind of expertise (interactional: a journalist; contributory and individual: a medical expert; contributory and collective: the community of medical experts) and in terms of their opinion (against, neutral, in favor, assertively in favor with respect to the medical advance). All statements were structured in the same way, with only a few adverbs and verbs varying across opinion conditions. Figure 1 provides the English translation of each experimental condition.
To summarize, the study manipulated three experimental variables according to a 2 × 3 × 4 design: there were 2 medical advances to which participants could be exposed, 3 levels of expertise of the commentator commenting on such advance, and 4 levels of opinion of that same commentator. The randomization process of the experimental conditions resulted in a balanced distribution of participants: 651, 679, and 654 respectively for the three levels of the commentator’s expertise; 481, 492, 508, and 503 respectively for the four levels of the commentator’s opinion. A chi-square goodness-of-fit test indicated that participants were evenly distributed across expertise levels, χ2(2, N = 1984) = 0.71, p = .7, and across opinion levels, χ2(3, N = 1984) = 0.88, p = .83, showing no meaningful imbalance in group sizes for either factor.
After reading about the commentator’s opinion, participants were asked to respond, on a continuous scale (from -100 to +100, where -100 corresponded to “not at all/extremely negative”, +100 to “maximal/extremely positive”, and 0 to a neutral answer), to three different questions. The dependent variables of the study were thus the answers provided, which were taken as an operational proxy for the participants’ trust (“How much do you trust the commentator’s opinion?”), personal viewpoint (“What’s your viewpoint on the medical advance?”), and willingness to use the tool (“Would you use such medical advance for yourself?”). Other questions were asked in a second section of the experiment, but their results are not analyzed in the current paper (information about these questions is available in the pre-registration). Lastly, all participants were also asked to provide, if they wished, an evaluation on a 5-point Likert scale of how much expertise and assertiveness impact, and should impact, their thoughts and behavior towards information in real life. Several demographic variables (age, gender, highest level of study) were also recorded. The reason why we recorded such variables was to evaluate their potential effect on participants’ responses: in fact, scientific literacy has been shown to increase sensitivity to perceived expertise [van Antwerpen et al., 2025], and higher levels of education are generally associated with greater trust in the scientific community [Gauchat, 2012].
As an optional question, we also assessed the degree of belief in conspiracy theories by asking participants to express their agreement (from -100 to +100) with the following five statements: 1) “tests of new substances are conducted on citizens without their consent”; 2) “many pieces of information about diseases or drugs are concealed from the public”; 3) “some scientists are financed by the government to manipulate evidence with the aim of supporting current policies”; 4) “some diseases have been deliberately disseminated in order to infect some populations”; 5) “some widespread diseases have been created in laboratories”. We then calculated, for each participant, their “conspiracy index” as the average response to these five questions. Lastly, we asked participants to place themselves on a continuum of political views from -100 (far left) to +100 (far right). Both high conspiracy beliefs and far-right political views have, in fact, been correlated with, among other things, distrust in science [McCright et al., 2013; Vranic et al., 2022] and vaccine hesitancy [Winter et al., 2022; Milošević Ðorđević et al., 2021; Backhaus et al., 2023]. Participants accessed the questionnaire via a link provided in the recruitment material. Only fully completed main questionnaires (the demographic one was not compulsory) were included in the analysis, thus excluding participants who did not provide one or more responses.
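The conspiracy index described above is simply the mean of the five agreement ratings. A minimal sketch (the respondent’s ratings below are hypothetical, and the block was optional, so a missing response set yields no index):

```python
from statistics import mean

def conspiracy_index(ratings):
    """Average of the five conspiracy-statement ratings (-100..+100 each);
    None if the optional block was skipped."""
    return mean(ratings) if ratings else None

# Hypothetical respondent's agreement with the five statements:
responses = [40, -10, 0, 25, 5]
index = conspiracy_index(responses)  # → 12.0
```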
3 Results
The results presented in this section are not separated as a function of the specific tool proposed to participants (i.e., a new vaccine against cancer or a new X-ray prevention test to detect the risk of heart attacks), since the results were equivalent in both conditions (i.e., the main results described below hold even when the analyses are restricted to only one topic; when this is not the case, further details are provided).
3.1 Recognition of expertise and assertiveness
One might wonder whether the different levels of our manipulated variables (expertise and opinions) were really distinguished by participants. In order to verify this, we looked at participants’ estimation of the expertise and the assertiveness of the commentator (which therefore served as an experimental manipulation check; Figure 2).
An ANOVA on participants’ perception of assertiveness found that they rated assertiveness differently depending on the opinion expressed by the commentator (F[3,1796] = 146.2, p < .001; a post-hoc Tukey test confirmed a significant difference between all conditions) but not depending on the commentator’s expertise (F[2,1796] = 0.3, p = .7). In other words, no commentator was considered to be inherently more assertive. On the other hand, the perception of expertise was affected both by the actual expertise of the commentator (F[2,1796] = 365.2, p < .001; post-hoc Tukey test: all comparisons significant at p < .001) and by their opinion (F[3,1796] = 72.6, p < .001; post-hoc Tukey test: all comparisons significant at p < .001, except for the difference between “in favor” and “very much in favor”). In other words, the more favorable the commentator was towards the medical advance, the higher the evaluation of their expertise.
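For readers less familiar with the machinery behind these tests, an ANOVA F statistic compares between-group to within-group variance. The sketch below computes a one-way F on made-up ratings (not the study’s data, and simpler than the two-way ANOVAs reported here, which were run in R):

```python
from statistics import mean

def one_way_f(groups):
    """F = MS_between / MS_within for a list of groups of ratings."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical ratings for three commentator conditions:
f_value = one_way_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # → 3.0
```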
3.2 Trust in the commentator’s opinion
First, we looked at participants’ trust in the commentator’s opinion as a function of their expertise and opinion. A two-way ANOVA revealed that both the commentator’s expertise (F[2,1972] = 191.5, p < .001) and their opinion (F[3,1972] = 222.7, p < .001), significantly affected trust. No significant interaction between the two factors was found (p > .05).
Specifically, higher expertise resulted in higher trust. A post-hoc Tukey test confirmed that trust differed significantly among all expertise levels (all p < .001; see Figure 3, left plot). The same effects were found when we looked at trust as a function of the commentator’s opinion: the more the commentator expressed favor towards the medical tool, the more participants trusted their opinion (all p < .001, except for the difference between “in favor” and “very much in favor”, which was not significant; Figure 3, right plot). Crucially, this additive effect of opinions on people’s trust was found even within each expertise level separately (see Figure 4: all pairwise contrasts among opinions, corrected for multiple comparisons, were significant for all expertise levels, except for “in favor” versus “very much in favor”).
3.3 Viewpoint on a medical advance and willingness to make use of it
We then considered participants’ viewpoint on the tool (Figure 5), finding that it was not affected by the commentator’s expertise (F[2,1972] = 2.5, p > .05). However, it was affected by the commentator’s opinion (F[3,1972] = 12.1, p < .001; no interaction between the two factors was found). Pairwise comparisons (post-hoc Tukey test) showed that the difference in expressed viewpoint was significant among most opinion levels (see Figure 5, right plot). Similar findings were obtained concerning the willingness to use the tool (Figure 6): no main effect of expertise was found (F[2,1972] = 1.8, p > .05), whereas there was a main effect of opinion (F[3,1972] = 4.3, p < .01) and a small interaction between the two factors (F[6,1972] = 2.4, p < .05). A post-hoc Tukey test revealed, however, that only the difference between “against” and the two “in favor” conditions was significant (p < .05; Figure 6, right plot). More specifically, when we conducted the pairwise comparisons for each expertise level (after correcting for multiple comparisons), we found that only for the highest level of expertise were there significant differences in the willingness to use the tool across opinion levels (all p < .05, except for “against” versus “neutral” and “in favor” versus “very much in favor”, which were not significant). Also, the willingness to use the tool was the only dependent variable for which the topic seemed to play a role: when we restricted the ANOVAs to only one topic, the commentator’s opinion was a significant factor only for participants in the “vaccine” condition (p < .05) and not for those in the “prevention test” condition.
Interestingly, the median response was overall higher for the viewpoint than for the willingness to use the tool, and this difference was significant (difference = 12.6 points; paired t-test: t(1983) = 17.74, p < .0001), as visible when comparing Figure 5 and Figure 6. In other words, people’s favor towards a new medical tool might not necessarily translate into an equal willingness to use it at the individual level (at least when both variables are measured on identical scales).
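The viewpoint-versus-willingness comparison relies on a paired t-test, i.e., t = mean(d) / (sd(d) / √n) computed over the within-participant difference scores d. A minimal sketch with hypothetical differences (not the study’s data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(diffs):
    """Paired t statistic over within-participant difference scores
    (stdev is the sample standard deviation, i.e., n - 1 denominator)."""
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical viewpoint-minus-willingness differences for 5 participants:
t_value = paired_t([1, 2, 3, 4, 5])  # → ~4.24
```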
3.4 Perceived and desired impact of the commentator’s expertise and assertiveness
In Figure 7, we plotted the distributions of participants’ responses (given on a 5-point Likert scale from “Not at all” to “Completely”, coded with numbers ranging from 1 to 5 for analysis purposes) about how much they thought expertise and assertiveness influence their perception (top row) and how much they thought these factors should influence their perception (bottom row). In both cases, participants reported that these factors influenced them more than they should, as confirmed by paired t-tests (expertise: t(1638) = 4.3, p < .001, average change = .11; assertiveness: t(1602) = 32.1, p < .001, average change = .9) and by Kolmogorov-Smirnov (KS) tests for differences in distributions (expertise: D = .09, p < .001; assertiveness: D = .33, p < .001). From these data and from Figure 7, it is clear that the major difference concerned the factor “assertiveness”: even the mode changed in this case, passing from “a lot” to “not at all”. When we compared the perceived influence of expertise and assertiveness (Figure 7, top row), they also significantly differed, both in terms of their average (paired t-test: t(1615) = 24, p < .001) and in terms of the responses’ distribution (KS test: D = .31, p < .001). The same was true for the evaluation of the influence that these factors should have on participants’ judgment (Figure 7, bottom row; paired t-test: t(1624) = 38.6, p < .001; KS test: D = .49, p < .001). As is clear from the figure, while the mode for the desired role of expertise was “a lot”, it was “not at all” for the desired role of assertiveness.
3.5 Perceived and actual impact of the commentator’s expertise and assertiveness
The perception of the importance of expertise and assertiveness could also translate into a larger effect of these factors on participants’ responses in the main questionnaire. In other words, participants who think that these factors have an influence might be more affected by them in their trust, viewpoint, and decision. To test this, we performed a moderation analysis in which, for the three main dependent variables (i.e., trust, viewpoint, and decision), we computed two regressions: one as a function of the commentator’s expertise, the perceived influence of expertise, and their interaction; the other as a function of the commentator’s opinion, the perceived influence of assertiveness, and their interaction. The significance of the interaction term indicates whether the effects of the commentator’s expertise and opinion on participants’ ratings are moderated by the individual perception of the influence of expertise and assertiveness in real life. Note that the commentator’s opinion could be against, neutral, in favor, or very much in favor. However, the “against” message was formulated in a highly assertive style that was comparable to the “very much in favor” condition, while differing only in the direction of the opinion (anti vs. pro). Thus, because this condition did not fit a monotonic continuum from low to high assertiveness, we excluded it from the present analyses and focused on the three remaining levels (neutral, in favor, very much in favor), which varied in assertiveness while expressing the same general stance.
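Each moderation model above regresses a rating on a manipulated factor, the corresponding perceived-influence score, and their product; the coefficient on the product term indexes moderation. The sketch below fits such a model by ordinary least squares on noise-free synthetic data (all variable names and values are illustrative, not the paper’s estimates, which were obtained in R):

```python
# Moderation model: rating ~ factor + perceived_influence + interaction,
# fit by solving the normal equations X'X b = X'y with Gaussian elimination.
def ols(X, y):
    k = len(X[0])
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    b = [sum(row[p] * yi for row, yi in zip(X, y)) for p in range(k)]
    for c in range(k):  # forward elimination with partial pivoting
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [rv - f * cv for rv, cv in zip(A[r], A[c])]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic data generated from known coefficients:
# rating = 2 + 3*expertise + 1*influence + 0.5*(expertise*influence)
X, y = [], []
for expertise in (0, 1, 2):
    for influence in (1, 2, 3, 4, 5):
        X.append([1.0, expertise, influence, expertise * influence])
        y.append(2 + 3 * expertise + 1 * influence + 0.5 * expertise * influence)

beta = ols(X, y)  # [intercept, expertise, influence, interaction]
```

On this noise-free data the fit recovers the generating coefficients, including the interaction term (0.5) that a moderation analysis tests for significance.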
For trust ratings, the interaction between expertise and perceived influence of expertise came close to significance, b = 3.38, t(1239) = 1.93, p = .053, indicating a tendency for the effect of expertise on trust to be stronger among participants who considered expertise to play a larger role. By contrast, for the model including the commentator’s opinion and perceived importance of assertiveness, the interaction between the two was clearly nonsignificant, b = -1.15, t(1220) = -0.68, p = .50, suggesting that the effect of assertiveness on trust did not systematically depend on participants’ beliefs about its influence.
For viewpoint ratings, there was no evidence that perceived influence of expertise modulated the impact of expertise, with a non-significant interaction of the two terms, b = 1.78, t(1239) = 1.45, p = .15. In contrast, perceived influence of assertiveness significantly moderated the effect of the commentator’s opinions on viewpoint ratings: the interaction term was significant, b = 3.23, t(1220) = 2.83, p = .005. This pattern indicates that higher levels of assertiveness in the message had a stronger impact on participants’ expressed viewpoint among those who believed assertiveness plays a larger role in real life.
A similar pattern emerged for individual decisions. The interaction between expertise and perceived role of expertise was not significant, b = 1.60, t(1239) = 1.02, p = .31, indicating no clear moderation. However, the interaction between the commentator’s opinion and perceived influence of assertiveness was significant, b = 5.43, t(1220) = 3.74, p < .001, showing that increased assertiveness in the message had a larger effect on the individual decisions of participants who regarded assertiveness as having a larger influence. The exact same analyses were conducted on participants’ perception of the role that expertise and assertiveness should normatively have in their ratings: only for trust ratings was the interaction between opinion and the normative perception of assertiveness significant, and it was negative (b = -3.44, t(1224) = -2.11, p < .05): in other words, the impact of the commentator’s assertiveness on trust was stronger for participants who wished that assertiveness played a smaller role.
3.6 Effect of demographics on participants’ responses
Lastly, we ran multiple regressions on participants’ trust, viewpoint and willingness to use the medical advance as a function of their gender, level of study and the two independent variables of interest (expertise and opinion). Non-binary participants and people with a level of study lower than high school were excluded from this analysis, given their very weak representation in the survey. We found that gender did not play a role in any of participants’ answers (all p values > .05), whereas the level of study did, but only for trust: specifically, participants with a doctorate reported on average lower trust levels than those holding only a high-school degree (b = -15.5, t(1713) = -3.17, p < .01). In other words, after controlling for the commentator’s expertise and opinion, and for the readers’ gender and highest level of study, only the most advanced level of formal education was associated with a modest reduction in trust. As explained in the methods, some participants also answered five questions aimed at testing their beliefs in conspiracy theories and their political views. We therefore ran the same multiple regressions on readers’ trust, viewpoint and willingness to use the tool, adding the “conspiracy index” and “political views” as predictors. Only the former was significant (p < .05) in all three regressions (btrust = -.07, bviewpoint = -.15, bwillingness = -.24), suggesting that greater endorsement of conspiracy theories is associated with lower trust in the commentator and worse attitudes towards medical advances. It is worth noting that the average conspiracy index was quite low (-42.8, on a scale from -100 to +100).
4 Discussion
We asked a sample of almost two thousand participants to read about a recently approved medical tool and, subsequently, to discover the opinion of an intermediary commentator about this medical advance. Such commentators could vary in both their expertise (journalist, medical expert, community of medical experts) and their opinion (against, neutral, in favor, very much in favor). Crucially, the initial statement had to be considered as true, so that we could specifically investigate the effect of the intermediary on people’s judgments. We found evidence that participants were affected by the expertise of the commentator, trusting the community of medical experts more than a single doctor, and a single doctor more than a journalist. In the words of Collins and Evans [2015], participants trusted a source of contributory expertise (medical doctors) more than a source of interactional expertise (journalists) and, within the former, they trusted collective expertise (a community of medical doctors) more than individual expertise (a single medical doctor). These findings should be seen as reassuring, since they suggest that people are sensitive to the implicit expertise of science commentators.
Participants were also affected by the commentator’s opinion: the more in favor the commentator was towards the medical tool, the more the participant trusted their opinion. This finding suggests that participants were highly sensitive to the fact that the medical advance was presented as already approved and backed up by research: indeed, they did not trust opinions that opposed such approval, and they trusted opinions that were assertively in favor of it more than neutral ones. Put differently, they were able to downweight opinions that disagreed with the institutional consensus, especially when these came from non-contributory sources of expertise (i.e., a journalist). These results are in line with a study showing that readers of scientific findings might become skeptical when negative opinions are presented in an overly certain way [Winter et al., 2015], probably because such assertive statements are often present in highly polarized contexts [Jucá et al., 2024]. Despite this fine-tuning of trust allocation, however, participants’ viewpoint towards the tool and their willingness to use it were also significantly, albeit slightly, affected by the commentator’s opinion. In other words, although participants trusted an opinion against an approved medical tool less, they were still affected by it when forming beliefs about the tool.
Crucially, participants were well aware of the commentator’s expertise and assertiveness (in expressing their opinion), and they even thought that assertiveness tends to bias their beliefs and influence them more than it should. These findings highlight participants’ metacognitive abilities towards their own thoughts and decisions in our experimental context, in line with previous results showing the importance of metacognition in health-related decisions [Fischer et al., 2023]. Interestingly, participants’ viewpoints and decisions were also more affected by the commentator’s assertiveness in expressing their opinion among those participants who considered assertiveness to play a larger role in real life.
The findings reviewed so far can be read in the light of the Elaboration Likelihood Model [Petty & Cacioppo, 1986]. According to this model, some pieces of information might be treated at a deeper level (thus engaging high-level reasoning processes, especially when given enough time to elaborate), whereas other pieces of information might be analyzed through peripheral heuristic cues (including the expertise and displayed excitement of the commentator), which affect the attitudes towards the information despite being orthogonal to the informational content itself. These surface characteristics of the message are also known as “persuasion cues” and have been shown, in many experimental contexts, to bias decision-making [Bodenhausen et al., 1994; Tiedens & Linton, 2001]. In our experimental setting, participants, despite being aware of the impact of these persuasion cues (i.e., being able to engage in a reflective analysis of both their expected and desired role in shaping their beliefs), still seemed to use the peripheral route when expressing their attitudes towards medical advances (such as in the first part of our questionnaire, in which they had to intuitively respond to a few questions on the basis of their feelings). The reason why this route was favored over a more central and deliberate one may lie in the lack of real implications faced by the participants of our study: they simply had, in a relatively short amount of time, to express their attitudes towards fictitious medical scenarios, without any real consequence for their lives. Whether our findings hold in a real context (such as adherence to vaccination programs) should be studied with methods that go beyond the scope of classic experimental psychology (including correlational analyses of real-world data on adherence to specific public health measures as a function of the communication styles to which citizens are exposed).
To summarize, the significance of our findings is that, at least in an experimental context, heuristic cues such as an external commentator’s expertise and opinion can influence readers’ beliefs about medical innovations that are presented as accepted and efficient, even when individuals are consciously aware of these cues and their potential biasing effects. This contributes to the body of literature [e.g., Gallagher & Updegraff, 2011; Pornpitakpan, 2004] showing that, beyond the factual content of health communications, the way information is framed — who communicates it and how strongly they advocate for it — can shape public attitudes and intentions to use new medical tools.
Our results are also novel in that they focus on a crucial aspect that is not often explicitly discussed when tailoring health-related communication [McCormack et al., 2013; Robinson et al., 2014]: the role of intermediaries commenting on medical advances. Despite the fact that, in our experimental manipulation, the medical advance was taken for granted (i.e., participants were instructed to consider the information as true), the opinions of the commentator still affected participants’ beliefs towards such approved tools. In other words, one might conclude that our sample of participants (despite being, for the most part, highly educated and, thus, not representative of the entire population) was gullible [Forgas & Baumeister, 2019]. Although this sort of interpretation is often a shortcut, since people might be far less gullible than they appear [for a review: Mercier, 2017], our results show that participants were, to some extent, quite easily influenced.
Studying the role of paraverbal and contextual cues in science and health communication is particularly relevant in the case of medical advances, in which people’s choices and beliefs can drastically affect the course of events, such as in times of a pandemic. Our results advocate for a better consideration of both paraverbal (such as assertiveness) and contextual (such as expertise) factors when communicating about health-related tools [Dhami & Mandel, 2022]. Ideally, science communication should be driven only by what the audience needs to know rather than by how scientists say it [Fischhoff & Davis, 2014]: if a medical tool is approved, the assertive opinion of a discussant (especially when against the tool) should (always ideally) not play a negative role in people’s decisions and beliefs. On the contrary, trust in experts’ opinions is a fundamental ally when it comes to accepting new medical advances [Larson et al., 2011]: for example, it has recently been shown that people who trust expert medical doctors are more likely to change their minds about their “misinformed” opinions concerning vaccination [Stecula et al., 2020].
More research is needed to better identify and disentangle the impact of the multiple factors that are disconnected from the primary informational content, such as the type of media relaying the information: in our study, we simply told participants that the commentators were giving their opinion about the medical advance in a “live broadcast”, with no further details. The specific medium, together with the frequency of exposure to such information and the commentator’s gender, age, attitude, and political alignment, are all likely to play a role in people’s beliefs. Lastly, it is important to note that our study does not provide any concrete solution to the excessive role played by intermediaries in people’s trust, viewpoints and actions. It would be interesting to know whether such an influence can be erased or mitigated through specific interventions. As we showed, people already know that they are affected by assertiveness more than they should be: capitalizing on this awareness might play a key role in tailoring adequate interventions. On the other hand, in specific contexts in which public health measures need to be followed, assertive attitudes and discourses might even be used as a tool to increase trust in such measures.
5 Limitations
It is important to highlight a few limitations of our study. First, as already mentioned, our sample was not representative of the general population. For example, it mainly consisted of women and young adults, and future studies should aim for a more representative pool of participants. Indeed, women seem to be more susceptible to persuasive strategies than men [e.g., Abdullahi et al., 2019], although we did not find any gender difference in trust in our sample. Furthermore, our participants were more educated than the average citizen. The fact that even our educated participants were affected by the peripheral cues of expertise and assertiveness, however, suggests that this effect might be even stronger for less educated readers: in line with this hypothesis, it has been shown that people low in “need for cognition” (a trait linked to lower educational attainment) tend to depend more on peripheral cues [Cacioppo & Petty, 1982]. Studying the impact of such cues on participants with a lower level of education is also important because they seem to be more vulnerable to misleading information [Edelson et al., 2024]. Also, our sample consisted only of French speakers who, for the large majority, declared living in France. Since trust in scientists has been shown to be significantly lower in France than in other countries [Algan et al., 2021], future studies should investigate whether the findings hold for citizens of other countries. We could speculate that, in countries where scientists are considered more trustworthy, the difference in trust towards scientists and journalists should be larger than in our sample.
Another limitation lies in our operationalization of both independent and dependent variables. Concerning the latter, we are aware of the limits of measuring trust or opinions with a single-item measure rather than with a validated scale. This methodological choice was made to keep the experiment as short and intuitive as possible, but future studies could explore the multidimensional structure of our constructs more finely: for example, trust could be measured through multi-item inventories [e.g., Hendriks et al., 2015], in order to investigate whether different dimensions of trust are more or less affected by the commentator’s expertise and opinions.
A third limitation concerns the specific experimental context we provided to participants: we focused on the health domain because of the many relevant implications of studying the role of paraverbal cues in persuasion, but health communication is just one subdomain of science communication. Future studies could use a similar experimental paradigm in the context of other, non-medical scientific advances or discoveries, in order to see whether our findings extend to other settings.
Lastly, as described in the results, we found that the more the commentator agreed with the medical advance, the more they were perceived as expert. In hindsight, this is not surprising: a commentator who is against a medical advance that has already been approved could be seen as lacking expertise about that advance. In other words, expertise (as directly extracted from the commentator’s job and as inferred from the opinion they express) might be all that matters: it would thus be the actual cue used (at least in an experimental sample like ours) when assessing health-related comments about medical advances. Future studies should directly investigate this aspect, or they could manipulate the experimental stimuli more finely so that the readers’ perception of the commentator’s expertise would depend only on their actual expertise (as expressed by their job title or study field) rather than being indirectly inferred from their (favorable or not, assertive or not) opinion.
As a sidenote, it is worth highlighting that, when the commentator’s message was highly assertive, the opinion could be either against or in favor of the medical tool. Only the three conditions “neutral”, “in favor” and “very much in favor” differed in terms of assertiveness along the same continuum (no assertiveness, mild assertiveness, high assertiveness), while all maintaining a non-negative opinion towards the tool. Crucially, participants in the “in favor” and “very much in favor” conditions did not differ in any of their ratings. Therefore, we can cautiously conclude that the commentator’s opinion influenced such ratings depending on its polarity (against, neutral or favorable) but not on the level of excitement expressed. Future studies should manipulate assertiveness more finely (for example by including more degrees of it, in both the “against” and the “favorable” conditions), in order to specifically isolate the role played by the level of excitement/endorsement of the commentator, independently of their general stance.
6 Conclusion
In sum, our large-scale online experiment suggests that both the perceived expertise and the opinion of intermediary commentators shape readers’ trust towards them and their beliefs about new medical advances, even when individuals are consciously aware of the role played by these persuasion cues. By bringing these peripheral cues into an ecological health-communication setting and showing that metacognitive insight does not fully prevent people from being affected by them, our work extends dual-process theories of persuasion [e.g., Petty & Cacioppo, 1986] to more realistic health-related contexts. Specifically, it suggests that the peripheral reasoning route based on paraverbal and contextual heuristics might be used (even by educated participants) when forming beliefs about medical tools discussed by commentators. These findings underscore the importance of choosing appropriate expert intermediaries in science communication and of calibrating assertive messaging — especially in public health campaigns — to support the uptake of medical innovations. Future research should investigate whether our findings hold in real-world health campaigns, explore additional source and medium characteristics, and test targeted interventions to attenuate undue assertiveness biases (or to amplify them, depending on the context and the desired goal). We believe that this line of work is likely to lead to more nuanced, evidence-based communication strategies for fostering public engagement with scientific advances.
Acknowledgments
This work was supported by the French government managed by the Agence Nationale de la Recherche as part of the France 2030 program (ANR-23-IAHU-0010).
References
Abdullahi, A. M., Oyibo, K., Orji, R., & Kawu, A. A. (2019). The influence of age, gender, and cognitive ability on the susceptibility to persuasive strategies. Information, 10(11), 352. https://doi.org/10.3390/info10110352
Algan, Y., Cohen, D., Davoine, E., Foucault, M., & Stantcheva, S. (2021). Trust in scientists in times of pandemic: panel evidence from 12 countries. Proceedings of the National Academy of Sciences, 118(40), e2108576118. https://doi.org/10.1073/pnas.2108576118
Ames, D. (2009). Pushing up to a point: assertiveness and effectiveness in leadership and interpersonal dynamics. Research in Organizational Behavior, 29, 111–133. https://doi.org/10.1016/j.riob.2009.06.010
Ames, D. R., & Flynn, F. J. (2007). What breaks a leader: the curvilinear relation between assertiveness and leadership. Journal of Personality and Social Psychology, 92(2), 307–324. https://doi.org/10.1037/0022-3514.92.2.307
Backhaus, I., Hoven, H., & Kawachi, I. (2023). Far-right political ideology and COVID-19 vaccine hesitancy: multilevel analysis of 21 European countries. Social Science & Medicine, 335, 116227. https://doi.org/10.1016/j.socscimed.2023.116227
Blödt, S., Müller-Nordhorn, J., Seifert, G., & Holmberg, C. (2021). Trust, medical expertise and humaneness: a qualitative study on people with cancer’s satisfaction with medical care. Health Expectations, 24(2), 317–326. https://doi.org/10.1111/hex.13171
Bodenhausen, G. V., Sheppard, L. A., & Kramer, G. P. (1994). Negative affect and social judgment: the differential impact of anger and sadness. European Journal of Social Psychology, 24(1), 45–62. https://doi.org/10.1002/ejsp.2420240104
Brossard, D. (2013). New media landscapes and the science information consumer. Proceedings of the National Academy of Sciences, 110(supplement_3), 14096–14101. https://doi.org/10.1073/pnas.1212744110
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131. https://doi.org/10.1037/0022-3514.42.1.116
Calisto, F. M., Fernandes, J., Morais, M., Santiago, C., Abrantes, J. M., Nunes, N., & Nascimento, J. C. (2023). Assertiveness-based agent communication for a personalized medicine on medical imaging diagnosis. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 13. https://doi.org/10.1145/3544548.3580682
Ciccione, L., Dehaene, G., & Dehaene, S. (2023). Outlier detection and rejection in scatterplots: do outliers influence intuitive statistical judgments? Journal of Experimental Psychology: Human Perception and Performance, 49(1), 129–144. https://doi.org/10.1037/xhp0001065
Collins, H., & Evans, R. (2015). Expertise revisited, part I — interactional expertise. Studies in History and Philosophy of Science Part A, 54, 113–123. https://doi.org/10.1016/j.shpsa.2015.07.004
Collins, H., Evans, R., & Weinel, M. (2016). Expertise revisited, part II: contributory expertise. Studies in History and Philosophy of Science Part A, 56, 103–110. https://doi.org/10.1016/j.shpsa.2015.07.003
Dhami, M. K., & Mandel, D. R. (2022). Communicating uncertainty using words and numbers. Trends in Cognitive Sciences, 26(6), 514–526. https://doi.org/10.1016/j.tics.2022.03.002
Diaz, J. A., Griffith, R. A., Ng, J. J., Reinert, S. E., Friedmann, P. D., & Moulton, A. W. (2002). Patients’ use of the internet for medical information. Journal of General Internal Medicine, 17(3), 180–185. https://doi.org/10.1046/j.1525-1497.2002.10603.x
Dunwoody, S. (2021). Science journalism: prospects in the digital age. In M. Bucchi & B. Trench (Eds.), Routledge handbook of public communication of science and technology (3rd ed., pp. 14–32). Routledge. https://doi.org/10.4324/9781003039242
Edelson, S. M., Reyna, V. F., Singh, A., & Roue, J. E. (2024). The psychology of misinformation across the lifespan. Annual Review of Developmental Psychology, 6, 425–454. https://doi.org/10.1146/annurev-devpsych-010923-093547
Fischer, H., Huff, M., Anders, G., & Said, N. (2023). Metacognition, public health compliance, and vaccination willingness. Proceedings of the National Academy of Sciences, 120(43), e2105425120. https://doi.org/10.1073/pnas.2105425120
Fischhoff, B., & Davis, A. L. (2014). Communicating scientific uncertainty. Proceedings of the National Academy of Sciences, 111(supplement_4), 13664–13671. https://doi.org/10.1073/pnas.1317504111
Forgas, J. P., & Baumeister, R. F. (Eds.). (2019). The social psychology of gullibility: fake news, conspiracy theories, and irrational beliefs. Routledge. https://doi.org/10.4324/9780429203787
Gallagher, K. M., & Updegraff, J. A. (2011). Health message framing effects on attitudes, intentions, and behavior: a meta-analytic review. Annals of Behavioral Medicine, 43(1), 101–116. https://doi.org/10.1007/s12160-011-9308-7
Gauchat, G. (2012). Politicization of science in the public sphere: a study of public trust in the United States, 1974 to 2010. American Sociological Review, 77(2), 167–187. https://doi.org/10.1177/0003122412438225
Geiger, N. (2022). Do people actually “listen to the experts”? A cautionary note on assuming expert credibility and persuasiveness on public health policy advocacy. Health Communication, 37(6), 677–684. https://doi.org/10.1080/10410236.2020.1862449
Goldman, A. I. (2018). Expertise. Topoi, 37(1), 3–10. https://doi.org/10.1007/s11245-016-9410-3
Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring laypeople’s trust in experts in a digital age: the Muenster Epistemic Trustworthiness Inventory (METI). PLoS ONE, 10(10), e0139309. https://doi.org/10.1371/journal.pone.0139309
Hoffman, R. R. (1998). How can expertise be defined? Implications of research from cognitive psychology. In R. Williams, W. Faulkner & J. Fleck (Eds.), Exploring expertise: issues and perspectives (pp. 81–100). Palgrave Macmillan. https://doi.org/10.1007/978-1-349-13693-3_4
Houtman, D., Vijlbrief, B., & Riedijk, S. (2021). Experts in science communication: a shift from neutral encyclopedia to equal participant in dialogue. EMBO Reports, 22(8), e52988. https://doi.org/10.15252/embr.202152988
Huang, H., Liu, S. Q., & Lu, Z. (2023). When and why language assertiveness affects online review persuasion. Journal of Hospitality & Tourism Research, 47(6), 988–1016. https://doi.org/10.1177/10963480221074280
Jäger, C. (2023). Epistemic authority. In J. Lackey & A. McGlynn (Eds.), The Oxford handbook of social epistemology (pp. 219–244). Oxford University Press.
Jennings, W., Stoker, G., Bunting, H., Valgarðsson, V. O., Gaskell, J., Devine, D., McKay, L., & Mills, M. C. (2021). Lack of trust, conspiracy beliefs, and social media use predict COVID-19 vaccine hesitancy. Vaccines, 9(6), 593. https://doi.org/10.3390/vaccines9060593
Jucá, A. M., Lotto, M., Cruvinel, A., & Cruvinel, T. (2024). Characterization of polarized scientific digital messages: a scoping review. JCOM, 23(08), A01. https://doi.org/10.22323/2.23080201
Kleis Nielsen, R., & Ganter, S. A. (2018). Dealing with digital intermediaries: a case study of the relations between publishers and platforms. New Media & Society, 20(4), 1600–1617. https://doi.org/10.1177/1461444817701318
Klucharev, V., Smidts, A., & Fernández, G. (2008). Brain mechanisms of persuasion: how ‘expert power’ modulates memory and attitudes. Social Cognitive and Affective Neuroscience, 3(4), 353–366. https://doi.org/10.1093/scan/nsn022
Larson, H. J., Cooper, L. Z., Eskola, J., Katz, S. L., & Ratzan, S. (2011). Addressing the vaccine confidence gap. The Lancet, 378(9790), 526–535. https://doi.org/10.1016/s0140-6736(11)60678-8
Lipsky, M. S., & Sharp, L. K. (2001). From idea to market: the drug approval process. The Journal of the American Board of Family Practice, 14(5), 362–367. https://www.jabfm.org/content/14/5/362
Luce, R. D., & von Winterfeldt, D. (1994). What common ground exists for descriptive, prescriptive, and normative utility theories? Management Science, 40(2), 263–279. https://doi.org/10.1287/mnsc.40.2.263
McCormack, L., Sheridan, S., Lewis, M., Boudewyns, V., Melvin, C. L., Kistler, C., Lux, L. J., Cullen, K., & Lohr, K. N. (2013). Communication and dissemination strategies to facilitate the use of health-related evidence [Evidence Report/Technology Assessment No. 213. AHRQ Publication No. 13(14)-E003-EF]. Agency for Healthcare Research and Quality.
McCright, A. M., Dentzman, K., Charters, M., & Dietz, T. (2013). The influence of political ideology on trust in science. Environmental Research Letters, 8(4), 044029. https://doi.org/10.1088/1748-9326/8/4/044029
Mendes, Á., Abreu, L., Vilar-Correia, M. R., & Borlido-Santos, J. (2017). “That should be left to doctors, that’s what they are there for!” — Exploring the reflexivity and trust of young adults when seeking health information. Health Communication, 32(9), 1076–1081. https://doi.org/10.1080/10410236.2016.1199081
Mercier, H. (2017). How gullible are we? A review of the evidence from psychology and social science. Review of General Psychology, 21(2), 103–122. https://doi.org/10.1037/gpr0000111
Mihelj, S., Kondor, K., & Štětka, V. (2022). Establishing trust in experts during a crisis: expert trustworthiness and media use during the COVID-19 pandemic. Science Communication, 44(3), 292–319. https://doi.org/10.1177/10755470221100558
Milošević Ðorđević, J., Mari, S., Vdović, M., & Milošević, A. (2021). Links between conspiracy beliefs, vaccine knowledge, and trust: anti-vaccine behavior of Serbian adults. Social Science & Medicine, 277, 113930. https://doi.org/10.1016/j.socscimed.2021.113930
Morgenstern, O. (1972). Descriptive, predictive and normative theory. Kyklos, 25(4), 699–714. https://doi.org/10.1111/j.1467-6435.1972.tb01077.x
Norton, R., & Warnick, B. (1976). Assertiveness as a communication construct. Human Communication Research, 3(1), 62–66. https://doi.org/10.1111/j.1468-2958.1976.tb00504.x
Omura, M., Maguire, J., Levett-Jones, T., & Stone, T. E. (2017). The effectiveness of assertiveness communication training programs for healthcare professionals and students: a systematic review. International Journal of Nursing Studies, 76, 120–128. https://doi.org/10.1016/j.ijnurstu.2017.09.001
Pershad, Y., Hangge, P. T., Albadawi, H., & Oklu, R. (2018). Social medicine: Twitter in healthcare. Journal of Clinical Medicine, 7(6), 121. https://doi.org/10.3390/jcm7060121
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: central and peripheral routes to attitude change. Springer. https://doi.org/10.1007/978-1-4612-4964-1
Pornpitakpan, C. (2004). The persuasiveness of source credibility: a critical review of five decades’ evidence. Journal of Applied Social Psychology, 34(2), 243–281. https://doi.org/10.1111/j.1559-1816.2004.tb02547.x
Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3), 341–365. https://doi.org/10.1017/s0140525x99002022
Regidor, E., de la Fuente, L., Gutiérrez-Fisac, J. L., de Mateo, S., Pascual, C., Sánchez-Payá, J., & Ronda, E. (2007). The role of the public health official in communicating public health information. American Journal of Public Health, 97(Supplement_1), S93–S97. https://doi.org/10.2105/ajph.2006.094623
Reif, A., & Guenther, L. (2021). How representative surveys measure public (dis)trust in science: a systematisation and analysis of survey items and open-ended questions. Journal of Trust Research, 11(2), 94–118. https://doi.org/10.1080/21515581.2022.2075373
Robinson, M. N., Tansil, K. A., Elder, R. W., Soler, R. E., Labre, M. P., Mercer, S. L., Eroglu, D., Baur, C., Lyon-Daniel, K., Fridinger, F., Sokler, L. A., Green, L. W., Miller, T., Dearing, J. W., Evans, W. D., Snyder, L. B., Kasisomayajula Viswanath, K., Beistle, D. M., Chervin, D. D., … the Community Preventive Services Task Force. (2014). Mass media health communication campaigns combined with health-related product distribution. American Journal of Preventive Medicine, 47(3), 360–371. https://doi.org/10.1016/j.amepre.2014.05.034
Rocha, Y. M., de Moura, G. A., Desidério, G. A., de Oliveira, C. H., Lourenço, F. D., & de Figueiredo Nicolete, L. D. (2023). The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. Journal of Public Health, 31(7), 1007–1016. https://doi.org/10.1007/s10389-021-01658-z
Rosenberg, H., Syed, S., & Rezaie, S. (2020). The Twitter pandemic: the critical role of Twitter in the dissemination of medical information and misinformation during the COVID-19 pandemic. CJEM, 22(4), 418–421. https://doi.org/10.1017/cem.2020.361
Scott, A. (2007). Peer review and the relevance of science. Futures, 39(7), 827–845. https://doi.org/10.1016/j.futures.2006.12.009
Shen, X. (2019). Medical experts as health knowledge providers. East Asian Pragmatics, 4(2), 263–291. https://doi.org/10.1558/eap.37686
Stecula, D. A., Kuru, O., & Jamieson, K. H. (2020). How trust in experts and media use affect acceptance of common anti-vaccination claims. Harvard Kennedy School Misinformation Review, 1(1). https://doi.org/10.37016/mr-2020-007
Stewart, C. (2020). Expertise and authority. Episteme, 17(4), 420–437. https://doi.org/10.1017/epi.2018.43
Stokes, D. (2013). Cognitive penetrability of perception. Philosophy Compass, 8(7), 646–663. https://doi.org/10.1111/phc3.12043
Tiedens, L. Z., & Linton, S. (2001). Judgment under emotional certainty and uncertainty: the effects of specific emotions on information processing. Journal of Personality and Social Psychology, 81(6), 973–988. https://doi.org/10.1037/0022-3514.81.6.973
Tversky, A. (1975). A critique of expected utility theory: descriptive and normative considerations. Erkenntnis, 9(2), 163–173. https://doi.org/10.1007/bf00226380
van Antwerpen, N., Green, E. B., Sturman, D., & Searston, R. A. (2025). The impacts of expertise, conflict, and scientific literacy on trust and belief in scientific disagreements. Scientific Reports, 15(1), 11869. https://doi.org/10.1038/s41598-025-96333-8
van der Linden, S. L., Clarke, C. E., & Maibach, E. W. (2015). Highlighting consensus among medical scientists increases public support for vaccines: evidence from a randomized experiment. BMC Public Health, 15(1), 1207. https://doi.org/10.1186/s12889-015-2541-4
van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D., & Maibach, E. W. (2015). The scientific consensus on climate change as a gateway belief: experimental evidence. PLoS ONE, 10(2), e0118489. https://doi.org/10.1371/journal.pone.0118489
Vranic, A., Hromatko, I., & Tonković, M. (2022). “I did my own research”: overconfidence, (dis)trust in science, and endorsement of conspiracy theories. Frontiers in Psychology, 13, 931865. https://doi.org/10.3389/fpsyg.2022.931865
Winter, S., Krämer, N. C., Rösner, L., & Neubaum, G. (2015). Don’t keep it (too) simple: how textual representations of scientific uncertainty affect laypersons’ attitudes. Journal of Language and Social Psychology, 34(3), 251–272. https://doi.org/10.1177/0261927x14555872
Winter, T., Riordan, B. C., Scarf, D., & Jose, P. E. (2022). Conspiracy beliefs and distrust of science predicts reluctance of vaccine uptake of politically right-wing citizens. Vaccine, 40(12), 1896–1903. https://doi.org/10.1016/j.vaccine.2022.01.039
About the authors
Lorenzo Ciccione is an associate professor of cognitive psychology at Paris 8 University and IHU Robert Debré (Paris, France).
E-mail: lorenzo.ciccione@cri-paris.org
Bluesky: @lorenzo-ciccione
Camille Lakhlifi is a behavioral scientist at the Interministerial Directorate for Public Transformation (Paris, France).
E-mail: camille.lakhlifi@gmail.com
Benjamin Rohaut is a professor of neurology at Sorbonne Université, APHP Pitié-Salpêtrière hospital & Paris Brain Institute — ICM (Paris, France).
E-mail: benjamin.rohaut@aphp.fr
Raphael Veil is a medical doctor at Caisse Nationale d’Assurance Maladie (Paris, France).
E-mail: raphael.veil@gmail.com