1 Introduction

Twitter is a microblogging service used daily by 199 million users [Firsching, 2021] from various backgrounds [Mislove et al., 2011], and it is reshaping the healthcare system [Hawn, 2009]. Laypeople regularly use Twitter to share health information about dementia [Robillard et al., 2013], tweet about their pain experiences [Heaivilin et al., 2011] and report their use of antibiotics [Scanfeld, Scanfeld and Larson, 2010]. Since health information is associated with specific language characteristics [Coppersmith, Dredze and Harman, 2014], Twitter data can be used to measure and track the evolution of health states within societies [Prieto et al., 2014]. Researchers, for example, have used Twitter data to predict heart disease mortality [Eichstaedt, Schwartz, Kern et al., 2015; Eichstaedt, Schwartz, Giorgi et al., 2018], Affordable Care Act marketplace enrollment [Wong et al., 2015], HIV prevalence [Ireland et al., 2015], obesity rates [Mitchell, Frank et al., 2013] and depression [Tsugawa et al., 2015]. Besides tracking health states, Twitter is used by scholars to communicate research findings [Veletsianos, 2012] and by governments to promote citizen engagement [Chen et al., 2020]. Furthermore, major health organizations like the American Heart Association and the American Cancer Society use Twitter for their health promotion and public engagement efforts [Park, Reber and Chon, 2016].

Since the COVID-19 pandemic hit the global community [World Health Organization, 2020], laypeople [Singh et al., 2020], politicians [Spahn, 2020] and scientists [Drosten, 2020] have increasingly turned to Twitter to talk about the pandemic and share health information. Communicating health information via Twitter seems to be a reasonable approach, since many people report that they use social media platforms like Facebook, YouTube, Twitter and Instagram as a regular source of news [Shearer and Mitchell, 2021]. However, not all information about the COVID-19 pandemic on Twitter is accurate [Kouzy et al., 2020; Sharma et al., 2020; Singh et al., 2020]. A recent study analyzed Tweets about the COVID-19 pandemic and found that 25% of them contained misinformation and 17% contained unverifiable information [Kouzy et al., 2020]. The spread of misinformation has gone so far that Twitter started to remove misleading Tweets about the COVID-19 pandemic that could encourage people to engage in harmful behavior [Peters, 2020]. Furthermore, scientists have started to develop nudging strategies to prevent people from sharing misinformation via Twitter [Pennycook et al., 2020]. In light of the increasing amount of misinformation about the COVID-19 pandemic on Twitter, it is essential to understand how laypeople decide whether information sources are trustworthy and whether their information is credible.

1.1 Evaluating online health information

Even though health information on the internet often contains misinformation [Kata, 2010; Keelan et al., 2007; Miles, Petrie and Steel, 2000; Pandey et al., 2010], laypeople typically turn to the internet as their first source of information [Fox and Duggan, 2013; Prestin, Vieux and Chou, 2015]. According to the content-source integration model [Stadtler, Winter et al., 2017; Stadtler and Bromme, 2014], laypeople decide whether health information is accurate by evaluating the credibility of the provided information (e.g., “Are the health claims logically coherent and compatible with my prior knowledge?”) and the trustworthiness of the information source (e.g., “Is the information source an expert in their field?”). Whether an information source is perceived as trustworthy depends on their ability/expertise (e.g., “Is the information source competent and experienced?”), benevolence (e.g., “Is the information source responsible and considerate?”) and integrity (e.g., “Is the information source honest and fair?”), and diverse inventories have been developed to measure these factors [Engelke, Hase and Wintterlin, 2019; Hendriks, Kienhues and Bromme, 2015; Mayer, Davis and Schoorman, 1995]. Since most people have only a bounded understanding of science and health claims can be highly complex, laypeople often face difficulties when evaluating the accuracy of health claims [Bromme and Goldman, 2014; Bromme and Thomm, 2016]. In such difficult situations, laypeople often base their evaluations on factors that surround the health claims, such as the source and the message style.

Recent studies have shown that laypeople use the professional background of an information source to evaluate the accuracy of health information. Unfamiliar health information on websites, for example, is deemed less credible if it is provided by a student (e.g., Tim Alster, a high school freshman) rather than a medical expert (e.g., Dr. William Blake, HIV specialist) [Eastin, 2001]. Another study found that health communicators in video lectures are deemed less trustworthy if their professional background suggests a potential conflict of interest [König and Jucks, 2019c]. Moreover, lobbyists, in comparison to scientists, are deemed less trustworthy when participating in scientific debates [König and Jucks, 2019a] and more manipulative when giving health advice in online health forums [König and Jucks, 2019b]. One theory that might help to explain these effects is the theory of epistemic authority [Kruglanski, Raviv et al., 2005; Kruglanski, Dechesne et al., 2009; Zagzebski, 2015]. The theory assumes that information sources vary in their epistemic authority, which can be influenced by their professional background, expertise and various other factors. If an information source has gained high epistemic authority in a specific domain, they can become “a source on whom an individual turns to obtain knowledge on various topics” [Kruglanski, Dechesne et al., 2009]. Hence, health information provided by medical experts might seem more credible because medical experts have more epistemic authority in the domain of medicine than non-medical experts. Another source factor that can influence the perceived credibility and expertise of communicators on social media is the perceived fit between the communicator and the communication topic [Breves et al., 2019]. In the context of Twitter, one study found that “when a professional source with many followers tweets, participants tend to perceive the content to be more credible than when a layperson source with many followers tweets” [Lee and Sundar, 2013].

Another important factor that influences the evaluation of online health information is the message style in which health claims are written. It has been shown that overly positive message styles can damage the trustworthiness of information sources providing health information in online health forums and the credibility of their information [König and Jucks, 2019b; König and Jucks, 2020]. Furthermore, in the context of scientific debates about medications, it has been shown that the use of aggressive message styles can backfire and harm trustworthiness and credibility ratings [König and Jucks, 2019a]. Besides influencing trustworthiness and credibility judgements, it has been shown that emotional writing and speaking styles can shape doctor-patient communication in internet forums [Bientzle et al., 2015], the instructional quality of podcasts [König, 2020] and risk perceptions [Flemming, Cress, Kimmig et al., 2018]. For example, one study found that after listening to an enthusiastic science communicator, participants rated the provided information as more interesting and exciting [König, 2020]. Moreover, the participants enjoyed the listening process more, had a higher motivation to learn more about the covered topic and evaluated the podcast host as more trustworthy.

1.2 Effects of professional background and message style on source trustworthiness, message credibility and behavioral intentions

Since the COVID-19 pandemic started, various people with diverse professional backgrounds have turned to Twitter to communicate urgent health information to the general public. These include politicians, like the health ministers of the United States of America [Azar, 2020] and the Federal Republic of Germany [Spahn, 2020], as well as scientists from widely known institutions, like Johns Hopkins University [Gardner, 2020] and the Charité in Berlin [Drosten, 2020]. From previous studies, it is known that politicians are typically perceived as being dishonest [Gallup, 2018]. Furthermore, it is known that politicians are perceived as less warm and less competent than professors [Fiske and Dupree, 2014]. Additionally, the fit between the scientist and the health topic might be higher than the fit between the politician and the communication topic, which might increase perceived source credibility [Breves et al., 2019]. Based on these findings, one might hypothesize that scientists could be more effective than politicians when communicating urgent COVID-19 health information to the general public via Twitter.

Furthermore, when communicating urgent information via Twitter, some people almost exclusively use capital letters when writing their messages. One prominent example who regularly tweets in capital letters is the 45th President of the United States of America [Trump, 2018]. Even though studies have started to look at the use of Tweets in capital letters in the context of political campaigns [Enli, 2017], no study has systematically investigated the specific reasons why people tweet in capital letters. In various online communities, however, tweeting in capital letters is typically interpreted as a form of shouting that is supposed to stress the importance and seriousness of the message [Strizver, 2020; Tschabitscher, 2021; Turk, 2018]. Even though various people tweet in capital letters to communicate COVID-19 health information to the general public [Trump, 2020], this strategy might backfire because it can harm processing fluency. Processing fluency may be defined as “the subjective experience of ease with which people process information”, and it can stem from perceptual as well as linguistic message features [Alter and Oppenheimer, 2009]. Since fluency perceptions are constantly available, people rely on them regularly when evaluating information [Greifeneder and Bless, 2007; Whittlesea and Leboe, 2003]. Messages that are hard to read [Reber and Schwarz, 1999] or delivered in bad audio quality [Newman and Schwarz, 2018], for example, are deemed less credible. Based on these findings, it can be argued that Tweets that almost exclusively rely on capital letters are more difficult to read and therefore decrease processing fluency. Hence, one might hypothesize that messages written in lower-case letters could be more effective than messages written in capital letters when communicating urgent COVID-19 health information to the general public via Twitter.

So far, no study has investigated whether an information source’s professional background (being a politician vs. being a scientist) and message style (tweeting in capital letters vs. tweeting in lower-case letters) influence the effectiveness of communicating COVID-19 health information via Twitter. To address this research gap, we conducted a 2 × 2 between-subject experiment. During the experiment, participants were shown a Twitter profile of a person called Andreas Bauer. Depending on the experimental condition, the Twitter profile stated that Andreas Bauer was either a politician (“Minister for Public Health in the Government of the Saarland”) or a scientist (“Professor of Public Health at Saarland University”). Furthermore, participants were shown a Tweet containing health information from Andreas Bauer. Depending on the experimental condition, the Tweet was written in either capital letters or lower-case letters. Subsequently, participants answered questions regarding the trustworthiness of the information source, the credibility of the provided information and their behavioral intentions. The procedure was designed to test the following hypotheses.

Hypothesis 1:
Scientists, in comparison to politicians, are more effective when communicating COVID-19 health information via Twitter, concerning source trustworthiness, message credibility and behavioral intentions.
Hypothesis 2:
Messages written in lower-case letters, in comparison to capital letters, are more effective when communicating COVID-19 health information via Twitter, concerning source trustworthiness, message credibility and behavioral intentions.

2 Methods

2.1 Sample

To recruit participants, we contacted people via email newsletters and social networking sites and asked them to take part in the experiment. As an incentive for participation, participants had the opportunity to enter a lottery and win one of multiple online shop vouchers. An a priori power analysis using G*Power (Faul et al. [2007]; specifications: test family = F tests; statistical test = ANOVA: fixed effects, special, main effects and interactions; type of power analysis = a priori; f = 0.15, α = 0.05, power = 0.85, numerator df = 1, number of groups = 4) indicated that a total of 401 participants were needed to detect a small to medium effect with satisfactory power. To compensate for possible participant exclusions, we oversampled slightly. A total of 439 participants completed the experiment and indicated at the end of the study that they had answered all questions honestly. Fifteen participants were excluded from data analysis because they stated that they faced technical problems during the study. Therefore, the final convenience sample contained 424 participants (262 females, 156 males, 6 diverse) with an average age of 26 years (M = 25.65, SD = 6.86).
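The reported sample size can be checked without G*Power by computing the power of a fixed-effects ANOVA test with one numerator degree of freedom directly from the noncentral F distribution. The following minimal Python sketch illustrates this calculation; the function and variable names are illustrative assumptions and it is not the original analysis script.

from scipy.stats import f as f_dist, ncf

def anova_power(n_total, effect_size_f=0.15, alpha=0.05, df_num=1, n_groups=4):
    # Power of one F test in a fixed-effects ANOVA (G*Power-style computation)
    df_den = n_total - n_groups
    ncp = (effect_size_f ** 2) * n_total              # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)    # critical F value under H0
    return 1 - ncf.cdf(f_crit, df_num, df_den, ncp)

n = 8
while anova_power(n) < 0.85:     # smallest total N reaching the requested power
    n += 1
print(n, round(anova_power(n), 3))   # should land near the reported N = 401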

2.2 Material and procedure

The 2 × 2 between-subject experiment was conducted online using the SoSci Survey platform (SoSci Survey GmbH, Munich, Germany) for data collection. In a first step, participants were informed about the general context of the study and the procedures of the upcoming experiment. After participants gave their informed consent to participate in the experiment, they provided demographic information and answered the control measures. Following this, participants were randomly assigned to one of the four experimental conditions and were shown a Twitter profile of a person called Andreas Bauer. Depending on the experimental condition, the Twitter profile stated that Andreas Bauer was either a politician (“Minister for Public Health in the Government of the Saarland”, see Figure 1) or a scientist (“Professor of Public Health at Saarland University”, see Figure 2). On the next page, participants were shown a Tweet containing health information from Andreas Bauer (“Important: You have contracted the new #CoronaVirus? Click here for my health advice to get better soon: andreasbauer1960.de/Gesundheitstip…”). Depending on the experimental condition, the Tweet was written in either capital letters (see Figure 3) or lower-case letters (see Figure 4). Subsequently, participants answered the dependent measures and the manipulation check questions. After this, participants were asked whether they had faced any technical problems during the study and whether they had answered all questions honestly. At the end of the study, participants were debriefed and had the opportunity to enter their email address to take part in the online shop voucher lottery.


Figure 1: Twitter profile of politician.

Figure 2: Twitter profile of scientist.

Figure 3: Tweet in capital letters.

Figure 4: Tweet in lower-case letters.


2.3 Control measures and manipulation check

A total of four control measures were included to assess whether the experimental groups differed in characteristics that could bias the study results. Participants indicated their agreement with three statements about their internet usage (“I regularly use the internet to read about scientific topics”), their belief in science (“I believe in science”) and their prior knowledge (“I know a lot about COVID-19”). Participants indicated their agreement on scales ranging from 1 (strongly disagree) to 7 (strongly agree). Furthermore, participants were asked “How often do you use Twitter?” and indicated their answers on a scale ranging from 1 (zero days a week) to 8 (seven days a week). As a manipulation check, participants were asked “What is Andreas Bauer’s profession?” and could choose between “Minister for Public Health in the Government of the Saarland”, “Professor of Public Health at Saarland University” and “I do not know”. Furthermore, they were asked “Did Andreas Bauer almost exclusively use capital letters in his Tweet?” and could choose between “Yes”, “No” and “I do not know”.

2.4 Dependent measures

2.4.1 Source trustworthiness

To assess how trustworthy the information source was perceived to be, the Muenster Epistemic Trustworthiness Inventory [Hendriks, Kienhues and Bromme, 2015] was used. Participants rated 15 items on scales ranging from 1 (not trustworthy at all) to 7 (very trustworthy). The items measured expertise (e.g., “unqualified – qualified”), benevolence (e.g., “immoral – moral”) and integrity (e.g., “insincere – sincere”). Since likability is frequently considered to be an additional subdimension of trustworthiness, participants indicated their agreement with the statement “I like Andreas Bauer” on a scale ranging from 1 (strongly disagree) to 7 (strongly agree).

2.4.2 Message credibility

To assess the credibility of the information provided in the Tweet, participants indicated their agreement with the statement “Andreas Bauer’s health advice is credible” on a scale ranging from 1 (strongly disagree) to 7 (strongly agree).

2.4.3 Behavioral intentions

To assess participants’ behavioral intentions after reading the Tweet, participants indicated their agreement with the statements “I would read Andreas Bauer’s health advice” and “I would share Andreas Bauer’s health advice via social media” on scales ranging from 1 (strongly disagree) to 7 (strongly agree).

3 Results

3.1 Control measures and manipulation check

For all analyses, the statistical software SPSS Statistics Version 26 (IBM Corp, Armonk, New York, United States) was used. Before analyzing the dependent measures, four one-way between-subject analyses of variance were conducted with experimental condition as the independent variable and the control measures as dependent variables to analyze whether the participants in the four experimental groups differed in aspects that could bias the study results. The results showed that the participants in the four experimental groups did not significantly differ with regard to their internet usage [F(3, 420) = 0.867, p = .458, ηp² = .006], their belief in science [F(3, 420) = 0.257, p = .856, ηp² = .002], their prior knowledge [F(3, 420) = 1.063, p = .364, ηp² = .008] and their Twitter usage [F(3, 420) = 1.100, p = .349, ηp² = .008]. Therefore, the four control measures were not included in further analyses. Of the 424 participants, 381 (89.9%) correctly remembered the professional background of Andreas Bauer and 354 (83.5%) correctly remembered whether he almost exclusively used capital letters in his Tweet. A total of 322 participants (75.9%) answered both manipulation check questions correctly. The relatively high remembrance rates suggest that the experimental manipulations worked as expected. As information seekers naturally differ in their attention to detail in real-world online settings and the experimental manipulations might not need to be consciously remembered to have an effect, all participants were included in data analyses.

3.2 Dependent measures

For the analyses of the dependent measures, two-way between-subject analyses of variance were conducted with professional background (being a politician vs. being a scientist) and message style (tweeting in capital letters vs. tweeting in lower-case letters) as independent variables. Table 1 shows the means and standard deviations of the dependent measures by professional background and message style.


Table 1: Means and standard deviations of the dependent measures by professional background and message style.
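The analyses reported below were run in SPSS; for readers working in open-source environments, the following minimal sketch shows how an equivalent 2 × 2 between-subject ANOVA with partial eta squared could be computed in Python with statsmodels. The data file and column names (background, style, expertise) are illustrative assumptions, not the original data or syntax.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("experiment_data.csv")  # hypothetical file, one row per participant

# Effect (sum-to-zero) coding combined with Type III sums of squares mirrors SPSS defaults
model = smf.ols("expertise ~ C(background, Sum) * C(style, Sum)", data=df).fit()
table = anova_lm(model, typ=3)

# Partial eta squared: SS_effect / (SS_effect + SS_residual)
ss_resid = table.loc["Residual", "sum_sq"]
table["eta_sq_partial"] = table["sum_sq"] / (table["sum_sq"] + ss_resid)
print(table[["F", "PR(>F)", "eta_sq_partial"]])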

3.2.1 Main effects of professional background

There were significant main effects of professional background on expertise [F(1, 420) = 5.679, p = .018, ηp² = .013], integrity [F(1, 420) = 4.727, p = .030, ηp² = .011] and benevolence [F(1, 420) = 4.181, p = .042, ηp² = .010]. However, there were no significant main effects of professional background on likability [F(1, 420) = 0.975, p = .324, ηp² = .002], credibility [F(1, 420) = 0.005, p = .943, ηp² < .001], reading intention [F(1, 420) = 0.093, p = .760, ηp² < .001] and sharing intention [F(1, 420) = 0.460, p = .498, ηp² = .001].

3.2.2 Main effects of message style

There were significant main effects of message style on expertise [F(1, 420) = 17.688, p < .001, ηp² = .040], integrity [F(1, 420) = 15.900, p < .001, ηp² = .036], benevolence [F(1, 420) = 18.474, p < .001, ηp² = .042], likability [F(1, 420) = 15.984, p < .001, ηp² = .037], credibility [F(1, 420) = 28.321, p < .001, ηp² = .063], reading intention [F(1, 420) = 26.660, p < .001, ηp² = .060] and sharing intention [F(1, 420) = 8.167, p = .004, ηp² = .019].

3.2.3 Interaction effects

The two factors of professional background and message style did not interact with each other significantly to influence expertise [F(1, 420) = 3.392, p = .066, ηp² = .008], integrity [F(1, 420) = 1.753, p = .186, ηp² = .004], benevolence [F(1, 420) = 0.752, p = .386, ηp² = .002], likability [F(1, 420) = 1.524, p = .218, ηp² = .004], credibility [F(1, 420) = 2.469, p = .117, ηp² = .006], reading intention [F(1, 420) = 2.278, p = .132, ηp² = .005] and sharing intention [F(1, 420) = 1.642, p = .201, ηp² = .004].

4 Discussion

The goal of the present study was to investigate whether an information source’s professional background (being a politician vs. being a scientist) and message style (tweeting in capital letters vs. tweeting in lower-case letters) influence the effectiveness of communicating COVID-19 health information via Twitter. It was hypothesized that scientists, in comparison to politicians, are more effective when communicating COVID-19 health information via Twitter. The results, however, only partly support this hypothesis. In line with the hypothesis, scientists were perceived as possessing more expertise than politicians. However, politicians were perceived as possessing more integrity and benevolence than scientists. Furthermore, the information source’s professional background did not influence his likability, the credibility of his health information or participants’ intention to read his health information and share it via social media. These results are surprising because previous studies have found that scientists are typically perceived as being more trustworthy than politicians. One reason for these results might lie in the operationalizations that were used in the current study. Studies that find that politicians are perceived as being less trustworthy typically ask general questions like “How trustworthy are politicians?” or “How much do you trust members of parliament?”. When confronted with such general questions, participants may base their evaluations on their knowledge about politics, which probably includes knowledge about various political scandals. Therefore, their evaluations may become more negative. In the current study, however, participants did not provide their opinion about politicians in general. Instead, they evaluated an unknown politician who was not connected to any political scandals and who was a minister for public health. Participants may have assumed that politicians have to possess integrity and benevolence to achieve such a high political position. To test this explanation, future studies could replicate the current study, but instead of introducing the politician as a minister for public health, they could introduce the politician as a member of parliament.

The second hypothesis stated that messages written in lower-case letters, in comparison to capital letters, are more effective when communicating COVID-19 health information via Twitter. In line with the hypothesis, information sources who tweeted in lower-case letters were perceived as more trustworthy. More specifically, they were perceived as possessing more expertise, integrity and benevolence. Furthermore, their health information was perceived as being more credible and participants were more willing to read their health information and share it via social media. Even though the results are in line with the hypothesis, additional research could explore whether processing fluency really is the driving force behind the observed effects. Future studies could replicate the current study but use a wider range of message characteristic manipulations. It could be varied, for example, whether 0%, 20%, 40%, 60%, 80% or 100% of the message is written in capital letters; a sketch of how such stimuli could be generated appears below. If processing fluency is the underlying mechanism of the observed effects, we would expect the negative evaluations to become more pronounced with increasing amounts of capital letters. This approach would, of course, require many more research participants. However, it could help to clarify whether processing fluency really is the underlying mechanism of the observed effects.
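As an illustration of such a graded manipulation, the following minimal Python sketch capitalizes a chosen proportion of the words in a message. The function name, the word-level approach and the random selection of words are assumptions for illustration and were not part of the present study, which used only the 0% and 100% endpoints.

import random

def capitalize_fraction(text, fraction, seed=42):
    # Return `text` with roughly `fraction` of its words set in capital letters;
    # the remaining words keep their original spelling.
    words = text.split()
    n_caps = round(fraction * len(words))
    caps_idx = set(random.Random(seed).sample(range(len(words)), n_caps))
    return " ".join(w.upper() if i in caps_idx else w for i, w in enumerate(words))

tweet = "Important: You have contracted the new #CoronaVirus? Click here for my health advice to get better soon:"
for p in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(capitalize_fraction(tweet, p))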

Future studies could also identify and manipulate factors that might modify the observed professional background and message style effects. It has been argued, for example, that the tentative nature of scientific information can influence credibility judgements [Bromme and Goldman, 2014; Flemming, Cress and Kimmerle, 2017]. In line with this argument, it seems to be a common finding in journalism research that “upon reading journalistic articles about novel scientific findings, readers who recognize the tentative nature of the findings rate the journalistic article that reports these findings as less credible” [Flemming, Kimmerle et al., 2020]. From a scientific point of view, these findings seem surprising because acknowledging the tentativeness of scientific information is a common and reasonable practice in academic communities [Hyland, 1996]. Nevertheless, these findings illustrate that diverse factors have the potential to influence credibility and trustworthiness judgements [Choi and Stvilia, 2015; Metzger and Flanagin, 2015; Pornpitakpan, 2004]. Therefore, future studies could explore whether stressing the tentativeness of the provided information modifies the message style effect. For example, one might argue that Tweets written in lower-case letters seem credible as long as they do not stress the tentativeness of the provided information. However, if Tweets stress the tentativeness of the provided information, readers might focus on the tentativeness and ignore the message style manipulations when making their credibility judgements.

4.1 Limitations and future research directions

Even though the current study provides valuable insights into the effects of professional background and message style on source trustworthiness, message credibility and behavioral intentions, there may be limits to the generalizability of the results. Two limitations, regarding the age of the study participants and the geographical location of the experiment, seem especially important. It is important to stress that we relied on a convenience sample that is not representative of the German population, which might limit the generalizability of the observed effects. With an average age of 26 years, for example, the study participants were relatively young. Since previous research suggests that the age of study participants might influence source monitoring, suggestibility to misinformation and credibility evaluations [Choi and Stvilia, 2015; Mitchell, Johnson and Mather, 2003], future research should replicate the current study with different age groups. It could be hypothesized, for example, that younger study participants are more critical when evaluating messages on Twitter because they are more familiar with modern communication services like Twitter and are therefore more aware of the high prevalence of misinformation on such services.

Another limitation could lie in the geographical location in which the current study took place. More specifically, countries have developed different civic epistemologies, which describe the ways in which societies evaluate and discuss knowledge claims [Jasanoff, 2005; Jasanoff, 2011]. In Germany, where the current study took place, discussions typically focus on “building communally crafted expert rationales, capable of supporting a policy consensus”, whereas in the United States, “information is typically generated by interested parties and tested in public through overt confrontation between opposing, interest laden points of view” [Jasanoff, 2011]. Therefore, study participants in Germany may prefer messages in lower-case letters because they may be seen as a more constructive way of communicating health information and reaching a consensus. In the United States, however, study participants may be more familiar with messages written in capital letters and therefore may react differently to them. Hence, it could be hypothesized that the observed message style effects are stronger in Germany than in the United States of America. Another factor that might have affected the results is the geographical location that was stated in the Twitter profile. Depending on the experimental condition, the Twitter profile stated that Andreas Bauer was either a politician (“Minister for Public Health in the Government of the Saarland”) or a scientist (“Professor of Public Health at Saarland University”). In both cases, the Twitter profile suggested that Andreas Bauer was located in the German state of Saarland. One might argue that the German state of Saarland is not typically associated with academic excellence and that scientists and politicians from this state might therefore seem less trustworthy in general. To test this hypothesis, future studies could investigate whether scientists and politicians from German states with renowned flagship universities (e.g., North Rhine-Westphalia, Saxony) seem more trustworthy than scientists and politicians from German states without any flagship universities (e.g., Mecklenburg-Vorpommern, Saarland) [Wissenschaftsrat, 2019].

4.2 Conclusion

When evaluating urgent health information communicated via Twitter, people base their judgements on the professional background of the information source and on the style of the message. If the message is written in lower-case letters instead of capital letters, people perceive the information source as possessing more expertise, more integrity and more benevolence. Furthermore, the health information is perceived as being more credible and people are more willing to read it and share it via social media. With regard to the professional background of the information source, scientists are perceived as possessing more expertise than politicians. However, politicians are perceived as possessing more integrity and benevolence than scientists.

References

Alter, A. L. and Oppenheimer, D. M. (2009). ‘Uniting the tribes of fluency to form a metacognitive nation’. Personality and Social Psychology Review 13 (3), pp. 219–235. https://doi.org/10.1177/1088868309341564 .

Azar, A. (2020). @HHSgov announced nearly $1 Billion in #CARESAct grants to support older adults and people with disabilities in the community during the #COVID19 outbreak . URL: https://twitter.com/SecAzar/status/1253460484407218176 .

Bientzle, M., Griewatz, J., Kimmerle, J., Küppers, J., Cress, U. and Lammerding-Koeppel, M. (2015). ‘Impact of scientific versus emotional wording of patient questions on doctor-patient communication in an internet forum: a randomized controlled experiment with medical students’. Journal of Medical Internet Research 17 (11), e268. https://doi.org/10.2196/jmir.4597 .

Breves, P. L., Liebers, N., Abt, M. and Kunze, A. (2019). ‘The perceived fit between Instagram influencers and the endorsed brand’. Journal of Advertising Research 59 (4), pp. 440–454. https://doi.org/10.2501/JAR-2019-030 .

Bromme, R. and Goldman, S. R. (2014). ‘The public’s bounded understanding of science’. Educational Psychologist 49 (2), pp. 59–69. https://doi.org/10.1080/00461520.2014.921572 .

Bromme, R. and Thomm, E. (2016). ‘Knowing who knows: laypersons’ capabilities to judge experts’ pertinence for science topics’. Cognitive Science 40 (1), pp. 241–252. https://doi.org/10.1111/cogs.12252 .

Chen, Q., Min, C., Zhang, W., Wang, G., Ma, X. and Evans, R. (2020). ‘Unpacking the black box: how to promote citizen engagement through government social media during the COVID-19 crisis’. Computers in Human Behavior 110, 106380. https://doi.org/10.1016/j.chb.2020.106380 .

Choi, W. and Stvilia, B. (2015). ‘Web credibility assessment: conceptualization, operationalization, variability, and models’. Journal of the Association for Information Science and Technology 66 (12), pp. 2399–2414. https://doi.org/10.1002/asi.23543 .

Coppersmith, G., Dredze, M. and Harman, C. (2014). ‘Quantifying mental health signals in Twitter’. In: Workshop on Computational Linguistics and Clinical Psychology: from Linguistic Signal to Clinical Reality (Baltimore, MD, U.S.A. 27th June 2014), pp. 51–60. https://doi.org/10.3115/v1/W14-3207 .

Drosten, C. (2020). Für alle, die noch immer nicht daran glauben: Übersterblichkeit durch #COVID19 in England . URL: https://twitter.com/c_drosten/status/1252601814891069440 .

Eastin, M. S. (2001). ‘Credibility assessments of online health information: the effects of source expertise and knowledge of content’. Journal of Computer-Mediated Communication 6 (4), JCMC643. https://doi.org/10.1111/j.1083-6101.2001.tb00126.x .

Eichstaedt, J. C., Schwartz, H. A., Giorgi, S., Kern, M. L., Park, G., Sap, M., Labarthe, D. R., Larson, E. E., Seligman, M. E. P. and Ungar, L. H. (2018). ‘More evidence that Twitter language predicts heart disease: a response and replication’. https://doi.org/10.31234/osf.io/p75ku .

Eichstaedt, J. C., Schwartz, H. A., Kern, M. L., Park, G., Labarthe, D. R., Merchant, R. M., Jha, S., Agrawal, M., Dziurzynski, L. A., Sap, M., Weeg, C., Larson, E. E., Ungar, L. H. and Seligman, M. E. P. (2015). ‘Psychological language on Twitter predicts county-level heart disease mortality’. Psychological Science 26 (2), pp. 159–169. https://doi.org/10.1177/0956797614557867 .

Engelke, K. M., Hase, V. and Wintterlin, F. (2019). ‘On measuring trust and distrust in journalism: reflection of the status quo and suggestions for the road ahead’. Journal of Trust Research 9 (1), pp. 66–86. https://doi.org/10.1080/21515581.2019.1588741 .

Enli, G. (2017). ‘Twitter as arena for the authentic outsider: exploring the social media campaigns of Trump and Clinton in the 2016 US presidential election’. European Journal of Communication 32 (1), pp. 50–61. https://doi.org/10.1177/0267323116682802 .

Faul, F., Erdfelder, E., Lang, A.-G. and Buchner, A. (2007). ‘G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences’. Behavior Research Methods 39 (2), pp. 175–191. https://doi.org/10.3758/BF03193146 .

Firsching, J. (2021). Twitter Statistiken 2021: Entwicklung Nutzerzahlen, Nutzerwachstum & Umsatz . URL: https://www.futurebiz.de/artikel/twitter-statistiken-nutzerzahlen/ .

Fiske, S. T. and Dupree, C. (2014). ‘Gaining trust as well as respect in communicating to motivated audiences about science topics’. Proceedings of the National Academy of Sciences 111 (Supplement 4), pp. 13593–13597. https://doi.org/10.1073/pnas.1317505111 .

Flemming, D., Cress, U. and Kimmerle, J. (2017). ‘Processing the scientific tentativeness of medical research: an experimental study on the effects of research news and user comments in online media’. Science Communication 39 (6), pp. 745–770. https://doi.org/10.1177/1075547017738091 .

Flemming, D., Cress, U., Kimmig, S., Brandt, M. and Kimmerle, J. (2018). ‘Emotionalization in science communication: the impact of narratives and visual representations on knowledge gain and risk perception’. Frontiers in Communication 3, 3. https://doi.org/10.3389/fcomm.2018.00003 .

Flemming, D., Kimmerle, J., Cress, U. and Sinatra, G. M. (2020). ‘Research is tentative, but that’s okay: overcoming misconceptions about scientific tentativeness through refutation texts’. Discourse Processes 57 (1), pp. 17–35. https://doi.org/10.1080/0163853X.2019.1629805 .

Fox, S. and Duggan, M. (2013). ‘Health Online 2013’. Pew Research Center . URL: http://www.pewinternet.org/2013/01/15/health-online-2013/ .

Gallup (2018). Honesty/ethics in professions . URL: http://news.gallup.com/poll/1654/Honesty-Ethics-Professions.aspx .

Gardner, L. (2020). We are tracking the 2019-nCoV spread in real-time . URL: https://twitter.com/TexasDownUnder/status/1220014483516592129 .

Greifeneder, R. and Bless, H. (2007). ‘Relying on accessible content versus accessibility experiences: the case of processing capacity’. Social Cognition 25 (6), pp. 853–881. https://doi.org/10.1521/soco.2007.25.6.853 .

Hawn, C. (2009). ‘Take two Aspirin and tweet me in the morning: how Twitter, Facebook, and other social media are reshaping health care’. Health Affairs 28 (2), pp. 361–368. https://doi.org/10.1377/hlthaff.28.2.361 .

Heaivilin, N., Gerbert, B., Page, J. E. and Gibbs, J. L. (2011). ‘Public health surveillance of dental pain via Twitter’. Journal of Dental Research 90 (9), pp. 1047–1051. https://doi.org/10.1177/0022034511415273 .

Hendriks, F., Kienhues, D. and Bromme, R. (2015). ‘Measuring laypeople’s trust in experts in a digital age: the Muenster Epistemic Trustworthiness Inventory (METI)’. PLoS ONE 10 (10), e0139309. https://doi.org/10.1371/journal.pone.0139309 .

Hyland, K. (1996). ‘Talking to the academy: forms of hedging in science research articles’. Written Communication 13 (2), pp. 251–281. https://doi.org/10.1177/0741088396013002004 .

Ireland, M. E., Schwartz, H. A., Chen, Q., Ungar, L. H. and Albarracín, D. (2015). ‘Future-oriented tweets predict lower county-level HIV prevalence in the United States’. Health Psychology 34 (Suppl), pp. 1252–1260. https://doi.org/10.1037/hea0000279 .

Jasanoff, S. (2005). Designs on nature: science and democracy in Europe and the United States. Princeton, NJ, U.S.A.: Princeton University Press. https://doi.org/10.1515/9781400837311 .

— (2011). ‘Cosmopolitan knowledge: climate science and global civic epistemology’. In: The Oxford handbook of climate change and society. Ed. by J. S. Dryzek, R. B. Norgaard and D. Schlosberg. Oxford, U.K.: Oxford University Press, pp. 129–143. https://doi.org/10.1093/oxfordhb/9780199566600.003.0009 .

Kata, A. (2010). ‘A postmodern Pandora’s box: anti-vaccination misinformation on the Internet’. Vaccine 28 (7), pp. 1709–1716. https://doi.org/10.1016/j.vaccine.2009.12.022 .

Keelan, J., Pavri-Garcia, V., Tomlinson, G. and Wilson, K. (2007). ‘YouTube as a source of information on immunization: a content analysis’. JAMA: Journal of the American Medical Association 298 (21), pp. 2482–2484. https://doi.org/10.1001/jama.298.21.2482 .

König, L. (2020). ‘Podcasts in higher education: teacher enthusiasm increases students’ excitement, interest, enjoyment, and learning motivation’. Educational Studies . https://doi.org/10.1080/03055698.2019.1706040 .

König, L. and Jucks, R. (2019a). ‘Hot topics in science communication: aggressive language decreases trustworthiness and credibility in scientific debates’. Public Understanding of Science 28 (4), pp. 401–416. https://doi.org/10.1177/0963662519833903 .

— (2019b). ‘Influence of enthusiastic language on the credibility of health information and the trustworthiness of science communicators: insights from a between-subject web-based experiment’. Interactive Journal of Medical Research 8 (3), e13619. https://doi.org/10.2196/13619 .

— (2019c). ‘When do information seekers trust scientific information? Insights from recipients’ evaluations of online video lectures’. International Journal of Educational Technology in Higher Education 16, 1. https://doi.org/10.1186/s41239-019-0132-7 .

— (2020). ‘Effects of positive language and profession on trustworthiness and credibility in online health advice: experimental study’. Journal of Medical Internet Research 22 (3), e16685. https://doi.org/10.2196/16685 .

Kouzy, R., Abi Jaoude, J., Kraitem, A., El Alam, M. B., Karam, B., Adib, E., Zarka, J., Traboulsi, C., Akl, E. W. and Baddour, K. (2020). ‘Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on Twitter’. Cureus 12 (3), e7255. https://doi.org/10.7759/cureus.7255 .

Kruglanski, A. W., Dechesne, M., Orehek, E. and Pierro, A. (2009). ‘Three decades of lay epistemics: the why, how, and who of knowledge formation’. European Review of Social Psychology 20 (1), pp. 146–191. https://doi.org/10.1080/10463280902860037 .

Kruglanski, A. W., Raviv, A., Bar-Tal, D., Raviv, A., Sharvit, K., Ellis, S., Bar, R., Pierro, A. and Mannetti, L. (2005). ‘Says who?: Epistemic authority effects in social judgment’. Advances in Experimental Social Psychology 37, pp. 345–392. https://doi.org/10.1016/S0065-2601(05)37006-7 .

Lee, J. Y. and Sundar, S. S. (2013). ‘To tweet or to retweet? That is the question for health professionals on Twitter’. Health Communication 28 (5), pp. 509–524. https://doi.org/10.1080/10410236.2012.700391 .

Mayer, R. C., Davis, J. H. and Schoorman, F. D. (1995). ‘An integrative model of organizational trust’. The Academy of Management Review 20 (3), pp. 709–734. https://doi.org/10.2307/258792 .

Metzger, M. J. and Flanagin, A. J. (2015). ‘Psychological approaches to credibility assessment online’. In: The handbook of the psychology of communication technology. Ed. by S. S. Sundar. John Wiley & Sons, pp. 445–466. https://doi.org/10.1002/9781118426456.ch20 .

Miles, J., Petrie, C. and Steel, M. (2000). ‘Slimming on the Internet’. Journal of the Royal Society of Medicine 93 (5), pp. 254–257. https://doi.org/10.1177/014107680009300510 .

Mislove, A., Lehmann, S., Ahn, Y.-Y., Onnela, J.-P. and Rosenquist, J. N. (2011). ‘Understanding the demographics of Twitter users’. In: Fifth International AAAI Conference on Weblogs and Social Media (Barcelona, Spain, 17th–21st July 2011), pp. 554–557.

Mitchell, K. J., Johnson, M. K. and Mather, M. (2003). ‘Source monitoring and suggestibility to misinformation: adult age-related differences’. Applied Cognitive Psychology 17 (1), pp. 107–119. https://doi.org/10.1002/acp.857 .

Mitchell, L., Frank, M. R., Harris, K. D., Dodds, P. S. and Danforth, C. M. (2013). ‘The geography of happiness: connecting Twitter sentiment and expression, demographics, and objective characteristics of place’. PLoS ONE 8 (5), e64417. https://doi.org/10.1371/journal.pone.0064417 .

Newman, E. J. and Schwarz, N. (2018). ‘Good sound, good research: how audio quality influences perceptions of the research and researcher’. Science Communication 40 (2), pp. 246–257. https://doi.org/10.1177/1075547018759345 .

Pandey, A., Patni, N., Singh, M., Sood, A. and Singh, G. (2010). ‘YouTube as a source of information on the H1N1 influenza pandemic’. American Journal of Preventive Medicine 38 (3), E1–E3. https://doi.org/10.1016/j.amepre.2009.11.007 .

Park, H., Reber, B. H. and Chon, M.-G. (2016). ‘Tweeting as health communication: health organizations’ use of Twitter for health promotion and public engagement’. Journal of Health Communication 21 (2), pp. 188–198. https://doi.org/10.1080/10810730.2015.1058435 .

Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G. and Rand, D. (2020). ‘Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy nudge intervention’. https://doi.org/10.31234/osf.io/uhbk9 .

Peters, J. (2020). ‘Twitter will remove misleading COVID-19-related tweets that could incite people to engage in ‘harmful activity’’. The Verge . URL: https://www.theverge.com/2020/4/22/21231956/twitter-remove-covid-19-tweets-call-to-action-harm-5g .

Pornpitakpan, C. (2004). ‘The persuasiveness of source credibility: a critical review of five decades’ evidence’. Journal of Applied Social Psychology 34 (2), pp. 243–281. https://doi.org/10.1111/j.1559-1816.2004.tb02547.x .

Prestin, A., Vieux, S. N. and Chou, W.-y. S. (2015). ‘Is online health activity alive and well or flatlining? Findings from 10 years of the Health Information National Trends Survey’. Journal of Health Communication 20 (7), pp. 790–798. https://doi.org/10.1080/10810730.2015.1018590 .

Prieto, V. M., Matos, S., Álvarez, M., Cacheda, F. and Oliveira, J. L. (2014). ‘Twitter: a good place to detect health conditions’. PLoS ONE 9 (1), e86191. https://doi.org/10.1371/journal.pone.0086191 .

Reber, R. and Schwarz, N. (1999). ‘Effects of perceptual fluency on judgments of truth’. Consciousness and Cognition 8 (3), pp. 338–342. https://doi.org/10.1006/ccog.1999.0386 .

Robillard, J. M., Johnson, T. W., Hennessey, C., Beattie, B. L. and Illes, J. (2013). ‘Aging 2.0: health information about dementia on Twitter’. PLoS ONE 8 (7), e69861. https://doi.org/10.1371/journal.pone.0069861 .

Scanfeld, D., Scanfeld, V. and Larson, E. L. (2010). ‘Dissemination of health information through social networks: Twitter and antibiotics’. American Journal of Infection Control 38 (3), pp. 182–188. https://doi.org/10.1016/j.ajic.2009.11.004 .

Sharma, K., Seo, S., Meng, C., Rambhatla, S., Dua, A. and Liu, Y. (2020). ‘Coronavirus on social media: analyzing misinformation in Twitter conversations’. arXiv: 2003.12309 .

Shearer, E. and Mitchell, A. (2021). ‘News use across social media platforms in 2020’. Pew Research Center . URL: https://www.journalism.org/wp-content/uploads/sites/8/2021/01/PJ_2021.01.12_News-and-Social-Media_FINAL.pdf .

Singh, L., Bansal, S., Bode, L., Budak, C., Chi, G., Kawintiranon, K., Padden, C., Vanarsdall, R., Vraga, E. and Wang, Y. (2020). ‘A first look at COVID-19 information and misinformation sharing on Twitter’. arXiv: 2003.13907 .

Spahn, J. (2020). Besuche wie der in der Uniklinik Gießen-Marburg sind wichtig, um zu sehen, wo wir nachbessern müssen . URL: https://twitter.com/jensspahn/status/1250095485282594820 .

Stadtler, M. and Bromme, R. (2014). ‘The content-source integration model: a taxonomic description of how readers comprehend conflicting scientific information’. In: Processing inaccurate information: theoretical and applied perspectives from cognitive science and the educational sciences. Ed. by D. N. Rapp and J. L. G. Braasch. Cambridge, MA, U.S.A.: MIT Press, pp. 379–402.

Stadtler, M., Winter, S., Scharrer, L., Thomm, E., Krämer, N. and Bromme, R. (2017). ‘Selektion, Integration und Evaluation’. Psychologische Rundschau 68 (3), pp. 177–181. https://doi.org/10.1026/0033-3042/a000361 .

Strizver, I. (2020). ‘ALL CAPS: to set or not to set?’ Fonts.com . URL: https://www.fonts.com/content/learning/fyti/situational-typography/all-caps .

Trump, D. J. (2018). To Iranian President Rouhani . URL: https://www.thetrumparchive.com .

— (2020). WE CANNOT LET THE CURE BE WORSE THAN THE PROBLEM ITSELF . URL: https://www.thetrumparchive.com .

Tschabitscher, H. (2021). ‘Writing in all caps is like shouting’. Lifewire . URL: https://www.lifewire.com/why-not-to-write-in-all-caps-1173242 .

Tsugawa, S., Kikuchi, Y., Kishino, F., Nakajima, K., Itoh, Y. and Ohsaki, H. (2015). ‘Recognizing depression from Twitter activity’. In: CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea, 18th–23rd April 2015). New York, NY, U.S.A.: Association for Computing Machinery, pp. 3187–3196. https://doi.org/10.1145/2702123.2702280 .

Turk, V. (2018). ‘Why Donald Trump’s all-caps tweet seems REALLY SHOUTY AND SCARY’. WIRED . URL: https://www.wired.co.uk/article/donald-trump-twitter-iranian-president-all-caps .

Veletsianos, G. (2012). ‘Higher education scholars’ participation and practices on Twitter’. Journal of Computer Assisted Learning 28 (4), pp. 336–349. https://doi.org/10.1111/j.1365-2729.2011.00449.x .

Whittlesea, B. W. A. and Leboe, J. P. (2003). ‘Two fluency heuristics (and how to tell them apart)’. Journal of Memory and Language 49 (1), pp. 62–79. https://doi.org/10.1016/S0749-596X(03)00009-3 .

Wissenschaftsrat (2019). Förderlinie Exzellenzuniversitäten: Gesamtliste der geförderten Universitäten und des Universitätsverbunds . URL: https://www.wissenschaftsrat.de/download/2019/ExStra_Entscheidung.pdf?__blob=publicationFile&v=1 .

Wong, C. A., Sap, M., Schwartz, A., Town, R., Baker, T., Ungar, L. and Merchant, R. M. (2015). ‘Twitter sentiment predicts affordable care act marketplace enrollment’. Journal of Medical Internet Research 17 (2), e51. https://doi.org/10.2196/jmir.3812 .

World Health Organization (2020). Coronavirus disease 2019 (COVID-19): Situation Report – 93 .

Zagzebski, L. T. (2015). Epistemic authority: a theory of trust, authority, and autonomy in belief. New York, NY, U.S.A.: Oxford University Press.

Authors

Dr. Lars König is a psychologist and an enthusiastic science communicator. Currently, his work focuses on science and health communication, persuasion in online environments, and the strategic design of digital learning environments. His research has appeared in the Journal of Medical Internet Research and Public Understanding of Science, among others. ORCID: https://orcid.org/0000-0003-1450-8449. E-mail: forschung@charakter-manufaktur.de.

Dr. Priska Linda Breves is a research fellow at the Institute of Human-Computer-Media, Department of Media and Business Communication, University of Würzburg, Germany. She studied media communication at the University of Würzburg and Columbia University, New York. Her interests are strategic and persuasive communication. Her research has appeared in the International Journal of Advertising, and Computers in Human Behavior, among others.
ORCID: https://orcid.org/0000-0002-4074-8027. E-mail: priska.breves@uni-wuerzburg.de .