1 Context

The definition of science literacy has expanded from individuals' ability to understand science [American Association for the Advancement of Science (AAAS), 1990] to also include the ability to apply that knowledge, along with perceptions of and trust in science [National Academies of Sciences, Engineering and Medicine (NAS), 2016]. Recent research has concluded the public has an adequate understanding of scientific topics; however, knowledge gaps have been identified across demographic characteristics such as education and gender [Funk and Kehaulani Goo, 2015; National Science Board (NSB), 2016]. These knowledge gaps underscore the need for targeted science literacy efforts.

While the emergence of digital media has paved the way for scientists to directly communicate with stakeholders to increase science literacy [AAAS, 2017 ], it also allows individuals to pick and choose where they receive information [Prior, 2007 ]. This selective media exposure is linked to increased knowledge gaps [Prior, 2007 ] due to people seeking information that reaffirms their own beliefs [Scheufele et al., 2006 ]. Another side effect of selective media exposure has been an increase in diverging opinions between the public and scientists related to topics like genetically engineered food, use of pesticides, climate change, and animal research [Funk and Rainie, 2015 ]. Some members of the public can experience strong emotional responses to these types of topics due to political, cultural, and religious beliefs [Priest, 2008 ], which would also feed this cycle of selective media exposure.

These differences in attitude indicate an apparent schism between general members of the public and the scientific community regarding their trust in science [Mooney, 2012]. In addition to skepticism toward science, the public has lost faith in the contributions of higher education [Fingerhut, 2017]. Skepticism toward science and higher education could lead to problems if the public makes civic, policy, and purchasing decisions about scientific issues based on values and opinions rather than facts and evidence [Fingerhut, 2017; Mooney, 2012; Nelson, 1999]. Science communication, or two-way engagement between scientists and the public, has been identified as one way to address issues related to science literacy [Dudo, 2013; Pearson, 2001].

Two-way engagement is critical for science communication [Dudo, 2013; Pearson, 2001], which is why it is also important to understand how scientists feel about the public and science communication. Llorente et al. [2019] concluded that scientists in Spain believed the public lacked substantive knowledge about science, despite also believing the public was interested in science. Interestingly, the scientists in the study believed the public cared less about science than the public had indicated in prior research, pointing to a gap in understanding between the two groups [Llorente et al., 2019]. Meanwhile in the United States, scientists have held favorable views toward the public; however, these positive attitudes did not predict their engagement in science communication [Besley, 2015].

While a number of factors may influence scientists' engagement in science communication, one potential challenge that has been identified is how academic leaders or peers value science communication [Dunwoody, 1986; Lundy et al., 2006]. Moran et al. [2020] explored the concept of research culture and determined that scientists held negative views of the culture due to increased levels of competition and a lack of collaboration between peers. Working within this negative culture hindered research creativity and facilitated negative interactions with leadership [Moran et al., 2020]. The researchers concluded a negative research culture would not be sustainable long-term and would lead to decreased engagement in research [Moran et al., 2020]. Given the likely impact of research culture on research engagement [Moran et al., 2020], it is plausible that societal pressures and cultural norms influence scientists' engagement in science communication as well. Therefore, the purpose of this study was to understand how the spiral of silence accounted for university faculty members' engagement in effective science communication.

1.1 Spiral of silence

The spiral of silence served as the framework for this research because it addresses the role of social pressures in engagement in a behavior or activity [Noelle-Neumann, 1974]. People want to belong and be accepted in a group, and if their opinion matches the group's opinion, they will share it. However, if their opinion differs, people may elect to remain silent to avoid isolation [Noelle-Neumann, 1993]. Essentially, public opinion is contingent on who speaks up and who remains silent, a dynamic Noelle-Neumann [1993] termed the spiral of silence.

As Noelle-Neumann [1974] researched the spiral of silence further, she developed a model to predict a person's willingness to speak out on a topic. The variables included in this model were attitude, perceived opinion of others' attitudes, and perception of future trends in attitudes. Attitude was predicted to have a strong influence on willingness to express an opinion when that attitude was congruent with perceptions of others' attitudes and perceptions of future trends [Noelle-Neumann, 1974]. Noelle-Neumann [1974] noted, however, that the perceived distribution of public opinion did not necessarily reflect actual public opinion. A divergence between perceived and actual opinion indicated the perceived majority opinion had been overestimated because it was displayed more prominently in public than the actual majority opinion. Noelle-Neumann [1974] also hypothesized that current and future opinions toward a subject were positively correlated; a weaker correlation would indicate public opinion was changing. In instances where current and future trends were not aligned, future trends were predicted to be the stronger indicator of willingness to express the attitude [Noelle-Neumann, 1974].

Research has explored the effects of the spiral of silence online and in relation to scientific topics. Researchers have linked the spiral of silence to the emergence of partisan news, determining that conservatives who predominantly viewed conservative news channels perceived the public to mostly hold conservative views, and the same was true for liberals [Tsfati, Jomini Stroud and Chotiner, 2014]. Even in online chatrooms where anonymity could be assumed, the spiral of silence was present in conversations related to genetically engineered food, and people conformed their posts to the majority opinion [Kim, Kim and Oh, 2014]. However, researchers in Germany concluded people did not experience a fear of isolation when discussing opinions related to climate change, likely because the topic was not controversial in that country [Porten-Cheé and Eilders, 2015].

Limited research has examined faculty's engagement in science communication using the spiral of silence theory, yet some researchers' findings have indicated a potential use of the theory in this context. Lundy et al. [2006] found faculty did not believe their peers prioritized science communication. If a faculty member believed the peers in their discipline did not consider science communication important, they might be less willing to engage in science communication, or vice versa [Noelle-Neumann, 1974]. Additionally, faculty have faced criticism from their peers when participating in science communication [Dunwoody, 1986], which would also likely influence their public engagement. However, Besley et al. [2018] determined that scientists' perceptions of other scientists' engagement in science communication were not predictive of their own engagement. The researchers concluded these norms may not be as important as other researchers had suggested [Poliakoff and Webb, 2007] and recommended instead exploring how perceptions of those outside academia might influence engagement in science communication [Besley et al., 2018].

Brüggemann, Lörcher and Walter [ 2020 ] proposed that science communication has changed in the past decade, which may account for the shifting influence of norms on science communication [Besley et al., 2018 ; Poliakoff and Webb, 2007 ]. According to Brüggemann, Lörcher and Walter [ 2020 ], normal science communication relies on journalists to translate science to general audiences while scientists focus on the research. When scientists take an active role in science communication, they are deviating from the historically passive role of scientists in science communication [Bucchi, 1996 ]. Due to changes in the media and society’s increased polarization, science communication has had to evolve over the past decade to actively engage scientists in what Brüggemann, Lörcher and Walter [ 2020 ] referred to as post-normal science communication. With these evolving expectations for scientists’ engagement in science communication, the role of digital media [Prior, 2007 ], and the influence of research cultures [Moran et al., 2020 ], it is important to understand if public engagement is still considered a deviation from academic norms, which would lead to increased pressures from the spiral of silence. Using the spiral of silence as a model to understand faculty engagement with the public could lead to new understandings and appropriate support for science communication.

2 Research questions

The purpose of this study was to understand the influence of the spiral of silence on university faculty’s engagement in science communication. Through quantitative and qualitative research methods, this study sought to answer the following research questions:

RQ1:
What are University of Florida [UF] faculty's attitude toward science communication, perceptions of peers' attitudes toward science communication, and perceptions of future trends related to science communication?
RQ2:
How do UF faculty's personal characteristics, attitude toward science communication, perceptions of peers' attitudes toward science communication, and perceptions of future trends related to science communication predict effective science communication?
RQ3:
How do UF faculty perceive others to view science and science communication?

3 Methods

A mixed-methods design was used to fulfill the purpose of this study, a design that has been used to explore science communication in other contexts [Navarro and McKinnon, 2020; Ndlovu, Joubert and Boshoff, 2016; Neresini and Bucchi, 2011]. Specifically, an explanatory sequential design was utilized. In this design, quantitative data are collected in the first phase of the research and qualitative data are collected in the second phase [Creswell and Plano Clark, 2011]. This research design was appropriate because it allowed the researchers to explore significant findings from the quantitative phase in the qualitative phase [Creswell and Plano Clark, 2011].

3.1 Research context

The population of interest for this study was tenure-track faculty in the Institute of Food and Agricultural Sciences [IFAS] at UF. UF is a land-grant university in the U.S., and UF/IFAS has contributed $108.7 billion annually to the state's economy [UF/IFAS, 2013]. Land-grant faculty are expected to engage in research, teaching, and Extension (outreach) regardless of their specific appointments [Association of Public and Land-Grant Universities, 2012]. UF/IFAS is home to 51,000 students, 568 faculty with research appointments, 353 with teaching appointments, and 245 with Extension appointments [UF/IFAS, 2013]. Some science communication training opportunities were offered through UF/IFAS at the time of this research, but they were rarely aimed at university scientists [Agricultural Education and Communication, 2017].

At the time of this study, a few recent or ongoing events may have impacted the findings. UF/IFAS had faced a $6 million budget cut from the state prior to data collection, which threatened the jobs of 35 faculty members [Rusnak, 2017]. A hiring freeze and departmental budget cuts were implemented across the institute to save these faculty positions. Another important piece of context for this research was that one UF/IFAS faculty member had come under recent public scrutiny after engaging in science communication related to his research on genetically engineered food [Kroll, 2015]. Major newspaper outlets covered the story, and security was increased at his building after he received death threats in the mail [Conrow, 2017]. The lack of state support through funding and the public's vocal distrust toward a colleague engaged in science communication could have affected faculty's engagement in science communication. This study is part of a larger research project [Ruth et al., 2020; Ruth et al., 2019], but the objectives presented in this manuscript have not been previously reported.

3.2 Quantitative phase

An online survey was distributed via email to a census of tenure-track UF/IFAS faculty (N = 569). The online instrument consisted of 45 questions that asked respondents about their perceptions and experiences related to science communication. Respondents were provided the following definition at the beginning of the survey: "For the purpose of this study, science communication is when researchers engage in meaningful communication with the public about their science". After review by a panel of experts, the survey instrument was piloted at a peer institution to identify issues related to reliability or validity prior to distribution [Ary, Cheser Jacobs and Sorensen, 2010]. The pilot study identified no issues related to the spiral of silence constructs.

Attitude toward science communication was measured with a seven-item, five-point bipolar semantic differential scale. This scale was adapted from previous research [Noelle-Neumann, 1974]. Respondents were asked to complete the following statement: "I believe science communication is…" The items for the measurement included statements like "good/bad", "important/unimportant", and "essential/not essential". After the items were recoded so positive adjectives were assigned a five and negative adjectives were a one, the reliability was calculated. The scale was reliable (Cronbach's α = 0.92), and the construct was created by averaging all the items.

Perceptions of peers' attitudes toward science communication were also asked in the survey. This question used the exact same scale and items as attitude toward science communication. However, the prompt read, "The majority of faculty in my department/discipline think that science communication is…" The scale was recoded as described for attitude and was found to be reliable (Cronbach's α = 0.95). An average of the items was calculated to create the construct. The real limits used to interpret attitude toward science communication and perceptions of peers' attitude toward science communication were 1.00–1.49 = negative, 1.50–2.49 = slightly negative, 2.50–3.49 = neutral, 3.50–4.49 = slightly positive, 4.50–5.00 = positive [Sheskin, 2004].
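As a point of reference, the scale construction described above (recoding the bipolar items so the positive pole equals five, checking reliability with Cronbach's α, averaging the items into a construct score, and interpreting the mean against the real limits) can be sketched in a few lines of Python. The published analysis was run in SPSS; the responses below are hypothetical, and only "good", "important", and "essential" correspond to items reported in the text, with the remaining item names used as placeholders.

```python
import pandas as pd

def cronbachs_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def real_limits(score: float) -> str:
    """Interpret a 1-5 construct mean against the real limits used above."""
    if score < 1.50:
        return "negative"
    if score < 2.50:
        return "slightly negative"
    if score < 3.50:
        return "neutral"
    if score < 4.50:
        return "slightly positive"
    return "positive"

# Hypothetical responses to the seven bipolar items, already recoded so the
# positive adjective = 5 and the negative adjective = 1.
attitude_items = pd.DataFrame({
    "good":      [5, 4, 5, 3, 4],
    "important": [5, 5, 4, 3, 4],
    "essential": [4, 4, 5, 2, 5],
    "item_4":    [5, 4, 4, 3, 4],
    "item_5":    [5, 5, 5, 3, 3],
    "item_6":    [4, 4, 5, 3, 4],
    "item_7":    [5, 4, 4, 2, 4],
})

alpha = cronbachs_alpha(attitude_items)   # scale reliability
attitude = attitude_items.mean(axis=1)    # construct = mean of the items
print(round(alpha, 2), [real_limits(s) for s in attitude])
```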

Perceptions of future trends in attitudes toward science communication were also measured in the survey instrument. The seven-item, Likert-type scale was adapted from prior literature [Noelle-Neumann, 1974]. The labels were 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, and 5 = strongly agree. The construct stem stated: "In the future…" and examples of the items include "Faculty will be less accepting of science communication", "Faculty will recognize the value of science communication", and "Faculty will be more fearful of engaging in science communication".

The statements were recoded so positive perceptions of future attitudes were a five and negative perceptions of future attitudes were a one. The construct was created by adding the individual item scores and dividing by seven (Cronbach's α = 0.89). The real limits used to interpret perceptions of future attitudes toward science communication being positive were 1.00–1.49 = strongly disagree, 1.50–2.49 = disagree, 2.50–3.49 = neither agree nor disagree, 3.50–4.49 = agree, 4.50–5.00 = strongly agree.

Two variables measuring quantity of science communication and quality of science communication were transformed to create the variable for effective science communication. Quantity of science communication asked respondents how often they engaged in 15 different science communication activities in the past 12 months (i.e. in-person presentation, blog, newspaper interview). This measurement was treated as a count variable. Quality of science communication was adapted from AAAS' [2017] recommendations for effective communication. This construct was measured on a nine-item, five-point Likert-type scale. Items included statements like "I provide interactive opportunities for my audience", "I removed jargon from my presentation", and "I considered my audience's demographic characteristics prior to the engagement". Respondents could also select "not applicable"; these responses were omitted from analysis. The items for quality of science communication were averaged to create the construct (Cronbach's α = 0.77).

Quantity of science communication was multiplied by quality of science communication to create the construct for effective science communication, which could potentially range from 0 to 525. This specific construct has been reported in prior publications [Ruth et al., 2020; Ruth et al., 2019] and had a range from 0.0 to 181.56. The mean was 55.72 (SD = 38.16). Low, moderate, and high science communicator groups were also created from this variable to help guide purposive sampling in the qualitative phase of the study. Moderate science communicators were categorized as those within one standard deviation above or below the mean (17.56 to 93.88, n = 104). High science communicators fell more than one standard deviation above the mean (> 93.88, n = 32), and low science communicators fell more than one standard deviation below the mean (< 17.56, n = 26).
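The transformation from quantity and quality into the effective science communication score, and the ±1 SD grouping used for purposive sampling, can be illustrated with the sketch below. All values and column names are hypothetical placeholders; the published cut points (17.56 and 93.88) come from the survey data described above, not from this example.

```python
import numpy as np
import pandas as pd

# Hypothetical respondent-level scores. 'quantity' counts engagement in the 15
# science communication activities over the past 12 months; 'quality' is the
# 1-5 mean of the AAAS-based quality items.
df = pd.DataFrame({
    "quantity": [2, 10, 25, 1, 14, 30, 6],
    "quality":  [3.2, 4.1, 4.8, 2.5, 3.9, 4.6, 3.0],
})

# Effective science communication = quantity x quality, as described above.
df["effective_sc"] = df["quantity"] * df["quality"]

mean = df["effective_sc"].mean()
sd = df["effective_sc"].std(ddof=1)

# Low / moderate / high communicator groups: below, within, and above
# one standard deviation of the mean, used to guide purposive sampling.
df["group"] = pd.cut(
    df["effective_sc"],
    bins=[-np.inf, mean - sd, mean + sd, np.inf],
    labels=["low", "moderate", "high"],
)
print(df)
```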

The survey collection procedures followed Dillman's tailored design method [Dillman, Smyth and Christian, 2014], and the survey was active for 17 days in November 2017. After discarding incomplete responses, there were a total of 180 respondents in the study (n = 180, 31.6% response rate). The demographics for the total sample along with low, moderate, and high science communicator groups can be found in Tables 1 and 2.


Table 1 : Description of survey respondents (categorical variables).


Table 2 : Description of survey respondents (continuous variables).

Quantitative data were analyzed in SPSS. Descriptive statistics were used to answer objective 1, and a hierarchical regression was used to answer objective 2. Model one included personal characteristics of the faculty, including tenure status, discipline, research appointment, and gender. Categorical variables were dummy coded, and the category with the largest number of responses was treated as the control (tenure status: tenured, discipline: applied science, gender: male; Field [2013]). Only the research appointment was included as a predictor because the research/teaching/Extension appointments for each respondent were related, and including all three appointments caused multicollinearity issues for the model. Research was chosen for the model because it accounted for the largest average appointment for the respondents. In the final model, the skewness and kurtosis for each variable of interest fell within ±2, which met assumptions for normality. Additionally, variance inflation factor (VIF) and tolerance were examined to identify multicollinearity issues. The VIF values for all variables fell below 10 and the tolerance values were greater than 0.2, which indicated multicollinearity was not a concern [Bowerman and O'Connell, 1990; Field, 2013; Menard, 1995]. Therefore, assumptions for normality and multicollinearity were met for multiple linear regression [Field, 2013].
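A rough Python analogue of the hierarchical regression described above is sketched below; the original analysis was conducted in SPSS, so this is only an illustrative re-creation. The data file, column names, and dummy-column labels are assumptions, while the two blocks, the reference categories, the R² change test, and the VIF/tolerance diagnostics mirror the procedure reported here.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical data file and column names; listwise deletion for simplicity.
df = pd.read_csv("faculty_survey.csv").dropna()

# Dummy-code the categorical predictors and drop the largest category as the
# reference group (tenured, applied science, male). Column names assumed.
dummies = pd.get_dummies(df[["tenure_status", "discipline", "gender"]]).astype(float)
dummies = dummies.drop(columns=[
    "tenure_status_tenured", "discipline_applied_science", "gender_male",
])

y = df["effective_sc"]

# Block 1: personal characteristics (dummy-coded categoricals + % research).
X1 = sm.add_constant(pd.concat([dummies, df[["research_pct"]]], axis=1))
model1 = sm.OLS(y, X1).fit()

# Block 2: add the spiral-of-silence constructs.
sos = df[["attitude", "peer_attitude", "future_trends"]]
X2 = sm.add_constant(pd.concat([dummies, df[["research_pct"]], sos], axis=1))
model2 = sm.OLS(y, X2).fit()

# R-squared change for the added block and its F test.
r2_change = model2.rsquared - model1.rsquared
f_change, p_change, df_diff = model2.compare_f_test(model1)

# Collinearity diagnostics: VIF and tolerance for the block-2 predictors.
exog = X2.astype(float)
vif = {col: variance_inflation_factor(exog.values, i)
       for i, col in enumerate(exog.columns) if col != "const"}
tolerance = {col: 1 / v for col, v in vif.items()}

print(model2.summary())
print(round(r2_change, 2), round(f_change, 2), round(p_change, 3))
print(vif, tolerance)
```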

One potential threat to this study was non-response bias [Lindner, Murphy and Briers, 2001]. To understand if the respondents in the study were representative of the population, their demographic characteristics were compared to the known demographic characteristics of non-respondents, which included rank, discipline, administrative position, and presented gender [Koch and Blohm, 2016; Lewis, Hardy and Snaith, 2013; Lindner, Murphy and Briers, 2001]. Chi-square analyses found associations between respondents/non-respondents and administrative position (p = .05) as well as discipline (p = .01), which indicated there was a larger proportion of respondents with administrative positions or in the social science field in the sample compared to non-respondents. There were no associations found with rank (p = .32) or gender (p = .61) between respondents and non-respondents. Due to a potential bias from an overrepresentation of social scientists and administrators in the sample, a series of t-tests were conducted to compare early to late respondents' answers for variables of interest. Late respondents served as a proxy for non-respondents and were the later 50% of respondents who completed the survey [Lin and Schaeffer, 1995; Lindner, Murphy and Briers, 2001]. There were no significant differences for attitude (p = .96), perceptions of others' attitude (p = .14), perceptions of future trends (p = .90), and effective science communication (p = .92) between early and late respondents. Therefore, non-response error was assumed to be limited.
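The non-response checks described above (chi-square comparisons of respondents and non-respondents on known demographics, followed by early/late respondent t-tests on the variables of interest) could be reproduced along the following lines. This is a sketch under stated assumptions: file names, column labels, and the timestamp field are hypothetical, and the actual analysis was run in SPSS.

```python
import pandas as pd
from scipy import stats

# Hypothetical frames: 'frame' holds the full census with known demographics
# and a responded flag; 'responses' holds survey answers with a timestamp.
frame = pd.read_csv("census_frame.csv")
responses = pd.read_csv("responses.csv")

# Chi-square tests: respondents vs. non-respondents on known demographics.
for var in ["rank", "discipline", "admin_position", "gender"]:
    table = pd.crosstab(frame["responded"], frame[var])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(var, round(chi2, 2), round(p, 2))

# Early vs. late respondents (later 50% serve as a proxy for non-respondents).
responses = responses.sort_values("submitted_at")
cutoff = len(responses) // 2
responses["wave"] = ["early"] * cutoff + ["late"] * (len(responses) - cutoff)

for var in ["attitude", "peer_attitude", "future_trends", "effective_sc"]:
    early = responses.loc[responses["wave"] == "early", var].dropna()
    late = responses.loc[responses["wave"] == "late", var].dropna()
    t, p = stats.ttest_ind(early, late)
    print(var, round(t, 2), round(p, 2))
```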

3.3 Qualitative phase

Follow-up interviews were conducted with 13 (n = 13) of the survey respondents. The purpose of the interview was to further explore areas of interest and significance from the survey. The interview guide was reviewed by a panel of experts prior to implementation to increase validity [Ary, Cheser Jacobs and Sorensen, 2010]. Interviews were conducted in February and March of 2018 and lasted approximately one hour. Participants were reminded at the beginning of the interview of the definition of science communication provided in the survey. The interview questions asked participants about their perceptions of science communication, including the benefits and barriers to engagement, how others in the discipline view science communication, and what could help them engage in science communication in the future.

Participants were purposively selected for the interviews to ensure representation from high, moderate, and low science communicators. Thirty-one potential participants were invited to participate in follow-up interviews. Five high communicators (n = 5) and five moderate communicators (n = 5) agreed to participate in interviews. Multiple email requests were sent to 14 low communicators asking them to participate, but only three agreed (n = 3). Interviews were conducted until saturation was met and themes consistently emerged. The demographics for interview participants can be found in Table 3.


Table 3 : Description of interview participants.

All interviews were recorded for accuracy, and a third-party company transcribed the recordings with the aid of the researchers' notes. Interview transcripts were analyzed with MAXQDA 2018 [VERBI Software, 2017]. A priori coding was used to identify pre-determined codes related to the spiral of silence [Kuzel, 1999]. The primary researcher identified perceptions of peers and perceptions of the public as codes to explore based on quantitative findings and prior literature [Dunwoody, 1986; Kennedy and Funk, 2016; Lundy et al., 2006]. The quantitative and qualitative findings were integrated in the discussion and interpretation of the findings.

This study used peer debriefing, member checking, clarifying researcher bias, use of an audit trail, and rich and thick descriptions to help increase the validity and reliability of the study. Credibility was increased through the use of member checking. Participants were given the opportunity to read over their transcripts and researcher conclusions to clarify anything they said or what was interpreted [Lincoln and Guba, 1985 ; Stake, 1995 ]. To increase the confirmability of the study, or how well the research was supported by data, a peer debriefer was used to ensure the main researcher was not overstating findings or including bias in the results [Lincoln and Guba, 1985 ]. Audit trails were kept to increase the confirmability of the research as well. The researcher kept detailed memos and notes for how themes were defined and collapsed during the coding process [Thomas and Magilvy, 2011 ]. Thick and rich descriptions of the participants and the context of the study were used to increase the transferability of findings. Finally, a clarification of researcher bias has been included to account for confirmability.

The primary researcher for this study is a graduate from UF and has degrees in both basic and social sciences from the university. The primary researcher also had a strong interest in science communication and had interacted with some of the interview participants at professional meetings at the university prior to the interviews.

4 Results

4.1 RQ1: What are UF faculty’s attitude toward science communication, perceptions of peers’ attitudes toward science communication, and perceptions of future trends related to science communication?

Constructs from the spiral of silence were described in objective one. Respondents had positive attitudes toward science communication (M = 4.56, SD = 0.54, n = 178) and perceived others in their departments/disciplines had slightly positive attitudes toward science communication (M = 4.01, SD = 0.78, n = 168). Additionally, the respondents agreed that peers in their department/discipline would have positive attitudes toward science communication in the future (M = 3.81, SD = 0.57, n = 175).

4.2 RQ2: How do UF faculty’s personal characteristics, attitude toward science communication, perceptions of peers’ attitudes toward science communication, and perceptions of future trends related to science communication predict effective science communication?

A hierarchical regression was used for objective two (Table 4). The first model included the personal characteristics of the respondent, including tenure status, discipline area, percent research appointment, and gender. This model to predict effective science communication was statistically significant and accounted for approximately 6% of the variance in effective science communication (F(5, 145) = 2.98, p = .01, R² = .06). The only significant predictor in the model was percent research appointment: as research appointment increased one point, engagement in effective science communication decreased .43 points (b = −.43, p < .01).


Table 4 : Predictors of effective science communication.

The second model added the constructs from the spiral of silence, including attitude toward science communication, perceptions of peers' attitudes toward science communication, and future trends toward science communication. This model was statistically significant (F(8, 142) = 4.33, p < .01, R² = .15), and these variables accounted for 10% of the unique variance in effective science communication (ΔF(3, 142) = 6.06, p < .01, ΔR² = .10). Research appointment remained a significant negative predictor of effective science communication (b = −.37, p = .01), as were attitude toward science communication and perceptions of peers' attitudes toward science communication significant predictors. As attitude toward science communication increased one point, engagement in effective science communication increased 21.58 points (b = 21.58, p < .01). However, a one-point increase in perceptions of peers' attitude toward science communication led to a 15.11-point decrease (b = −15.11, p < .01) in engagement in effective science communication. This last finding countered Noelle-Neumann's [1974] proposed spiral of silence and warranted further investigation.

4.3 RQ3: How do UF faculty perceive others to view science and science communication?

Because the findings from the quantitative phase of the study did not fully support the spiral of silence, follow-up interviews were conducted with participants to understand how perceptions of their peers' attitudes toward science communication influenced their own public engagement. Additionally, perceptions of the general public's views of science and science communication were coded to understand if groups aside from academic peers influenced faculty's engagement in science communication within the framework of the spiral of silence.

Perceptions of peers. The theme for perceptions of peers included discussion of how the participants believed their peers, departments, and administration viewed and valued science communication. When asked how the university valued science communication, Participant 88 (moderate communicator) said, "Well, I think [university administration has] definitely pushed it. They definitely pushed the idea of science communication, particularly in recent years". Participant 93 (high communicator) had a similar impression and explained, "In UF/IFAS, our senior vice president talks about it all the time. I mean, it is a top talking point for him. I think they understand it as an important issue".

While science communication appeared valued by administration, participants also expressed that science communication was "undervalued by the faculty" (Participant 9, moderate communicator). The participant went on to explain, "Science communication is highly undervalued by the faculty. It is moderately undervalued by UF/IFAS administration. I think, in the profession, it is moderately undervalued". Participant 155 (low communicator) had similar thoughts about how science communication was valued: "I never really received mentoring around it. My supervisor certainly had poor experiences engaging with the public on things. They never did it. We never really saw that as an aspect of the career".

Participants were also asked about how their specific department and faculty peers valued science communication. Overall, participants described a lack of interest related to science communication in their departments. “I think in [my peer’s] mind, they do not communicate with the general public”, explained Participant 158 (high communicator). Participant 29 (moderate communicator) thought the lack of engagement in science communication was due to a different reason:

[Some] departments have cultures that are so bad that, even if you would want to do [science communication], you will not do it because they will put the emphasis on how we get misrepresented or oversimplified and all that stuff. They say stuff like, “I would never ever do that because you sound so stupid, and this is obviously not correct”.

Participant 17 (high communicator) was able to provide a specific example of how her "department could care less about communicating to the general public". She explained, "I just remember, for instance, I have traveled to Africa and done work in Africa and one of my colleagues said, 'why would you go there?'"

These findings indicated participants perceived UF/IFAS administration placed value on science communication, but they believed it was not necessarily valued by their peers or departments.

Perceptions of the public. Perceptions of the public was defined as how the faculty perceived the public to view science and science communication. Participants believed the public's ability to understand science was often underestimated. "I think it is very doable to explain things to people. We don't give them enough credit. You explain it, they'll understand it", said Participant 17 (high communicator). One low communicator in the participant group exemplified Participant 17's concern that faculty did not give the public enough credit and said, "I do not think [the public] knows what to do with the information [about research], why the information is relevant to them. I would even question if the information is relevant to them".

While there were some diverging opinions about the public's ability to understand science communication, most participants expressed concern about the amount of misinformation and distrust shared amongst the public. Participant 154 (moderate communicator) thought, "Increasingly, the public is becoming more and more skeptical about what we do and our worth". Participant 37 (low communicator) believed the distrust stemmed from "people thinking scientists do not understand or cannot even predict" science. Participant 5 (high communicator) expanded upon this idea and explained,

I think especially in the funding climate that we are in right now, this unfortunate disconnect, whole fake news horrible mess that people are trusting scientists less and less, and their not trusting that, people are not, whether it is evolution, vaccines, genetically modified organisms, climate change — I mean I have no idea how we lost the trust of the people, but we have to some extent.

Participants' areas of research also influenced their perceptions of how the public would receive their science communication. When asked how people reacted to her communication about disaster preparedness, Participant 158 (high communicator) shared, "I mean, I do not work on a controversial topic. I work on something that everybody loves — everybody loves a good disaster, so I would say it is probably pretty much in line with what they are thinking". However, Participant 88 (high communicator) worked with a depleting natural resource in [State] and shared,

I would say I choose my words carefully, depending on what the topic was, and the meeting was. Some of those groups, you just have to be careful. They can take what you say and then twist it around to mean something else completely.

The participants had conflicting thoughts about whether their research was relevant to the public, but they did agree the public was increasingly skeptical of research. Additionally, the context of the communication appeared to influence their expectations for success when communicating with the public.

5 Discussion

The purpose of this study was to understand the role of the spiral of silence in university faculty's engagement in effective science communication. Interestingly, the quantitative findings somewhat diverged from how the spiral of silence might be expected to influence faculty's engagement in science communication. Respondents possessed more favorable attitudes toward science communication compared to how they perceived their peers to view science communication; however, they agreed that future trends in attitudes toward science communication would be positive. The regression model indicated an increased research appointment led to lower engagement in science communication. While this is not necessarily surprising, it is unfortunate to learn the faculty most engaged in the research process are the least likely to communicate about it to the public. Additionally, more favorable attitudes toward science communication increased engagement, while positive perceptions of peers' attitudes were associated with decreased engagement in effective science communication. Besley et al. [2018] had concluded peer norms did not influence engagement the way past researchers had proposed, and this research further supported that conclusion. The regression model accounted for a medium effect size [Cohen, 1988], yet the predictors did not operate as expected within the spiral of silence framework [Noelle-Neumann, 1974]. Therefore, there was a need to further investigate how social pressures may influence faculty's engagement in science communication.

Aligned with Besley et al.'s [2018] recommendations, the qualitative phase further investigated how participants' perceptions of others, both inside and outside academia, influenced their engagement in science communication. At a departmental level, peers' perceptions of science communication appeared to diverge from the quantitative findings. Participants discussed how peers would question their work or did not see the value in science communication, which supported past research [Dunwoody, 1986; Lundy et al., 2006]. Additionally, these findings indicate that faculty peers may still subscribe to the notion of normal science communication, where the responsibility does not fall to the scientist to engage the public [Brüggemann, Lörcher and Walter, 2020]. However, there was a clear emphasis on post-normal science communication by UF/IFAS administration [Brüggemann, Lörcher and Walter, 2020], which may lead to a change in thinking by all faculty in the future if administration continues to model this idea.

The regression model in objective two indicated an inverse relationship between engagement in science communication and perceptions of peers’ attitudes toward science communication. Some interview participants reported engaging in science communication despite their peers often questioning the value of outreach, which could help explain this quantitative finding. However, there is a need to research this area in greater depth to understand how faculty’s peers influence their views of post-normal science communication, particularly since it directly conflicts with the spiral of silence [Noelle-Neumann, 1974 ].

The qualitative phase also identified how perceptions of the public's views toward science influenced science communication. The faculty saw the need to communicate with the public, and some held favorable attitudes toward the public, similar to past research [Besley, 2015]. This interest in engaging with the public was mostly due to the public's skepticism and the spread of fake news, which has been identified as an issue related to science literacy [Funk and Rainie, 2015]. These faculty may also have been thinking about the budget cuts they had faced over the past year [Rusnak, 2017], which increased their awareness of the need for science communication to demonstrate their worth. However, similar to Llorente et al.'s [2019] research, some low communicators believed the public did not have the knowledge to understand their research and did not believe the information would be relevant to them.

Faculty who worked with controversial issues also described a hesitation when communicating those topics to different groups out of fear of being misrepresented. This finding is unsurprising given the public criticism their peer had received after engaging in science communication around a contentious topic [Kroll, 2015 ]. When faculty perceived their topic as non-controversial though, they appeared more comfortable discussing it with other groups.

During the interviews, faculty's perceptions of the public's attitude toward science mostly operated as expected within the context of the spiral of silence. If a topic was particularly controversial, faculty expressed a low willingness to express their thoughts, which aligned with the spiral of silence [Noelle-Neumann, 1974]. Interestingly, faculty's perceptions that the public was skeptical of science in general appeared to make them want to communicate with those people even more. Typically, this group would not be expected to engage out of fear of isolation [Noelle-Neumann, 1974], but the effect was just the opposite. This finding may be due to external motivations to prove their impacts to secure funding or intrinsic motivations to educate the public.

These qualitative findings indicated a need to revise the framework used for this study to include perceptions of the public rather than only perceptions of peers. Additional research can help science communication practitioners better understand how faculty's perceptions of the public influence their engagement in science communication, which could lead to a comprehensive model to help faculty overcome some of the challenges and stigmas associated with public engagement. Future research should also explore additional internal factors that may influence engagement in science communication. Exploring faculty members' incentives to engage in science communication training or how public engagement is counted toward promotion and tenure would also help to strengthen this research. Practitioners would also have more control over these internal influences that could increase engagement in science communication.

6 Conclusions

This research cannot be generalized to universities outside the scope of this study, but it does provide a preliminary understanding of how the spiral of silence could be applied to understand university faculty's engagement in science communication. It will be important for science communication researchers and professionals to understand how social pressures, whether from peers, administration, or the public, influence faculty engagement in science communication in order to provide the proper training and support for these groups. Future research should include quantitative measures of perceptions of the public's attitude toward science to better capture how faculty decide to engage the public. Additionally, future research should be conducted to understand the apparent inverse relationship between science communication engagement and perceptions of peers' attitudes toward science communication. Replicating this study at other research institutions could provide a generalizable understanding of the role the spiral of silence plays in faculty members' engagement in science communication.

References

Agricultural Education and Communication (2017). AEC impact: AEC4036/5037 advanced agricultural communication production . URL: http://aec.ifas.ufl.edu .

American Association for the Advancement of Science (AAAS) (1990). Science for all Americans . URL: http://www.project2061.org/publications/sfaa/online/sfaatoc.htm .

— (2017). AAAS communication toolkit . URL: https://www.aaas.org/page/communicating-engage .

Ary, D., Cheser Jacobs, L. and Sorensen, C. K. (2010). Introduction to research in education. 8th ed. Belmont, CA, U.S.A.: Wadsworth Cengage Learning.

Association of Public and Land-Grant Universities (2012). The land-grant tradition. Washington, D.C., U.S.A.: APLU. URL: http://www.aplu.org/library/the-land-grant-tradition/file .

Besley, J. C. (2015). ‘What do scientists think about the public and does it matter to their online engagement?’ Science and Public Policy 42 (2), pp. 201–214. https://doi.org/10.1093/scipol/scu042 .

Besley, J. C., Dudo, A., Yuan, S. and Lawrence, F. (2018). ‘Understanding scientists’ willingness to engage’. Science Communication 40 (5), pp. 559–590. https://doi.org/10.1177/1075547018786561 .

Bowerman, B. L. and O’Connell, R. T. (1990). Linear statistical models: an applied approach. Pacific Grove, CA, U.S.A.: Duxbury Press.

Brüggemann, M., Lörcher, I. and Walter, S. (2020). ‘Post-normal science communication: exploring the blurring boundaries of science and journalism’. JCOM 19 (03), A02. https://doi.org/10.22323/2.19030202 .

Bucchi, M. (1996). ‘When scientists turn to the public: alternative routes in science communication’. Public Understanding of Science 5 (4), pp. 375–394. https://doi.org/10.1088/0963-6625/5/4/005 .

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ, U.S.A.: Lawrence Erlbaum Associates.

Conrow, J. (6th September 2017). ‘Scientist Kevin Folta files GMO libel lawsuit against NYT’. Alliance for Science . URL: https://allianceforscience.cornell.edu/blog/2017/09/scientist-kevin-folta-files-gmo-libel-lawsuit-against-nyt/ .

Creswell, J. W. and Plano Clark, V. L. (2011). Designing and conducting mixed-methods research. 2nd ed. Thousand Oaks, CA, U.S.A.: SAGE Publications.

Dillman, D. A., Smyth, J. D. and Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. 4th ed. Hoboken, NJ, U.S.A.: John Wiley & Sons.

Dudo, A. (2013). ‘Toward a model of scientists’ public communication activity: the case of biomedical researchers’. Science Communication 35 (4), pp. 476–501. https://doi.org/10.1177/1075547012460845 .

Dunwoody, S. (1986). ‘The scientist as a source’. In: Scientists and journalists: reporting science as news. Ed. by S. M. Friedman, S. Dunwoody and C. L. Rogers. New York, NY, U.S.A.: Free Press, pp. 3–16.

Field, A. (2013). Discovering statistics using IBM SPSS statistics. 4th ed. London, U.K.: SAGE Publications.

Fingerhut, H. (20th July 2017). ‘Republicans skeptical of colleges’ impact on U.S., but most see benefits for workforce preparation’. Pew Research Center . URL: http://www.pewresearch.org/fact-tank/2017/07/20/republicans-skeptical-of-colleges-impact-on-u-s-but-most-see-benefits-for-workforce-preparation/ .

Funk, C. and Kehaulani Goo, S. (10th September 2015). ‘A look at what the public knows and does not know about science’. Pew Research Center . URL: http://www.pewinternet.org/2015/09/10/what-the-public-knows-and-does-not-know-about-science/ .

Funk, C. and Rainie, L. (29th January 2015). ‘Public and scientists’ views on science and society’. Pew Research Center . URL: http://www.pewinternet.org/2015/01/29/public-and-scientists-views-on-science-and-society/ .

Kennedy, B. and Funk, C. (5th December 2016). ‘Many Americans skeptical about scientific research on climate and GM foods’. Pew Research Center . URL: http://www.pewresearch.org/fact-tank/2016/12/05/many-americans-are-skeptical-about-scientific-research-on-climate-and-gm-foods/ .

Kim, S.-H., Kim, H. and Oh, S.-H. (2014). ‘Talking about genetically modified (GM) foods in South Korea: the role of the Internet in the spiral of silence process’. Mass Communication and Society 17 (5), pp. 713–732. https://doi.org/10.1080/15205436.2013.847460 .

Koch, A. and Blohm, M. (2016). Nonresponse bias. GESIS Survey Guidelines. Mannheim, Germany: GESIS — Leibniz Institute for the Social Sciences. https://doi.org/10.15465/gesis-sg_en_004 .

Kroll, D. (10th September 2015). ‘What the New York Times missed on Kevin Folta and Monsanto’s cultivation of academic scientists’. Forbes . URL: https://www.forbes.com/sites/davidkroll/2015/09/10/what-the-new-york-times-missed-on-kevin-folta-and-monsantos-cultivation-of-academic-scientists/ .

Kuzel, A. J. (1999). ‘Sampling in qualitative inquiry’. In: Doing qualitative research. Ed. by B. F. Crabtree and W. L. Miller. 2nd ed. Thousand Oaks, U.S.A.: SAGE Publications, pp. 33–46. URL: https://uk.sagepub.com/en-gb/eur/doing-qualitative-research/book9279 .

Lewis, E. F., Hardy, M. and Snaith, B. (2013). ‘Estimating the effect of nonresponse bias in a survey of hospital organizations’. Evaluation & the Health Professions 36 (3), pp. 330–351. https://doi.org/10.1177/0163278713496565 .

Lin, I.-F. and Schaeffer, N. C. (1995). ‘Using survey participants to estimate the impact of nonparticipation’. Public Opinion Quarterly 59 (2), pp. 236–258. https://doi.org/10.1086/269471 .

Lincoln, Y. S. and Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA, U.S.A.: SAGE Publications.

Lindner, J. R., Murphy, T. H. and Briers, G. E. (2001). ‘Handling nonresponse in social science research’. Journal of Agricultural Education 42 (4), pp. 43–53. https://doi.org/10.5032/jae.2001.04043 .

Llorente, C., Revuelta, G., Carrió, M. and Porta, M. (2019). ‘Scientists’ opinions and attitudes towards citizens’ understanding of science and their role in public engagement activities’. PLoS ONE 14 (11), e0224262. https://doi.org/10.1371/journal.pone.0224262 .

Lundy, L., Ruth, A., Telg, R. and Irani, T. (2006). ‘It takes two: public understanding of agricultural science and agricultural scientists’ understanding of the public’. Journal of Applied Communications 90 (1), pp. 55–68. https://doi.org/10.4148/1051-0834.1290 .

Menard, S. (1995). Applied logistic regression analysis. Thousand Oaks, CA, U.S.A.: SAGE Publications.

Mooney, C. (2012). The Republican brain: the science of why they deny science — and reality. Hoboken, NJ, U.S.A.: John Wiley & Sons.

Moran, H., Karlin, L., Lauchlan, E., Rappaport, S. J., Bleasdale, B., Wild, L. and Dorr, J. (2020). ‘Understanding research culture: what researchers think about the culture they work in’. Wellcome Open Research 5, 201. https://doi.org/10.12688/wellcomeopenres.15832.1 .

National Academies of Sciences, Engineering and Medicine (NAS) (2016). Science literacy: concepts, contexts and consequences. Washington, DC, U.S.A.: The National Academies Press. https://doi.org/10.17226/23595 .

National Science Board (NSB) (2016). ‘Science and technology: public attitudes and understanding’. In: Science and engineering indicators. Washington, D.C., U.S.A.: National Science Board, pp. 7.1–7.101.

Navarro, K. and McKinnon, M. (2020). ‘Challenges of communicating science: perspectives from the Philippines’. JCOM 19 (01), A03. https://doi.org/10.22323/2.19010203 .

Ndlovu, H., Joubert, M. and Boshoff, N. (2016). ‘Public science communication in Africa: views and practices of academics at the National University of Science and Technology in Zimbabwe’. JCOM 15 (06), A05. https://doi.org/10.22323/2.15060205 .

Nelson, G. D. (1999). ‘Science literacy for all in the 21st century’. Educational Leadership 57 (2), pp. 14–17. URL: http://www.project2061.org/publications/articles/articles/ascd.htm?txtRef=https%3A%2F%2Fwww%2Egoogle%2Ecom%2F&txtURIOld=%2Fresearch%2Farticles%2Fascd%2Ehtm .

Neresini, F. and Bucchi, M. (2011). ‘Which indicators for the new public engagement activities? An exploratory study of European research institutions’. Public Understanding of Science 20 (1), pp. 64–79. https://doi.org/10.1177/0963662510388363 .

Noelle-Neumann, E. (1974). ‘The spiral of silence: a theory of public opinion’. Journal of Communication 24 (2), pp. 43–51. https://doi.org/10.1111/j.1460-2466.1974.tb00367.x .

— (1993). The spiral of silence: public opinion — our social skin. 2nd ed. Chicago, IL, U.S.A.: University of Chicago Press.

Pearson, G. (2001). ‘The participation of scientists in public understanding of science activities: the policy and practice of the U.K. Research Councils’. Public Understanding of Science 10 (1), pp. 121–137. https://doi.org/10.3109/a036860 .

Poliakoff, E. and Webb, T. L. (2007). ‘What factors predict scientists’ intentions to participate in public engagement of science activities?’ Science Communication 29 (2), pp. 242–263. https://doi.org/10.1177/1075547007308009 .

Porten-Cheé, P. and Eilders, C. (2015). ‘Spiral of silence online: how online communication affects opinion climate perception and opinion expression regarding the climate change debate’. Studies in Communication Sciences 15 (1), pp. 143–150. https://doi.org/10.1016/j.scoms.2015.03.002 .

Priest, S. H. (2008). ‘North American audiences for news of emerging technologies: Canadian and US responses to bio- and nanotechnologies’. Journal of Risk Research 11 (7), pp. 877–889. https://doi.org/10.1080/13669870802056904 .

Prior, M. (2007). Post-broadcast democracy: how media choice increases inequality in political involvement and polarizes elections. Cambridge, U.K.: Cambridge University Press. https://doi.org/10.1017/CBO9781139878425 .

Rusnak, P. (9th June 2017). ‘State budget cuts threaten to take big bite from UF/IFAS programs’. Growing Produce . URL: https://www.growingproduce.com/vegetables/state-budget-cuts-threaten-to-take-big-bite-from-ufifas-programs/ .

Ruth, T. K., Rumble, J. N., Galindo-Gonzalez, S., Lundy, L. K., Carter, H. S. and Folta, K. M. (2019). ‘Can anyone hear us? An exploration of echo chambers at a land-grant university’. Journal of Applied Communications 103 (2), 6. https://doi.org/10.4148/1051-0834.2242 .

Ruth, T. K., Rumble, J. N., Lundy, L. K., Galindo, S., Carter, H. S. and Folta, K. M. (2020). ‘Motivational influences on land-grant faculty engagement in science communication’. Journal of Agricultural Education 61 (2), pp. 77–92. https://doi.org/10.5032/jae.2020.02077 .

Scheufele, D. A., Hardy, B. W., Brossard, D., Waismel-Manor, I. S. and Nisbet, E. (2006). ‘Examining the links between structural heterogeneity, heterogeneity of discussion networks, and democratic citizenship’. Journal of Communication 56 (4), pp. 728–753. https://doi.org/10.1111/j.1460-2466.2006.00317.x .

Sheskin, D. J. (2004). Handbook of parametric and nonparametric statistical procedures. 3rd ed. New York, NY, U.S.A.: CRC Press. https://doi.org/10.1201/9781420036268 .

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA, U.S.A.: SAGE Publications.

Thomas, E. and Magilvy, J. K. (2011). ‘Qualitative rigor or research validity in qualitative research’. Journal for Specialists in Pediatric Nursing 16 (2), pp. 151–155. https://doi.org/10.1111/j.1744-6155.2011.00283.x .

Tsfati, Y., Jomini Stroud, N. and Chotiner, A. (2014). ‘Exposure to ideological news and perceived opinion climate: testing the media effects component of spiral-of-silence in a fragmented media landscape’. The International Journal of Press/Politics 19 (1), pp. 3–23. https://doi.org/10.1177/1940161213508206 .

UF/IFAS (2013). Briefing book. University of Florida. URL: http://ifas.ufl.edu .

VERBI Software (2017). MAXQDA 2018 online manual . URL: https://www.maxqda.com/help-max18/welcome .

Authors

Dr. Taylor K. Ruth is an Assistant Professor of the Science of Science Communication in the Department of Agricultural Leadership, Education and Communication at the University of Nebraska-Lincoln. E-mail: taylor.ruth@unl.edu .

Dr. Joy N. Rumble is an Assistant Professor of Agricultural Communication in the Department of Agricultural Communication, Education, and Leadership at the Ohio State University. E-mail: rumble.6@osu.edu .

Dr. Lisa K. Lundy is a Professor of Agricultural Communication in the Department of Agricultural Education and Communication at the University of Florida. E-mail: lisalundy@ufl.edu .

Dr. Sebastian Galindo is a Research Associate Professor in the Department of Agricultural Education and Communication at the University of Florida. E-mail: sgalindo@ufl.edu .

Dr. Hannah S. Carter is the Dean of Cooperative Extension at the University of Maine. E-mail: hcarter@maine.edu .

Dr. Kevin M. Folta is a Professor in the Department of Horticultural Sciences at the University of Florida. E-mail: kfolta@ufl.edu .