1 Introduction

Trust in science is a critical issue in the current climate of distrust surrounding the COVID-19 vaccine and the “post-truth” discourse. Many people have become increasingly skeptical about scientific information and its sources, and this skepticism, part of a broader trend, has led to mistrust of science [Lupia et al., 2024]. Consequently, misinformation and doubts about scientific integrity have gained traction, leading to significant societal and health impacts. Trust in science is one of the most important concepts in the contemporary debate about the relationship between science and society [Weingart & Guenther, 2016]. It is essential for ensuring public engagement with scientific initiatives, fostering informed decision-making, and maintaining social cohesion. The erosion of this trust can hinder public acceptance of scientific recommendations, such as vaccination campaigns [Hall Jamieson et al., 2021], and impede scientific research and its application in policy making.

However, this is not a new phenomenon. The field of science communication has been discussing it for decades. In 2000, the Select Committee on Science and Technology of the House of Lords (the second chamber of the UK Parliament) published a report titled “Science and Society, 3rd report”. The report highlighted “an apparent crisis of trust” in science and emphasized the need to rebuild trust by promoting dialogue between science and society [House of Lords, 2000]. This aligns with the discourse on Public Engagement with Science (PES), where trust becomes a significant factor [Bauer et al., 2007].

According to Bauer and Falade [2021], Public Understanding of Science (PUS) underwent a significant transformation in the mid-1990s. The Bodmer Report, published by the Royal Society in 1985, played a crucial role in promoting PUS at that time. It suggested that increasing public knowledge would lead to more positive attitudes toward science. The report noted that most existing surveys had focused on the public’s attitudes toward science and technology, neglecting to assess their understanding of science, and it recommended promoting research to measure PUS and evaluate the effects of improved understanding [The Royal Society, 1985]. The scientific knowledge questionnaire (discussed later), initially developed as part of the ‘Scientific Literacy’ paradigm [Bauer et al., 2007; Miller, 1998], gained further relevance in the context of PUS research in the mid-1990s.

In addition, the assumed relationship between scientific literacy and trust, referred to as the ‘deficit model’, faced immense criticism in the 1990s [Irwin & Wynne, 1996]. The deficit model refers to the notion that a lack of scientific knowledge leads to negative attitudes towards science and, conversely, that increased knowledge leads to positive attitudes. However, empirical evidence supporting this model is limited and controversial [Allum et al., 2008]. Critics argued that public mistrust in science was not solely due to a lack of understanding but was also related to broader social and cultural factors. Trust as a concept is generally seen as relational; on the science side, it is often operationalized as trustworthiness, measured along the dimensions of competence, integrity, and benevolence [Reif & Guenther, 2021]. The House of Lords report was a landmark document signaling a new relationship between science and society that replaced the previous PUS model [House of Lords, 2000]. It emphasized the need for an open and transparent deliberative process to rebuild trust.

Even if such a claim were entirely justified, it is important to examine what constitutes public trust/mistrust in science. Understanding the nuances and factors that influence public trust is crucial for addressing the challenges faced in science communication. This study aims to examine the structure of trust in science from two new comparative perspectives: first, by comparing how people in Japan and other countries understand science and their attitudes towards it, and second, by contrasting trust in science with trust in the humanities.

First, we compare Japan with two other countries, the UK and the US, to observe the characteristics of Japanese people’s understanding of and attitudes toward science. As researchers based in Japan, we focus on this country because we are directly engaged in addressing issues such as public mistrust of scientific information and the influence of cultural factors on trust. While we draw on the research findings from Western countries referenced above, it is essential to consider cultural differences that remain underexplored. The first research question is: what are the unique characteristics of public trust in science in Japan compared to the UK and the US? In exploring the distinctive features of Japan, it is imperative to include comparable countries. Although a broader international comparison would be ideal, budgetary constraints limited the scope. Consequently, the UK and the US were chosen because of their frequent use as comparative benchmarks in Japanese studies and the substantial differences previously documented between these countries and Japan.

Second, we contrast trust in science and its related factors with trust in the humanities, a field frequently discussed as a fusion partner of science. Comparing the factors that influence trust in different academic domains provides valuable insights into the specific elements that contribute to trust. The second research question is: how do the factors that influence trust in science differ from or overlap with those that influence trust in the humanities, in Japan and in comparison to the UK and the US, so as to understand the unique characteristics of Japan? In Japan, the educational system commonly divides academic tracks into those focused on natural sciences and engineering (similar to STEM fields) and those focused mainly on the humanities, which also include some social sciences. This division is typically made early in a student’s educational path, often before university entrance. As this dichotomy is prevalent in Japan, it allows us to effectively compare trust in science with a closely related yet distinct domain. This approach enables us to identify whether the factors that build trust in science are unique or share commonalities with those that build trust in the humanities. Understanding these nuances is crucial for developing targeted strategies to enhance public trust across different fields. Examining the relationship between PUS indicators, such as scientific literacy and attitudes towards science, and trust indicators, such as trustworthiness, competence, integrity, and benevolence, will provide a deeper understanding of how these concepts are interrelated and how they contribute to public trust in science.

To achieve these two goals, this study collected and described the opinions of volunteers reached through a web-based survey using crowdsourcing services.

2 Literature review

2.1 Cross-national comparison

First, we provide an outline of the results of previous cross-national comparative studies. In 2020, the Pew Research Center reported the results of a cross-national comparative survey of public trust in scientists [Funk et al., 2020]. Respondents were asked to rate how much they “trust scientists to do what is right (for surveyed public)” on a 4-point scale (A lot, Some, Not too much, Not at all), with an additional “Don’t know” option.1 The combined percentage of “A lot” and “Some” responses is almost the same across the three countries, and most participants chose one of these two options. However, while the two options were chosen at roughly equal rates in the UK and the US, in Japan “A lot” accounted for less than half of the “Some” responses. Thus, it is difficult to say that scientists have acquired a high level of trust in Japan.

However, it is important to note that this survey specifically asked about trust in scientists, not science itself. The close relationship between the public’s perception of scientists and science suggests that interpreting these results as indicative of general trust in science is not unreasonable. For instance, Wolff et al. [2024] found that the Trust in Science and Scientists Scale (TISS), developed to measure trust, did not differentiate between science and scientists, revealing a unified two-factor structure for direct and reverse items. This empirical evidence supports the notion that trust in scientists can validly reflect trust in science because the public often does not distinguish between them. In addition, the survey analyzed several individual differences related to public trust in scientists.

When examining the influence of educational background and political ideology, respondents were first divided into two groups by educational background (low and high), and the percentages answering “A lot” to the previous question were compared. The results revealed that, in Japan, the percentage was 23% in both groups; in the UK, the percentages were 38% and 53%, respectively; and in the US, they were 30% and 43%, respectively. While people with higher educational backgrounds were found to be more trusting of science in the UK and the US, no such difference based on educational background was observed among people in Japan.

Moreover, differences in public trust in scientists were examined based on data from the UK and the US in relation to the three main positions on the political spectrum (left, center, and right). The percentages of respondents with right- and left-leaning views who chose “A lot” were 35% and 62% in the UK and 20% and 62% in the US, respectively, indicating a strong correlation between liberal political ideology and public trust in science (the percentages of centrists who answered “A lot” were not reported). Furthermore, Gauchat [2012] analyzed data from the General Social Surveys conducted in the US from 1974 to 2010 and found that while the degree of public trust in science among people with liberal-leaning views changed little over time, people with conservative-leaning views grew more distrustful of science after 1990. Thus, we see a clear link between political ideology and public trust in science in the UK and the US in recent years. The link between conservative political ideology and distrust of science has also been highlighted in a study conducted in Germany [Mede et al., 2021]. However, since the influence of political ideology on attitudes toward science in Japan was not evaluated in the Pew Research Center survey [Funk et al., 2020] and no other similar studies were found, the relationship between the two remains unclear.

However, as already mentioned, surveys have also been conducted to assess the public’s understanding of science and their attitudes toward it under the traditional PUS model. Durant et al. [1989] developed a questionnaire to assess “public understanding of science and science-based technologies” that comprised 23 quiz items on basic scientific knowledge.

In Japan, survey results are regularly reported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). In 2011, the National Institute of Science and Technology Policy (NISTEP) at MEXT reported the results of a web-based survey conducted in 2009 involving the same three countries examined in the present study (n = 2191 in Japan and n = 1500 in both the UK and the US). NISTEP is a national institute deeply engaged in the Japanese government’s science and technology policy planning process. Since 1976, it has conducted surveys on attitudes toward science and technology approximately every five years in Japan. In collaboration with researchers in Europe and the US, NISTEP launched a cross-national comparative study of PUS in 1990 and has conducted surveys that included common questions to assess scientific literacy since 1991. In March 2009, it conducted comparative surveys in Japan, the UK and the US, with respondents drawn from panels registered with Internet research companies. This survey compared the proportion of correct answers to ten questions on the “understanding of the basic notions of science and technology” based on the 1991 survey, with only minor changes. It revealed that the mean correct answer rate for Japanese respondents (62.1%) was almost the same as that for the UK (66.4%) and the US (64.1%) respondents [Kuriyama et al., 2011]. These rates had improved compared with a previous face-to-face survey conducted in 2001 [Okamoto et al., 2001], in which the mean correct answer rates for Japanese, UK, and US respondents were 51%, 63%, and 62%, respectively. However, the report also highlighted differences in the demographics of the respondents due to the different survey mediums used and added that we cannot simply compare the results of the two surveys and assume that the difference between Japan and the other two countries has been reduced [Kuriyama et al., 2011].

2.2 Difference between science and humanities

Academic fields, including the sciences and humanities, exhibit several differences that influence educational approaches and societal contributions. In general, science, particularly natural science, investigates natural phenomena through quantitative methods, such as observational and experimental approaches closely intertwined with technological advancements. Conversely, the humanities, which often overlap with the social sciences, focus on the complexities of human behavior and society, mostly emphasizing qualitative methodologies, critical description, and reflection [American Academy of Arts & Sciences, 2013]. While science aims to contribute directly to economic progress and improve living standards, the humanities primarily engage in discussions regarding the desirability of economic progress and the meaning of human well-being. The humanities contribute more indirectly, by shaping cultural understanding and ethical perspectives, which are crucial for well-rounded societal development.

The distinction between the humanities and sciences is crucial because each contributes uniquely to society. Science, particularly natural science, often yields direct applications through technology that visibly impact economic development. However, the humanities provide deep insights into human nature, ethics, and culture, which are essential for understanding societal values and making informed policy decisions. This dichotomy underscores the need for a balanced approach to education and policy-making that values both empirical evidence and ethical considerations.

Considering these distinctions, we might expect public trust to vary significantly between these fields. Natural sciences could be perceived as more ‘useful’ due to their direct contributions to technology and the economy. However, the humanities may be appreciated more for their role in developing critical thinking and ethical reasoning. Furthermore, integrative educational approaches like STEAM, which combine the analytical strengths of STEM with reflective insights into the arts, are likely to lead to more comprehensive problem-solving skills and a greater understanding of complex societal issues [Khine & Areepattamannil, 2019; National Academies of Sciences, Engineering, and Medicine, 2018].

Moreover, while trust in science has been measured in some cross-national social surveys, such as the World Value Survey [Inglehart et al., 2022], less emphasis has been placed on systematically measuring public attitudes towards the humanities and their research. Given the growing societal demand for a synthesis between empirical and interpretive knowledge, it is crucial to explicitly include humanities in such surveys to ensure that a complete picture of public trust in academia is obtained.

3 Methods

A web-based survey was conducted in Japan, the UK and the US between February and March 2021. Table S1 in the Supplementary material shows an overview of the study and respondents’ information.

We commissioned Cross Marketing Inc. to recruit participants for the survey in Japan from their registered pool of respondents, aiming to collect 1,200 responses divided into 12 groups combining gender and age.2 Data from 1,280 of the initial 1,867 respondents were used in the analysis after excluding those who did not answer the attention check questions correctly. The attention check questions specified certain response options, as described at the end of this section. The sampling method in Japan differs from that in the UK and the US and is described in detail later. In Japan, the age distribution of the population is heavily skewed toward older adults; however, web-based survey companies have relatively few older registered respondents, and collecting a sufficient sample of them may require additional costs. Therefore, we allocated an equal sample size to each age group. Consequently, the Japanese data were skewed toward younger respondents compared to the census data.

The UK and the US survey participants were recruited by Prolific, Inc. Using representative sampling services of the same company, we launched the survey to collect 1,200 responses from a representative sample stratified by gender, age, and ethnicity.3 Data from 1,129 UK and 1,127 US respondents were used in the analysis after excluding data from respondents who did not answer the attention check questions correctly.

Table S1 shows that the period during which the surveys were conducted in each country differed slightly. Given the COVID-19 pandemic, this difference may have had a more pronounced impact than it would have had in ordinary times. In particular, since several disciplines, from medicine to the social sciences and humanities, were involved in solving the social problems caused by COVID-19, the responses may have been influenced to some extent by the COVID-19 situation in each country. However, at the time of the survey, all three countries were in a similar situation in the sense that they were in a serious crisis, with no substantial differences. In early February 2021, the number of infected persons in Japan increased rapidly, and a state of emergency was declared in four prefectures in the Tokyo metropolitan area. Similarly, in late February 2021 in the US, the number of infected persons increased rapidly, and aggressive infection control measures were implemented. In early March 2021, the number of infected persons in the UK was so high that a third lockdown was implemented.

The survey was approved by the Research Ethics Committee of the Graduate School of Human Sciences, Osaka University (HB019-103).

3.1 Overview of questionnaire items

The survey was created using Qualtrics. Table 1 presents an overview of the questions used for data analysis. The questionnaire also included other questions assessing the understanding of and attitudes toward research integrity; however, since these items appeared after the questions pertaining to science, they could not have affected the responses analyzed here.

Table 1: Overview of questionnaire items.

3.2 Demographics

The demographic attributes of the respondents included age, gender, and educational background.

3.3 Political ideology

Respondents were asked to rate their political ideology on an 11-point scale, with 0 indicating liberal, 5 indicating centrist, and 10 indicating conservative views. This question was used consistently across Japan, the US, and the UK, with a direct translation for Japanese respondents. A “Don’t know” option was also provided; it was chosen by 140 respondents in Japan, 118 in the UK, and 42 in the US, and their data were excluded from analyses involving political ideology.

3.4 Science or humanities person identity

Research on science identity, such as Chen et al. [2021], has shown that a strong science identity significantly influences performance and various attitudes toward science. Inspired by these findings, we included science and humanities identity in our study. Respondents were asked to rate their identity as a science or humanities person on a 5-point scale, with one indicating a strong identity as a science person and five indicating a strong identity as a humanities person. This was intended to capture their intuitive self-perceptions regarding their inclinations towards science or humanities.

3.5 Engaging with scientific research

To determine whether the respondents were currently involved in some scientific research activities such as data collection or analysis either as part of their job or as a hobby, we asked them to rate their level of involvement in “scientific research (involving some form of data collection and analysis)” on a 4-point scale with one indicating “not at all involved” and four indicating “very involved”.

3.6 Scientific knowledge

To determine the respondents’ level of basic scientific knowledge, we used 11 items commonly asked in surveys conducted worldwide, including Japan (Survey of Attitudes toward Science and Technology 2001), the UK (Eurobarometer 55.2), and the US (Science and Engineering Indicators 2002), regarding the understanding of basic science and technology concepts. For each question, respondents were asked to choose one of three options (True, False, Don’t know). The number of correct answers was tallied for each respondent, and the percentage of correct answers was calculated for each item. Since the survey was not meant to be a quiz competition, respondents were instructed to choose “Don’t know” if they did not know the right answer, rather than searching for it.
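For illustration only, the scoring described above could be implemented along the following lines in R (the analysis environment used for this study); the item names (k01–k11) and the answer key shown here are hypothetical placeholders, not the actual survey items.

# Minimal scoring sketch (not the authors' script); k01-k11 and the key are placeholders
key <- c(k01 = "True", k02 = "False", k03 = "True", k04 = "False", k05 = "True",
         k06 = "False", k07 = "True", k08 = "False", k09 = "True", k10 = "False",
         k11 = "True")

score_knowledge <- function(responses, key) {
  # "Don't know" never matches the key, so it is scored as incorrect
  correct <- sweep(as.matrix(responses[, names(key)]), 2, key, `==`)
  data.frame(n_correct   = rowSums(correct),         # tally per respondent
             pct_correct = rowMeans(correct) * 100)  # percentage correct per respondent
}

# Example: one fully correct respondent and one who always answered "Don't know"
# toy <- as.data.frame(as.list(key)); toy[2, ] <- "Don't know"; score_knowledge(toy, key)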

3.7 Research literacy

To determine the respondents’ basic literacy skills in scientific research, we developed seven new survey items. First, we thoroughly reviewed existing tools and studies on scientific literacy to identify the fundamental skills and knowledge areas essential for understanding scientific research. Notable studies reviewed include Meyer, Shanahan and Laugksch’s [2005] work on students’ conceptions of research, Laugksch and Spargo’s [1996] development of scientific literacy test items based on AAAS literacy goals, and Nuhfer et al.’s [2016] study using a concept inventory to assess the reasoning component of citizen-level science literacy. Additionally, Kolstø’s [2001] framework for understanding scientific literacy from the perspective of citizenship, particularly its focus on the limitations of science and the need for critical attitudes when addressing controversial socio-scientific issues, provides important insights into the broader implications of scientific literacy. Furthermore, Lederman et al.’s [2002] study on the “Views of Nature of Science” questionnaire offers a valuable approach to assessing learners’ conceptions of the nature of science, emphasizing the importance of accurately understanding the epistemological underpinnings of science for fostering deeper scientific literacy. Based on this review, we collaboratively brainstormed and developed a set of initial items. This collaborative effort ensured that the items covered several important aspects of scientific literacy. The initial items were reviewed and refined through multiple discussions among all co-authors. This iterative process involved evaluating the clarity, relevance, and coverage of each item, leading to the final set of seven items used in the survey. Respondents were asked to select the statements on scientific research that they thought were correct, and the number of correct answers (correct statements selected and incorrect statements not selected) was tallied.

Item analysis based on the collected data revealed that one item (“The more data (number of people or animals included in a study), the better”) had a significantly lower percentage of correct responses (19%) than the other six items (49–91%). This item also had a low correlation with the total score (r = .17, compared with r = .54–.64 for the other six items) and impaired the unidimensionality of the scale; it was therefore excluded from the analysis. After recalculating the total score for the remaining six items, each item’s correlation with the total was found to be sufficiently high (r = .53–.68). The eigenvalues of the correlation matrix for the six items suggest that they can be considered unidimensional.4
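For illustration, the item analysis described above (item-total correlations, removal of the weakest item, and an eigenvalue check of unidimensionality) could be sketched in R as follows; survey_data and the item names lit1–lit7 are hypothetical placeholders, not the actual variable names.

# Minimal item-analysis sketch (placeholder names; items scored 0/1)
items <- paste0("lit", 1:7)
X <- survey_data[, items]

# Correlation of each item with the total score
item_total <- sapply(items, function(i) {
  cor(X[[i]], rowSums(X), use = "pairwise.complete.obs")
})

# Drop the weakest item, then inspect the eigenvalues of the correlation matrix
# of the remaining six items to judge unidimensionality
X6 <- X[, setdiff(items, names(which.min(item_total)))]
eigen(cor(X6, use = "pairwise.complete.obs"))$values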

3.8 Trust in research results

Respondents were asked to rate the degree of trust they had toward research results in the fields of science and humanities from 0 (I do not trust them at all) to 100 (I trust them absolutely). Although trust is a multidimensional concept, we used a single-item scale to simplify the measurement and reduce the respondent burden. Single-item measures have been employed and tested in various large-scale survey studies on public perceptions of science and research. For example, Wintterlin et al. [2022] used a single-item question to measure trust in science. They noted that such measures have been shown to correlate strongly with multi-item measures of trust in science in other large-scale surveys.

3.9 Attention check

Two attention-check questions were included to ensure the quality of responses. Both questions specified the response options. The first attention check was embedded in the “Scientific knowledge” section, where respondents were asked to select “Don’t know” for the statement: “This question confirms that you are answering seriously. Please select ‘Don’t know’”. The second was included in the “Assessment of misconducts” section, where respondents were instructed to select “Not very problematic” for the statement: “Important: This question tests your attention. Please select ‘Not very problematic’”.
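A minimal sketch of how this attention-check screening could be applied in R is shown below; survey_data, att_check1, and att_check2 are hypothetical names rather than the actual variables.

# Keep only respondents who passed both attention checks (placeholder column names)
passed <- survey_data$att_check1 == "Don't know" &
          survey_data$att_check2 == "Not very problematic"
survey_clean <- survey_data[passed, , drop = FALSE]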

4 Results

4.1 Demographics

All data and analysis scripts (R 4.2.0) can be found on the Open Science Framework (https://osf.io/njp92/). Table S2 in the Supplementary material summarizes the respondents’ demographics for each country. As mentioned earlier, due to the sampling allocation, the Japanese sample is skewed towards younger respondents compared to the other two countries. Compared with the 2020 figures in the Organisation for Economic Co-operation and Development’s official statistics (Japan: 52.68%; the UK: 49.39%; the US: 50.06%) [Organisation for Economic Co-operation and Development, 2021], the percentage of respondents with a university degree was slightly lower in Japan but considerably higher in the UK and the US. However, no weighting correction was applied.

4.2 Descriptive statistics and intercountry comparisons

Table 2 presents the descriptive statistics (mean, standard deviation, and Pearson’s correlation coefficients) for the main variables calculated for each country. Tables S3 and S4 show the percentage of correct answers for each item of scientific knowledge and research literacy, respectively. Figure S1 shows the frequency distribution of political ideology, and Figure 1 shows the box and violin plot for public trust in scientific research.

Table 2: Means, standard deviations, and correlations (upper: Japan; middle: the UK; lower: the US). Means with different superscript letters differ significantly.


Figure 1: Box and violin plot of trust in research results (left: scientific research; right: humanities research).

Considering the age distribution bias toward younger respondents in the Japanese sample compared to the actual population ratio, analyses of covariance (ANCOVA) were performed to compare the means of each country, controlling for age.5 To avoid an increase in type I errors due to multiple testing, the significance level of the main effect was corrected to 0.05/7 = 0.007.
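For illustration, an ANCOVA of this kind with the corrected alpha could be set up in R roughly as follows; dat, trust_science, country, and age are hypothetical names, and this is a sketch rather than the authors’ actual script.

alpha_corrected <- 0.05 / 7            # = 0.007, the corrected significance level
# ANCOVA: age entered as a covariate before the country factor
fit <- aov(trust_science ~ age + country, data = dat)   # country must be a factor
summary(fit)                           # compare the country p-value against alpha_corrected
# Age-adjusted pairwise country contrasts could then be obtained, e.g. with
# emmeans::emmeans(fit, pairwise ~ country), if that package is available.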

Regarding political ideology, the comparisons revealed that the Japanese participants were significantly more conservative than those in the UK and US (see Figure S1 for frequency distribution). These results are consistent with a recent representative social survey in Japan that showed a higher propensity toward political conservatism among the Japanese than among other nationalities [Hanibuchi, 2022].

Japanese participants identified themselves as humanities persons more often than those in the UK and the US. The distribution of humanities identification in Japan is similar to that found in a previous social survey (humanities: 54.3%; science: 28.3%; unsure: 17.4%) [Hanibuchi, 2022].

The level of research literacy of Japanese participants was almost the same as that of participants in the UK and the US. Research literacy scores cannot be compared with previous studies because the exploratory items used for their measurement were developed specifically for this study. However, the weak but positive correlation with scientific knowledge in all surveyed countries suggests a certain degree of validity of the measure.

In contrast, the comparison showed that the level of scientific knowledge in Japan was significantly lower than that in the UK and the US. These results differ from those of the cross-national comparative survey conducted by MEXT in 2009 and instead follow the trend of the 2001 survey, with a rather large disparity observed in the present survey. After excluding one question (“Milk that has been contaminated by radiation is safe to drink after being boiled” [False]), which was not included among the ten questions used in the cross-national comparison, the mean percentages of correct answers for Japan, the UK, and the US were 54.4%, 72.8%, and 73.8%, respectively. This result may be attributed to the difference in the education levels of the respondents (63.8% of respondents graduated from university or graduate school in the UK and 68.1% in the US, compared to 47.1% in Japan). However, it could also be due to differences in attitudes toward the “Don’t know” option. In Japan, “Don’t know” was selected more frequently than in the UK and the US: as shown in Table S3, the average selection rates for the 11 items were 29.6%, 16.0%, and 17.4%, respectively, and for four of the 11 items, “Don’t know” was the most frequent response in Japan. In the survey conducted by NISTEP in 2009, the average percentages of “Don’t know” responses for the ten items were 23.2%, 19.3%, and 18.4%, respectively, indicating that more frequent “Don’t know” responses are a consistent tendency in Japan, although the gap between Japan and the other two countries is wider in the present survey. The Japanese have a significantly higher tendency to avoid uncertainty than people in many other countries [Hofstede, 2001]. When dealing with a question that has a definitively correct answer, they may be more likely to avoid making a clear choice and to select “Don’t know” if they feel their knowledge is uncertain.

Having reviewed the cross-national comparisons of the other variables, we now turn to the analysis directly related to our first research question: the unique characteristics of public trust in science in Japan compared to the UK and the US. Public trust in research results in Japan was lower than that in the UK and the US for both science and humanities research; it was also lower for humanities research than for science research. In the data collected from Japanese respondents by the Pew Research Center [Funk et al., 2020], the midpoint response (“Some”) accounted for the majority of responses. The violin plot in Figure 1 shows that this midpoint clustering is evident for trust in humanities research results (as indicated by the higher density around the value of 50) but less pronounced for trust in scientific research results; overall, the distribution of trust in science was shifted toward lower values in Japan compared with the UK and the US.

4.3 Predictors of trust in research results

Next, we focus on the analysis related to our second research question, which explores how the factors that influence trust in science differ from or overlap with those that influence trust in the humanities, in Japan and in comparison to the UK and the US, to understand the unique characteristics of Japan. Multiple regression analyses were conducted for each country to examine whether individual demographic attributes, understanding of science, science or humanities person identity, and current engagement in scientific research were predictors of public trust in scientific or humanities research. The dependent variable was “trust in research results” of science or humanities, and the independent variables included science or humanities person identity, engagement with scientific research, scientific knowledge, research literacy, and personal attributes such as age, gender, educational background, and political ideology. Table 3 summarizes the results for scientific research and Table 4 for humanities research. Table 5 presents a summary highlighting the similarities and differences in the results across the three countries.
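As a hedged illustration of the models just described, one regression per country could be fitted in R along the following lines; all variable names (trust_science, identity_sci_hum, and so on) are placeholders rather than the actual column names in the published dataset.

# Minimal sketch: trust in science research results (0-100) regressed on the predictors
fit_trust <- function(dat) {
  lm(trust_science ~ identity_sci_hum + engagement + sci_knowledge + research_literacy +
       age + gender + university_degree + ideology,
     data = dat)
}

# Example usage: one model per country, summarized separately
# models <- lapply(split(survey_data, survey_data$country), fit_trust)
# lapply(models, summary)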

First, a common characteristic shared by all three countries that can be inferred from the data on scientific research is that a higher educational background (university degree) and a higher level of scientific knowledge predict greater public trust in research results. This is consistent with the Pew Research Center findings for the UK and the US, though not for Japan, where that survey found no differences by education. Moreover, while a stronger identity as a science person and a liberal ideology elicited higher levels of trust in the UK and the US, in Japan these variables did not significantly affect public trust.

Table 3: Predictors of trust in research results (science): multiple linear regressions.

Table 4: Predictors of trust in research results (humanities): multiple linear regressions.

Table 5: Predictors of trust in research results (science/humanities): summary of findings.

Data on humanities research showed that a stronger identity as a humanities person elicited higher levels of trust in all surveyed countries. In contrast to scientific research, a higher level of scientific knowledge was not a predictor of public trust in research results. However, similar to scientific research, while a stronger humanities identity and a liberal ideology elicited higher levels of trust in the UK and the US, these variables did not significantly affect public trust in research results in Japan. Furthermore, as with science, trust in humanities research was higher among more educated respondents in the UK, but no such effect of education was found in the US or Japan.

While liberal ideology was found to be a predictor of public trust in research results in science and humanities in the UK and the US, even when controlling for the effect of other variables more closely related to science, this was not the case in Japan.

5 Discussion

The analysis of data collected from an online survey administered to participants in Japan, the UK, and the US showed that a higher level of scientific knowledge was associated with greater public trust in scientific research results in all surveyed countries. However, this is merely a correlation: given that self-identified science persons in the UK and the US showed higher levels of trust in scientific research results, they may simply have trusted more because they like science. Nonetheless, trust in scientific research results was not necessarily high among those involved in academic and scientific research.

Furthermore, a high level of scientific knowledge was not associated with trust in humanities research results. As the importance of integrating the humanities and science is increasingly recognized, a healthy development of science and technology cannot be expected if trust in humanities research results is disregarded. In this context, high levels of research literacy were linked to trust in both science and humanities research results in Japan and the US. However, as these findings do not apply to the UK, it is necessary to examine the factors influencing trust in science and humanities research in that country.

A further significant observation is that the level of research literacy in Japan was the same as that in the UK and the US, whereas the levels of scientific knowledge and public trust in research results were lower. Regarding factors associated with trust, a comparison of science and humanities showed somewhat different associations with the respondents’ educational backgrounds. Moreover, the results revealed that in Japan, unlike in the UK and the US, political ideology was not a predictor of public trust in research results. In the following paragraphs, we discuss the most noteworthy results in more detail.

The lower level of scientific knowledge in Japan observed in this study compared to the 2009 survey cannot be adequately explained by differences in the survey media, which were previously suggested to explain the differences between the results of the 2001 and 2009 surveys reported by NISTEP [Kuriyama et al., 2011]. Moreover, as the present survey sought responses to the same questions using almost the same format, it is difficult to explain the disparity in terms of measurement bias. Meanwhile, in Japan, a higher level of scientific knowledge was found to be a predictor of public trust in the results of research conducted through a scientific approach.

One research question of this study was to examine the characteristics of Japan in comparison with the UK and the US. In the regression analyses of trust in science and the humanities, political ideology had no effect whatsoever in Japan. This is notably different from the UK and the US, where political ideology had a significant influence. In Japan, political ideology was not a predictor of public trust in science or the humanities, and it showed little correlation with scientific knowledge or research literacy. While political ideology is often used as a parameter to explain individual differences in various psychological and behavioral variables in Europe and the US, not limited to the understanding of and trust in science, this is often not the case in Japan. These results require further exploration, including an examination of the possibility that the meaning of political ideology, as measured in Japan, differs from that in Europe and the US [Jou & Endo, 2016]. Future research should also explore individual difference variables other than knowledge and literacy that can increase or decrease trust in science in Japan, analogous to political ideology in the UK and the US.

Another research question was to compare factors associated with trust in academics between the disciplines of science and humanities. While there was a consistent association between greater trust in science and higher education, no such association was found for humanities in Japan and the US. This finding reflects the differences between science and humanities as academic disciplines. Teaching knowledge through education can steadily foster trust in science (to some extent), but this is not straightforward for the humanities. This should be considered when promoting interdisciplinary research.

This study has several limitations. One important limitation is that data were obtained through a Web survey. Web surveys offer many advantages over mail, telephone, and face-to-face surveys, including reduced costs, shorter field periods, faster data processing, and potential improvements in data quality [Bethlehem & Biffignandi, 2011]. However, the sample in a Web survey may not be adequately representative of the population. Unlike the random or systematic sampling often used in home-visit or mail surveys, Web surveys are limited to Internet users and, in this study, to those registered with crowdsourcing services. This bias toward Internet users means that a high level of digital literacy among Web survey respondents may influence their trust in science and technology. Digital literacy can facilitate easier access to reliable scientific information and enhance critical thinking skills [Gilster, 1997], potentially leading to higher trust in science and technology. Notably, this is only a potential effect, and the exact relationship between digital literacy and trust in science and technology requires further investigation.

Furthermore, as mentioned in the Results section, we cannot exclude the possibility that some of the differences detected across Japan, the UK and the US are due to country-specific response styles. Over the years, various surveys have observed that extreme options (extreme response style) are more likely to be chosen in Europe and the US, whereas middle options (middle response style) are more likely to be chosen in Japan; Harzing [2006] confirmed this in a cross-national survey of 26 countries. In this study, the middle option was selected more frequently in Japan than in the UK and the US for questions with a midpoint, such as the evaluation of trust in science and the humanities or political ideology. Although we could not actively adjust for this in the current study, it is necessary to be aware of the possibility of this bias, especially when interpreting the results related to average differences between Japan, the UK and the US.

Finally, the categorization of “science” and “humanities” relies on the conventional Japanese classification. For Japanese people, this dichotomy is natural and was considered the best way to contrast trust in scientific research results with trust in humanities research results. However, this dichotomy, particularly the inclusion of social sciences within the humanities, may have been perceived differently in the UK and the US compared to Japan. In future research, we aim to explore better and more universal categorizations.

6 Conflicts of interest

The authors declare no conflicts of interest associated with this manuscript.

Acknowledgments

This research was supported by AMED under Grant Number JP19oa0310006.

References

Allum, N., Sturgis, P., Tabourazi, D., & Brunton-Smith, I. (2008). Science knowledge and attitudes across cultures: a meta-analysis. Public Understanding of Science, 17(1), 35–54. https://doi.org/10.1177/0963662506070159

American Academy of Arts & Sciences. (2013). The heart of the matter: the humanities and social sciences for a vibrant, competitive, and secure nation. Cambridge, MA, U.S.A. https://www.aau.edu/sites/default/files/AAU%20Files/Key%20Issues/Humanities/Heart-of-the-Matter-The-Humanities-and-Social-Sciences-for-a-Vibrant-Competitive-and-Secure-Nation.pdf

Bauer, M. W., Allum, N., & Miller, S. (2007). What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda. Public Understanding of Science, 16(1), 79–95. https://doi.org/10.1177/0963662506071287

Bauer, M. W., & Falade, B. A. (2021). Public understanding of science: survey research around the world. In M. Bucchi & B. Trench (Eds.), Routledge handbook of public communication of science and technology (3rd ed., pp. 238–266). Routledge. https://doi.org/10.4324/9781003039242

Bethlehem, J., & Biffignandi, S. (2011). Handbook of web surveys. John Wiley & Sons. https://doi.org/10.1002/9781118121757

Chen, S., Binning, K. R., Manke, K. J., Brady, S. T., McGreevy, E. M., Betancur, L., Limeri, L. B., & Kaufmann, N. (2021). Am I a science person? A strong science identity bolsters minority students’ sense of belonging and performance in college. Personality and Social Psychology Bulletin, 47(4), 593–606. https://doi.org/10.1177/0146167220936480

Durant, J. R., Evans, G. A., & Thomas, G. P. (1989). The public understanding of science. Nature, 340(6228), 11–14. https://doi.org/10.1038/340011a0

Funk, C., Tyson, A., Kennedy, B., & Johnson, C. (2020). Science and scientists held in high esteem across global publics. Pew Research Center. Washington, DC, U.S.A. https://www.pewresearch.org/science/wp-content/uploads/sites/16/2020/09/PS_2020.09.29_global-science_REPORT.pdf

Gauchat, G. (2012). Politicization of science in the public sphere: a study of public trust in the United States, 1974 to 2010. American Sociological Review, 77(2), 167–187. https://doi.org/10.1177/0003122412438225

Gilster, P. (1997). Digital literacy. Wiley Computer Pub.

Hall Jamieson, K., Romer, D., Jamieson, P. E., Winneg, K. M., & Pasek, J. (2021). The role of non-COVID-specific and COVID-specific factors in predicting a shift in willingness to vaccinate: a panel study. Proceedings of the National Academy of Sciences, 118(52), e2112266118. https://doi.org/10.1073/pnas.2112266118

Hanibuchi, T. (Ed.). (2022). Mapping Japan’s major cities using a social survey [in Japanese]. Kokon Shoin.

Harzing, A.-W. (2006). Response styles in cross-national survey research: a 26-country study. International Journal of Cross Cultural Management, 6(2), 243–266. https://doi.org/10.1177/1470595806066332

Hofstede, G. (2001). Culture’s consequences: comparing values, behaviors, institutions, and organizations across nations (2nd ed.). SAGE Publications.

House of Lords. (2000). Science and society — Third report. House of Lords Select Committee on Science and Technology.

Inglehart, R., Haerpfer, C., Moreno, A., Welzel, C., Kizilova, K., Diez-Medrano, J., Lagos, M., Norris, P., Ponarin, E., & Puranen, B. (Eds.). (2022). World Values Survey: All Rounds — Country-Pooled Datafile. JD Systems Institute & WVSA Secretariat. https://doi.org/10.14281/18241.17

Irwin, A., & Wynne, B. (Eds.). (1996). Misunderstanding science? The public reconstruction of science and technology. Cambridge University Press. https://doi.org/10.1017/CBO9780511563737

Jou, W., & Endo, M. (2016). Generational gap in Japanese politics: a longitudinal study of political attitudes and behaviour. Palgrave Macmillan. https://doi.org/10.1057/978-1-137-50342-8

Kanda, Y., Nishikawa, K., Matsumoto, K., Okamura, A., & Igami, M. (2021). Digest of Japanese science and technology: Indicators 2021 [Research Material 311]. National Institute of Science and Technology Policy. https://www.nistep.go.jp/wp/wp-content/uploads/NISTEP-RM311-SummaryE.pdf

Khine, M. S., & Areepattamannil, S. (Eds.). (2019). STEAM education: theory and practice. Springer. https://doi.org/10.1007/978-3-030-04003-1

Kolstø, S. D. (2001). Scientific literacy for citizenship: tools for dealing with the science dimension of controversial socioscientific issues. Science Education, 85(3), 291–310. https://doi.org/10.1002/sce.1011

Kuriyama, T., Sekiguchi, H., Otake, Y., & Chayama, H. (2011). International comparison of the public attitudes towards and understanding of science and technology — comparative study of Internet survey in Japan, the United States of America, and the United Kingdom [Research Material 196] (in Japanese). National Institute of Science and Technology Policy. http://hdl.handle.net/11035/883

Laugksch, R. C., & Spargo, P. E. (1996). Development of a pool of scientific literacy test-items based on selected AAAS literacy goals. Science Education, 80(2), 121–143. https://doi.org/10.1002/(sici)1098-237x(199604)80:2%3C121::aid-sce1%3E3.0.co;2-i

Lederman, N. G., Abd-El-Khalick, F., Bell, R. L., & Schwartz, R. S. (2002). Views of nature of science questionnaire: toward valid and meaningful assessment of learners’ conceptions of nature of science. Journal of Research in Science Teaching, 39(6), 497–521. https://doi.org/10.1002/tea.10034

Lupia, A., Allison, D. B., Hall Jamieson, K., Heimberg, J., Skipper, M., & Wolf, S. M. (2024). Trends in US public confidence in science and opportunities for progress. Proceedings of the National Academy of Sciences, 121(11), e2319488121. https://doi.org/10.1073/pnas.2319488121

Mede, N. G., Schäfer, M. S., Ziegler, R., & Weißkopf, M. (2021). The “replication crisis” in the public eye: Germans’ awareness and perceptions of the (ir)reproducibility of scientific research. Public Understanding of Science, 30(1), 91–102. https://doi.org/10.1177/0963662520954370

Meyer, J. H. F., Shanahan, M. P., & Laugksch, R. C. (2005). Students’ conceptions of research. I: A qualitative and quantitative analysis. Scandinavian Journal of Educational Research, 49(3), 225–244. https://doi.org/10.1080/00313830500109535

Miller, J. D. (1998). The measurement of civic scientific literacy. Public Understanding of Science, 7(3), 203–223. https://doi.org/10.1088/0963-6625/7/3/001

National Academies of Sciences, Engineering, and Medicine. (2018). The integration of the humanities and arts with sciences, engineering, and medicine in higher education: branches from the same tree (D. Skorton & A. Bear, Eds.). The National Academies Press. https://doi.org/10.17226/24988

Nuhfer, E. B., Cogan, C. B., Kloock, C., Wood, G. G., Goodman, A., Delgado, N. Z., & Wheeler, C. W. (2016). Using a concept inventory to assess the reasoning component of citizen-level science literacy: results from a 17,000-student study. Journal of Microbiology & Biology Education, 17(1), 143–155. https://doi.org/10.1128/jmbe.v17i1.1036

Okamoto, S., Niwa, F., Shimizu, K., & Sugiman, T. (2001). The 2001 survey for public attitudes towards and understanding of science & technology in Japan [NISTEP Report no. 72] (in Japanese). National Institute of Science and Technology Policy. https://nistep.repo.nii.ac.jp/?action=repository_action_common_download&item_id=4385&item_no=1&attribute_id=13&file_no=3

Organisation for Economic Co-operation and Development. (2021). Education at a glance 2021: OECD indicators. OECD Publishing. https://doi.org/10.1787/b35a14e5-en

Reif, A., & Guenther, L. (2021). How representative surveys measure public (dis)trust in science: a systematisation and analysis of survey items and open-ended questions. Journal of Trust Research, 11(2), 94–118. https://doi.org/10.1080/21515581.2022.2075373

Strydhorst, N. A., & Landrum, A. R. (2022). Charting cognition: mapping public understanding of COVID-19. Public Understanding of Science, 31(5), 534–552. https://doi.org/10.1177/09636625221078462

The Network for the Public Communication of Science and Technology. (2021, December 9). Science communication and trust in science, scientists, and science institutions [Webinar]. https://www.pcst.network/webinar-science-communication-and-trust-in-science-scientists-and-science-institutions/

The Royal Society. (1985). The public understanding of science. The Royal Society. London, U.K. https://royalsociety.org/news-resources/publications/1985/public-understanding-science/

Weingart, P., & Guenther, L. (2016). Science communication and the issue of trust. JCOM, 15(05), C01. https://doi.org/10.22323/2.15050301

Wintterlin, F., Hendriks, F., Mede, N. G., Bromme, R., Metag, J., & Schäfer, M. S. (2022). Predicting public trust in science: the role of basic orientations toward science, perceived trustworthiness of scientists, and experiences with science. Frontiers in Communication, 6, 822757. https://doi.org/10.3389/fcomm.2021.822757

Wolff, S. M., Breakwell, G. M., & Wright, D. B. (2024). Psychometric evaluation of the Trust in Science and Scientists Scale. Royal Society Open Science, 11(4), 231228. https://doi.org/10.1098/rsos.231228

Notes

1. The respective percentages (excluding missing answers) were as follows: 23%, 57%, 10%, 1%, and 8% among Japanese respondents; 42%, 37%, 11%, 7%, and 4% among the UK respondents; and 38%, 39%, 12%, 9%, and 2% among the US respondents.

2. The groups were divided by gender (female, male) and age (18–19, 20–29, 30–39, 40–49, 50–59, 60–70) with 50 respondents in the 18–19 age group and 110 respondents in each of the other groups.

3. The sample was stratified by gender (female, male), age (18–27, 28–37, 38–47, 48–57, 58+), and ethnicity (Asian, black, mixed, other, white).

4. The eigenvalues were 2.30, 1.07, 0.75, 0.70, 0.61, and 0.57.

5. Analyses of variance (ANOVA) without controlling for age were also performed, and the results were largely unchanged. This suggests that the age bias in the Japanese data was not a significant concern.

About the authors

Asako Miura is a Professor at the Graduate School of Human Sciences, Osaka University, Japan.

E-mail: yk581111@gmail.com X: @asarin

Mei Yamagata is an Assistant Professor at the Faculty of Culture and Information Science, Doshisha University, Japan.

E-mail: yamagatamei7@gmail.com

Jin Higashijima is an Associate Professor at the Graduate School of Global and Transdisciplinary Studies, Chiba University, Japan.

E-mail: jhigashi@chiba-u.jp

Toshiya Kobayashi is an Associate Professor at the School of Interdisciplinary Science and Innovation, Kyushu University, Japan.

E-mail: kobayashi.toshiya.303@m.kyushu-u.ac.jp

Masaki Nakamura is a Professor at the Center for Education in Liberal Arts and Sciences, Osaka University, Japan.

E-mail: n.masaki.celas@osaka-u.ac.jp X: @nmasaki

Supplementary material

Available at https://doi.org/10.22323/2.23080203
Table S1. Overview of surveys
Table S2. Demographics of respondents
Table S3. Frequency distribution and mean percentage of correct answers for each item of scientific knowledge
Table S4. Percentage of correct answers for each item of research literacy
Figure S1. Frequency distribution of political ideology