1 Introduction
Artificial Intelligence (AI) has emerged as a rapidly advancing technology and industry in Taiwan, attracting significant investments from both the government and corporations for diverse applications. However, public perceptions remain noticeably divided. While some align with institutions such as governments, corporations, and the science community, believing that integrating AI into public services can lead to a fairer and more convenient society, others strongly oppose AI. The opposition is fueled by substantial evidence suggesting that widespread AI implementation can result in existential crises, including massive unemployment, totalitarianism, social class rigidity, and injustice [Chang, Liao, Chao, Liu & Lee, 2024; Chiu, Zhu & Corbett, 2021; Lin, Tian & Cheng, 2024; Xu & Wang, 2021]. This paper seeks to explore the factors that effectively predict the lay public’s perceptions of the risks and benefits associated with AI in Taiwan.
Divergent attitudes towards AI have become particularly pronounced during the ongoing COVID-19 pandemic. With an approval rating nearing 80%, the Taiwanese government has leveraged AI and big data analysis to manage and track citizens’ travel histories [Shan, 2020]. Chen et al. [2020] attribute Taiwan’s success in keeping infected cases below 450 by early May 2020 to the integration of AI services, such as the Internet of Things, real-time tracking via mobile phones, and thermometers measuring and reporting citizens’ body temperatures in public venues, into hospital and case reporting systems.
While Taiwan has reaped benefits from the government’s adoption of AI for COVID-19 control and prevention, there is resistance from the public against further incorporation of AI for broader purposes. Yeh et al. [2021] highlight, through their survey study, that people in Taiwan, on the whole, hold an optimistic attitude towards AI. They believe that AI has the capability to enhance human quality of life in health, education, and technology. However, concerns arise regarding the adoption of AI for broader purposes until there is assurance that it will not pose threats to the environment or human sustainability and that clear ethical guidelines are in place. Similar concerns are supported by Liu et al.’s [2022] research, which indicates that despite acknowledging AI’s potential technological benefits, healthcare professionals and patients in Taiwan are reluctant to use AI in medical settings due to its lack of explainability.
Several studies employing the technology acceptance model analyze the perceptions of the Taiwanese population towards AI [Chiu et al., 2021; Huang, Hsieh, Li, Chang & Fan, 2020; Lin & Xu, 2022; Lin et al., 2024]. The findings suggest that perceived usefulness and ease of use do not necessarily lead to intentions to use AI, especially when there are doubts about AI’s adherence to legal norms and subjective norms.
AI benefit and risk perceptions in Taiwan remain understudied, particularly from a political spectrum perspective. This research aims to predict the lay public’s attitudes towards AI using political ideologies as predictive variables, considering motivated reasoning [McCright, Dentzman, Charters & Dietz, 2013]. People’s existing political stances on issues such as immigration policy, equality, and government power balance may extend to their attitudes towards novel technology. For instance, Pechar, Bernauer and Mayer [2018] point out that political orientations influence the public’s views on AI, with liberals expressing concerns about potential negative impacts, while conservatives focus on the advantages revolutionary technologies could bring to human society. One of the study’s objectives is to bridge this gap in the literature and understand how liberalism and conservatism influence people’s benefit and risk perceptions of AI, as well as their support for AI regulations.
Contrary to studies advocating that people’s attitudes towards technology are determined by pre-existing political ideologies, other research suggests that attitudes towards technology are rational and depend on individuals’ news consumption preferences and technological knowledge. Gherheș and Obrad’s [2018] survey emphasizes that disparate attitudes towards AI stem from individuals’ understandings of it, with participants’ school majors significantly predicting their perceptions. Science majors tend to be optimistic about strong AI applications, while humanities majors approach AI cautiously due to uncertainties about potential negative impacts on their careers. Chen and Wen [2019] indicate that regular consumers of science news express more faith in AI, whereas those consistently consuming political news tend to have low trust in corporations and doubt their benign use of AI.
Building on the understanding that attitudes towards technology are shaped by various factors, including news consumption and knowledge levels, the present investigation extends beyond political ideologies. A further objective of this research is therefore to investigate how people’s perceptions of AI and their attitudes towards regulations are influenced not only by political awareness but also by exposure to science news and acquisition of science knowledge.
2 Literature review
2.1 Political ideologies and science perceptions
Scientific facts are expected to be trustworthy and contribute to fostering public consensus on issues and policymaking. However, there are instances where societies become more divided despite the presence of scientific evidence. Social sciences have endeavored to comprehend why objective scientific facts often fail to promote social unanimity.
One frequently employed framework is the theory of motivated reasoning, asserting that even scientifically tested facts are not perceived as entirely objective [Taber, Cann & Kucsova, 2009]. Individuals interpret and understand science in their own ways [Lupia, McCubbins & Popkin, 2000]. Kunda [1990] argues that political orientations influence people’s interpretations and understanding, resulting in more polarized public opinions on scientific topics like climate change and genetically modified food when these issues become official political agendas.
Taber and Lodge [2006] highlight that political ideology shapes individuals’ responses to controversies through confirmation bias, where selective attention, exposure, partial apprehension, and recall mechanisms hinder them from impartially seeking and absorbing information that challenges their existing opinions. In the face of new information, those with rigid beliefs resist altering their viewpoints, often distorting ideologically dissonant evidence to support their perspectives. Behavioral scientists note stronger backfire effects, such as source derogation and forgery, which align with motivated reasoning [Kahan, 2013; Byrne & Hart, 2009].
Mooney [2012] contends that skepticism within the science community is inherently linked to the conservatism-liberalism political spectrum. Gauchat’s survey [2012] reveals an increasing number of self-identified conservatives expressing skepticism toward scientific evidence cited by the U.S. government over the past thirty years, particularly regarding climate issues. Conversely, liberals show minimal signs of radicalization.
Nam, Jost and Van Bavel [2013] assert that political orientation influences trust in science through information processing. Conservatives tend to rely on heuristics, trusting information from authorities and experts, and having confidence in their own information efficacy and instincts. Liberals, on the other hand, are more likely to employ systematic processing, being critical of expert opinions and seeking information from diverse sources. However, McCright et al. [2013] argue that this dichotomous view is overly simplistic.
Examining the relationship between political ideology and perceptions of science, McCright et al. [2013] conclude that conservatives are not universally opposed to all sciences and scientists. Their responses are most pronounced towards “reflexive science”, which is dedicated to examining the detrimental impact of modern science on society: conservatives tend to doubt its validity, perceiving it as serving the political purposes of liberal parties. Liberals also exhibit skepticism toward specific scientific fields, such as chemistry and fast-food science, believing these contribute to mass compromise, environmental harm, and the concentration of power and wealth in the hands of the privileged.
Approaching the impact of political ideology on science perceptions from the theory of motivated reasoning, rather than the heuristic-systematic model of information processing (HSM), Pechar et al.’s [2018, p. 296] research aligns with the conclusions of McCright et al. [2013]. It emphasizes that “resistance to change” is a form of motivated reasoning, where individuals’ prior preferences shape how they understand new information. Most individuals rely on social values, identities, and worldviews to interpret information, determining whether they accept scientific information based on their orientation towards the information source and its implications for their cultural values and identities.
H1: Political liberalism relates a) positively to benefit perceptions of AI and b) negatively to risk perceptions of AI.
RQ1: How is political ideology related to AI regulation support?
2.2 Media use
Individuals’ worldviews are significantly influenced by the information to which they are exposed, as the media play a crucial role in framing and describing novel technologies [McCright et al., 2013]. Chuan, Tsai and Cho [2019] conducted an analysis of the content coverage of AI in major U.S. newspapers, including USA Today, The New York Times, Los Angeles Times, New York Post, and Washington Post, over the past decade. Their findings reveal that AI received minimal attention until 2016, with around 100 articles discussing AI by the end of 2015. However, this number skyrocketed to 800 by 2018, coinciding with the broader application of AI. According to their analysis, the perceived benefits of AI outweighed the perceived risks, leading to an overall optimistic outlook on the future of AI in mainstream media by the end of 2018.
In contrast to science professionals, laypeople are more susceptible to accepting content from the media that exaggerates and frames the disastrous consequences of emerging technologies as inevitable threats. This is because fear-mongering media tends to attract a larger audience, particularly for such topics [Bucchi & Trench, 2008; Allan, 2002]. Research indicates a shift in the media’s attitudes towards AI, with Tussyadiah and Miller’s survey [2019] highlighting how mass media portrays AI as a potentially destructive invention. The media often emphasizes crises related to job losses, cyber-attacks, decreasing control over personal data and privacy, enhanced monitoring capabilities for companies and governments, and further marginalization of minority groups, including the uneducated, the poor, peripheral ethnic groups, and LGBT groups.
Chen and Wen’s [2019] investigation into the impact of different types of news consumption (e.g., political, scientific, generic) on perceptions of AI reveals that increasing perceptions of AI risks are linked to science news consumption. Those who regularly consume science news are more likely to trust AI, but distrust AI scientists if they lack faith in government and corporations. Given the high costs associated with AI developments, which necessitate financial support from the government and corporations, there is a perception that AI scientists may prioritize the interests of these entities over transparent reporting of potential risks. This skepticism is further fueled by the belief that governments and corporations lack discipline and respect for ethical considerations. Building on the insights from the aforementioned research, we present the following hypothesis and research question.
H2: Science news consumption relates a) positively to benefit perceptions of AI and b) negatively to risk perceptions of AI.
RQ2: How is science news consumption associated with AI regulation support?
2.3 Science knowledge
Knowledge serves as a catalyst for garnering public support for scientific development by alleviating anxiety and uncertainty associated with the incorporation of unknown technology into everyday life [Brossard & Shanahan, 2003; Bradshaw & Borchers, 2000]. Perceived knowledge is defined as the self-reported level of mastery of knowledge in a specific domain [Cui & Wu, 2019]; it is thus a subjective self-assessment of one’s technology and science literacy. Individuals with a higher perceived knowledge level are more likely to embrace emerging scientific applications promoted by governments, because those confident in their technology literacy tend to focus on the perceived benefits of controversial science and believe that potential drawbacks can be controlled through measures endorsed by existing research [Chen & Wen, 2021].
In evaluating citizens’ ability to discern disinformation related to controversial technologies, content knowledge becomes crucial. Content knowledge refers to theory-based understandings of scientific and natural laws, as well as techniques acquired through formal education and curricula [Stebner et al., 2022]. Jho, Yoon and Kim [2014], in a study on the expansion of the Gori nuclear power plant in South Korea, found that participants with a thorough science training background were more likely to base their decisions on both objective evidence and reasoning. In contrast, those with less science training tended to rely on moral values in their decision-making [Allum, Sturgis, Tabourazi & Brunton-Smith, 2008]. This conclusion aligns with other empirical research on the decision-making process. Means and Voss [1996], as well as Venville, Rennie and Wallace [2004], emphasize that content knowledge plays a more critical role than contextual knowledge (i.e., the capability of approaching scientific developments from localized cultural and social perspectives) and perceived knowledge in identifying technical problems. It also contributes to cultivating public support for scientific research and government involvement in technology development [Cui & Wu, 2019; Lewis & Leach, 2006]. Therefore, we present the next hypothesis and questions as follows.
RQ3: How is perceived knowledge associated with the benefit and risk perceptions of AI as well as AI regulation support?
H3: Content knowledge is positively associated with AI regulation support.
RQ4: How is content knowledge related to the benefit and risk perceptions of AI?
2.4 Scientific authority
Scientific authority is a predisposition influencing individuals’ responses to technical debates [Cui & Wu, 2019]. Those with a strong respect for scientific authority tend to view science as a source of politically unbiased truths. When confronted with scientific controversies, they are inclined to trust established scientific experts rather than formulating their own opinions or relying on their political intuition.
Camporesi, Vaccarella and Davis [2017] define respect for scientific authority as a tendency to “believe, endorse, and enact expert advice”. Empirical studies [Dohle, Wingen & Schreiber, 2020; Chen & Wen, 2019] suggest that public perceptions of scientific authority (e.g., scientists, scientific communities, science professionals) have much more significant effects on public acceptance of emerging technologies (e.g., targeted advertising, eHealth, smart devices, personalized social media) than the technologies themselves. Critical factors impacting the public’s deference to scientific authority encompass whether scientists transparently reveal their sponsors and conflicts of interest, prioritize society’s collective benefits and equity, make it their goal to maximize human well-being, conform to ethical guidelines in the process of research and development, and endeavor to prevent new technology from being misused or abused, especially by influential corporations and government.
Science gains authority and public respect because it is seen as a source of politically unbiased truths. However, some researchers argue that critical emerging technology development projects worldwide tend to be driven by governments through collaboration with science communities (i.e., scientists, research institutions, universities) [Bae & Lee, 2020]. With substantial investments flowing into the science community from governments, it becomes challenging to maintain that science built by the science community is always politically unbiased. Therefore, some research considers the science community more as an extension of governmental power than as an authority independent of political influences [Chen & Wen, 2021; Pechar et al., 2018].
Given the fundamental and nuanced role the science community plays between government and the lay public, how the lay public perceives the science community is vital in investigating the relationship between political ideology and perceptions of AI. This paper will discuss how scientific authority functions as a covariate in this interplay.
3 Method
3.1 Sample
Participants, none of whom had received formal education in AI or worked as AI professionals (e.g., AI engineers or programmers), were drawn from the database of the institution with which the authors were affiliated. The survey was conducted from February 1, 2020, to February 29, 2020. A total of 502 participants, all having the required experience with narrow AI (defined as artificial intelligence systems specialized and trained for specific tasks or a limited set of tasks), were successfully surveyed. This group comprised 285 males and 217 females. The average age was 43 (). In terms of education, 43% held a bachelor’s degree, and 36% had a postgraduate degree. Regarding marital status, 56% were married, while 44% were not. In terms of employment, 79% worked full-time, and 8% were retired. Residents of major municipalities such as Taipei, New Taipei, Taoyuan, Taichung, Tainan, and Kaohsiung constituted 76% of the total sample.
3.2 Operationalization
Science news consumption: seven items were used to measure participants’ attention to science news provided by TV news, newspapers, websites of print news, online news agents, Facebook, LINE, and YouTube, with a five-point Likert scale (1 = nearly no attention, 5 = a great amount of attention). The outcomes of principal component factor analysis with varimax rotation indicated that the seven items form an index of science news consumption, and thus the seven items were averaged (, , ).
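To make the index construction concrete, the sketch below shows how such an averaged composite and its internal consistency (Cronbach’s alpha) could be computed in Python; the data frame and column names are hypothetical, and the unrotated PCA check only approximates the varimax-rotated solution reported above rather than reproducing the authors’ analysis.

```python
import pandas as pd
from sklearn.decomposition import PCA

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical column names for the seven attention items (1-5 Likert scale).
NEWS_ITEMS = ["tv", "newspaper", "print_web", "online_news",
              "facebook", "line", "youtube"]

def build_science_news_index(df: pd.DataFrame) -> pd.Series:
    items = df[NEWS_ITEMS]
    # A dominant first principal component suggests the items form one index.
    pca = PCA().fit(items)
    print("Explained variance ratios:", pca.explained_variance_ratio_.round(2))
    print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
    return items.mean(axis=1)  # averaged index, still on the 1-5 scale
```

The same averaging logic would apply to the benefit, risk, regulation support, and scientific authority composites described below.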
Perceived knowledge: one item (“How much do you think you know about AI?”) was borrowed from Cui and Wu [2019] with a five-point Likert scale (1 = not at all, 5 = very much; , ).
Content knowledge: sixteen items were adopted from Pega Systems Inc. [2017], including “AI refers to human-shaped robots alone” (false); “AI cannot deal with what it has never encountered” (false); “Due to controversy, AI has not been applied to daily technology” (false); “A gadget with AI means it possesses human consciousness” (false); “Which of the following is/are AI?” with five options provided: a) machine learning (true), b) artificial neural network (true), c) deep learning (true), d) natural language processing (true), e) none of above (false); and “What do you think AI at present can do?” with eleven options provided: a) ability to learn (true), b) to solve problems (true), c) to interpret speech (true), d) to replicate human interaction (true), e) to think logically (true), f) to play games (true), g) to run surveillance on people (true), h) to replace human jobs (true), i) to feel emotion (false), j) to control your mind (false), and k) to take over the world (false). One mark was assigned to each correctly answered item; the sixteen items summed to a full mark of sixteen (, ).
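As an illustration of how such quiz-style responses could be scored, the sketch below sums correct answers against an answer key; the item labels, response format, and per-option scoring are assumptions made for illustration, not the instrument’s published coding scheme.

```python
import pandas as pd

# Hypothetical answer key: one boolean entry per scored knowledge item, with
# each item assumed to be stored as a True/False response column.
ANSWER_KEY = {
    "robots_only": False,          # "AI refers to human-shaped robots alone"
    "cannot_handle_new": False,    # "AI cannot deal with what it has never encountered"
    "not_applied_daily": False,    # "Due to controversy, AI has not been applied..."
    "has_consciousness": False,    # "A gadget with AI means it possesses human consciousness"
    "can_learn": True, "can_solve_problems": True, "can_interpret_speech": True,
    "can_replicate_interaction": True, "can_think_logically": True,
    "can_play_games": True, "can_run_surveillance": True, "can_replace_jobs": True,
    "can_feel_emotion": False, "can_control_minds": False, "can_take_over_world": False,
    # the "Which of the following is/are AI?" question would be keyed analogously
}

def score_content_knowledge(responses: pd.DataFrame) -> pd.Series:
    """One point per correctly answered item; the maximum equals len(ANSWER_KEY)."""
    correct = pd.DataFrame({item: responses[item] == key
                            for item, key in ANSWER_KEY.items()})
    return correct.sum(axis=1)
```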
Benefit perceptions of AI: five items were adopted with a five-point Likert scale (1 = highly disagree, 5 = highly agree) from Cui and Wu [2019]: a) AI will make life more convenient; b) AI will lower the cost of living; c) AI will solve the problems facing human society; d) AI’s advantages should not be underestimated; and e) AI will affect future generations of mankind. The outcomes of principal component factor analysis with varimax rotation suggested that the five items form an index of benefit perceptions, and hence the five items were averaged (, , ).
Risk perceptions of AI: three items with a five-point Likert scale (1 = highly disagree, 5 = highly agree) were borrowed from Wang [2017] to assess the participants’ perceived risks of AI: a) AI will change humans’ standards of living; b) AI will threaten human society; and c) AI will challenge the continuity of human society. The outcomes of principal component factor analysis with varimax rotation revealed that the three items form an index of risk perceptions, and therefore the three items were averaged (, , ).
AI regulation support: four items with a five-point Likert scale (1 = highly disagree, 5 = highly agree) were adapted from Wang [2017] to evaluate participants’ support of governmental intervention and policy: a) the Taiwanese government should issue policies to guide AI development; b) an international treaty should exist to manage AI development; c) an agreement should exist in the scientific domain to regulate AI research and development; and d) policies should exist to guide AI’s commercial development. The outcomes of principal component factor analysis with varimax rotation stated that the four items form an index of regulation support, and consequently the four items were averaged (, , ).
Respect for science authority: four items with a five-point Likert scale (1 = highly disagree, 5 = highly agree) were adopted from Cui and Wu [2019]: a) scientists know best what is good for the public; b) it is important for scientists to get research done even if they displease people by doing it; c) scientists should do what they think is best, even if they must persuade people; and d) scientists should make the decisions about AI scientific research. The outcomes of principal component factor analysis with varimax rotation noted that the four items form an index of respect for science authority, and as a result the four items were averaged (, , ).
Political ideology: eight items with a five-point Likert scale (1 = highly disagree, 5 = highly agree) were borrowed from Hsu, Huang and Hwang [2019] to characterize participants’ conservatism and liberalism. Scores were summed up to form an index of political orientations (most conservative = 8, most liberal = 40, , , ). The eight items were respectively eight controversial referendum agendas as follows: a) abolition of the death penalty, b) legalization of same-sex marriage, c) establishment of legitimate red-light districts, d) legalization of euthanasia, e) permanent termination of the fourth nuclear power plant, f) decriminalization of adultery, g) amendment to the constitution to change the nation’s name to Taiwan, and h) reform of military service from compulsory to voluntary.
One of the most commonly employed theories to characterize the Western political spectrum is the moral foundations theory proposed by Haidt and Graham [2007], where the five foundations are the care/harm; fairness/cheating; loyalty/betrayal; authority/subversion; and sanctity/degradation foundations. Later, Graham, Haidt and Nosek [2009], focusing on function rather than content, categorized these foundations into two groups: individualizing concerns and binding concerns. Concerns related to individualization, including the foundations of care/harm and fairness/cheating, focus on considering the individual as the central point of moral value, along with a focus on the rights and well-being of individuals. Concerns categorized as binding, comprising loyalty/betrayal, authority/subversion, and sanctity/degradation foundations, emphasize groups as the focal point of moral value and the preservation of existing social ethics. Empirical findings indicate that liberals’ moral concerns primarily align with individualizing foundations, while conservatives’ moral concerns encompass both individualizing and binding foundations, highlighting a subcultural distinction between American liberals and conservatives [Graham et al., 2013].
Supported by empirical evidence that the moral foundations theory provides a framework to characterize the moral intuition patterns by which individuals approach and respond to public issues, Day, Fiske, Downing and Trail [2014] indicate liberals and conservatives exhibit distinct moral orientations, with liberals emphasizing the principles of harm and fairness, while conservatives prioritize ingroup loyalty, authority, and purity. Liberals demonstrate a pronounced alignment with values centered around mitigating harm and ensuring fairness. They readily endorse statements emphasizing compassion for the suffering and advocating for equitable treatment in laws and governance. In contrast, conservatives lean towards affirming notions of ingroup loyalty, deference to authority, and the preservation of purity. They prioritize loyalty to one’s group over individual concerns, respect for traditional authorities in lawmaking, and the promotion of virtuous living through governmental support. This divergence in moral foundations underscores the ideological differences between liberals and conservatives, shaping their respective approaches to governance and societal values.
Hsu et al. [2019] argue that the individualizing and binding concerns, as per the moral foundations theory developed from an individualistic and Christian society, do not comprehensively capture the political spectrum of Taiwan, which is characterized by a highly collective and Confucian societal framework [Wu, 2013]. Therefore, the authors suggest the utilization of three indices: resistance to change and endorsement of inequality, as recommended by Jost, Glaser, Kruglanski and Sulloway [2003], plus desired distance from China, along with traditional Confucian values. In the context of Taiwan, conservatism is defined as resistance to change, support for inequality, and adherence to Confucianism and traditional Chinese values as the dominant cultural influences. On the other hand, liberalism in Taiwan is characterized by a commitment to change, pursuit of equity, and a belief in cultural and social diversity.
In order to quantify the political spectrum of Taiwan, Hsu et al. [2019] propose eight items derived from Taiwan’s recent major referendums, challenging the conservative moral foundations, such as the legalization of same-sex marriage and euthanasia. Our study adopted these items from Hsu et al. [2019] to explore and analyze the political landscape in Taiwan.
4 Results
In the past year, participants reported using various AI services: email spam filters (63%), predictive search terms (67%), voice assistants (39%), online virtual assistants (57%), Facebook-recommended news (47%), online shopping recommendations (50%), home virtual assistants (7%), and reverse image searching (5%).
Dummy codes were initially assigned to the following independent variables: gender (0 = women), city of residence (0 = municipalities), marital status (0 = married), political party preference (0 = other parties), and employment (0 = non-full-time). These control variables were included in the regression analysis conducted in SPSS 21, with risk and benefit perceptions of AI, along with AI regulation support, set as the dependent variables. The results are presented in Table 1 below.
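A minimal sketch of this kind of analysis is shown below, using Python’s statsmodels rather than SPSS; the file name, variable names, and model specification are illustrative assumptions, not the authors’ exact syntax.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical file holding the survey responses

# Dummy-coded controls (reference categories coded 0, as described above)
# plus the focal predictors from the operationalization section.
controls = ("C(gender) + C(residence) + C(marital_status) + "
            "C(party_preference) + C(employment) + age + education")
predictors = ("political_ideology + science_news + perceived_knowledge + "
              "content_knowledge + science_authority")

models = {}
for outcome in ["benefit_perceptions", "risk_perceptions", "regulation_support"]:
    formula = f"{outcome} ~ {predictors} + {controls}"
    models[outcome] = smf.ols(formula, data=df).fit()
    print(models[outcome].summary())  # coefficients, p-values, adjusted R-squared
```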
Our findings suggest that political ideology and party inclination are less indicative of AI perceptions than we anticipated. Political ideology is not significantly related to benefit perceptions (H1a is not supported) but is negatively related to risk perceptions (H1b is supported).
Upon comparing our results with previous research, we speculate that the predictiveness of political ideology on AI perceptions and public support for AI regulations (RQ1) may depend on the type of AI. When it comes to narrow AI, liberalism is predictive, and conservatism is not. However, as the intensity of AI applications increases, conservatism becomes predictive. Our results align with Cui and van Esch [2022], who studied the correlation between political orientations and the benefit and risk perceptions of AI-enabled checkouts in the U.S. AI-powered self-checkout systems are automated solutions employed in retail environments, enabling customers to independently scan, bag, and pay for their purchases without cashier intervention. Leveraging AI technology, these systems streamline the checkout procedure, enhancing operational efficiency and reducing labor expenses for retailers. Their results indicated that liberals tended to perceive AI-enabled checkouts as less risky, while conservatives were indifferent. Most AI perception research is currently centered around the American context, given the United States’ prominent position in the integration of narrow AI into daily services. Taiwan is still in the early stages of integrating narrow AI into daily life. However, as Taiwan catches up with the level of AI integration seen in the United States, it is likely that findings from current American research will also be applicable in Taiwan.
According to Cui and van Esch’s interpretation [2022], individuals with distinct political leanings are concerned about autonomy (i.e., power over one’s own outcomes). In situations where AI applications do not compromise autonomy, such as using AI-enabled checkouts, conservatives do not express significant preferences, whereas liberals, due to their inherently favorable stance towards new technology and the non-infringement on their autonomy, tend to perceive AI-enabled checkouts as less risky. As narrow AI applications do not bring about revolutionary convenience or change, people of any political inclination do not experience an increase in benefit perceptions.
However, as the level of AI application rises, such as when AI is integrated into self-driving cars, Peng [2020] found noticeable differences in AI perceptions between conservatives and liberals in a U.S. survey study. Conservatives expressed significant concerns and strongly supported strict regulation policies for driverless vehicles. Similar findings were supported by European research, where Araujo, Brosius, Goldberg, Möller and de Vreese [2023] studied the perceptions of people across Europe regarding the integration of AI into automatic decision making (ADM) in the media sector. Right-wing supporters tended to be more concerned and supportive of strict regulations. Schiff, Schiff and Pierson [2022] revealed deeper findings, studying U.S. citizens’ perceptions of government AI-powered automated decision systems (ADS). They found that conservatives felt more strongly about the public value failure of automated decision systems, such as lack of fairness, transparency, and responsiveness, compared to liberals. Yang et al. [2023] argue that governments will inevitably play a crucial role in the widespread adoption of AI, implying that AI applications will become politicized. As AI applications become more extensive and powerful, political ideology will become an increasingly important predictor of AI support.
In comparison to political ideology, science news consumption and knowledge are more predictive variables (H2a and H3 are supported). Our research findings align with previous studies [Yang et al., 2023; Selwyn & Cordoba, 2022; Cui & Wu, 2019; Pechar et al., 2018] indicating that individuals who are relatively informed about AI’s potential benefits are more likely to embrace novel technology, especially when technology is applied with appropriate regulations and surveillance (RQ2).
However, as science news exposure does not necessarily lead to content knowledge acquisition [Chen & Wen, 2021], this study distinguishes knowledge into perceived knowledge (participants’ self-perceived level of AI knowledge) and content knowledge (knowledge in a specific subject or domain usually acquired through a formal learning process, such as schooling and training). The aim is to understand the predictiveness of these two different types of knowledge on AI perceptions and regulation policy support (RQ3 and RQ4).
Perceived knowledge and content knowledge are antithetical predictors. Perceived knowledge is inversely associated with AI risk perceptions. In several risk communication studies, perceived knowledge tends to be an insignificant predictor of perceived risks because people often cannot accurately gauge their true knowledge level, a pattern consistent with the Dunning-Kruger effect, and are therefore unable to evaluate potential risks in their environment effectively. In our study, individuals who perceive themselves as knowledgeable about AI are inclined to perceive AI as less risky, potentially underestimating AI risks if they overestimate their own knowledge of AI. On the other hand, content knowledge is a relatively objective measure of one’s knowledge level, providing a more accurate reflection of one’s understanding of AI. Its results mirror those of science news consumption, as people with a substantive understanding of AI tend to perceive AI as beneficial and capable of making human life more convenient.
Contrary to our anticipations, the majority of demographic variables (including age, education, gender, employment, and party affiliation) do not emerge as significant predictors of the dependent variables, resulting in a reduction of the adjusted R² values. However, they yield valuable insights, indicating that variations in Taiwanese AI users’ perceptions of AI and support for AI regulations are primarily influenced by factors such as science news consumption, acquisition of AI knowledge, respect for the science community, and political ideology. This information serves as a practical guide for shaping future policies related to AI promotion.
5 Discussion
5.1 Theoretical and practical implications
Taiwan’s utilization of AI in combating the COVID-19 pandemic has yielded commendable results, showcasing the efficacy of government initiatives in harnessing advanced technology for public health management. However, while the initial success in AI deployment for pandemic control is evident, there exists a palpable resistance among the populace towards its broader integration into various facets of society. This resistance stems from apprehensions regarding the potential ramifications of widespread AI implementation, particularly in realms beyond healthcare. Amidst the acknowledgment of AI’s transformative potential, concerns persist regarding its environmental impact, implications for long-term human sustainability, and the necessity for robust ethical frameworks to regulate its deployment. The public consensus veers towards cautious optimism, advocating for stringent assurances that AI adoption will not compromise societal well-being or infringe upon fundamental rights and values. Central to this discourse is the imperative need for clear and comprehensive ethical guidelines that delineate permissible boundaries and ensure accountability in AI utilization.
AI, as a revolutionary new technology, is bound to bring about profound changes in societal structures and individual lives. While there have been studies in Taiwan on the public’s perceptions and willingness to support AI, most of them are confined to specific domains, such as doctors’ willingness to use AI, public servants’ views on integrating AI into public systems, and teachers’ support for AI-assisted teaching. However, the adoption and application of AI will be a societal transformation that everyone needs to face and participate in. Therefore, there is an urgent need in Taiwan for research approaching AI issues from the lay public’s perspective, and this study plays that role.
Internationally, especially in Western countries, many researchers have recognized that the widespread adoption of AI will become a political agenda. Therefore, exploring the predictiveness of political orientation on AI perceptions and support for regulation policies from the perspective of motivated reasoning is crucial. Motivated reasoning is a common cognitive bias in which individuals’ political stances on core political issues extend to matters that are not formally on the political agenda: regardless of how well people understand those matters, their inherent political stance significantly influences their views on them, a phenomenon supported by several studies.
Our findings reveal that political ideology is a weak predictor. By comparing with recent research, we find that the predictiveness of political ideology may be related to the intensity of AI applications. The differences in AI perceptions and regulation support between conservatism and liberalism increase with the intensity of AI applications. From the theoretical perspective of political ideology, according to Han, Park and Lee [2021], conservatism is associated with an inclination to distinguish humans as a social entity distinct from AI. Research on social essentialism indicates that individuals with conservative views tend to focus on the intrinsic nature of groups when categorizing social groups. Those with conservative views also prioritize group-level attributes to uphold group coherence and social order. As AI assumes elevated positions in social hierarchies, political conservatives may perceive a threat. Therefore, political ideology becomes an increasingly crucial independent variable in the politicization of AI.
Understanding the politicization trend of AI has practical implications. Since motivated reasoning is a common and hard-to-avoid cognitive bias, future promotional strategies for AI need to be customized based on AI application categories (e.g., narrow AI, broad AI, general AI), audience, and local political contexts. These strategies should be treated as sensitive political issues to avoid backfire due to incorrect persuasion methods. Suitable persuasion strategies include framing (shifting the narrative while still discussing the same thing) and bypassing (guiding individuals away from their existing beliefs towards alternative beliefs that support a conclusion contrary to their prejudice) [Calabrese & Albarracín, 2023].
Aiming to test which of the approaches (political ideology, knowledge, and respect for scientific authority) would more effectively predict public attitudes towards AI, our study shows that, for predicting the lay public’s perceptions of narrow AI, science news consumption and knowledge, along with public respect for scientific authority, are more effective than political ideology. These findings play a crucial role in guiding future AI application policy communication. Optimistically, the effect of motivated reasoning is, at least for narrow AI, not significant. Governments and research institutions can therefore enhance the lay public’s awareness, positive perceptions of AI applications, and support for regulation policies through science news and content knowledge.
Last but not least, our findings indicate a positive association between respect for science authority and AI benefit perceptions, aligning with previous research [Chen & Wen, 2021; Pechar et al., 2018]. Scientific authority, comprising AI experts, professionals, and scientists, is a key channel for promoting AI and accurate AI knowledge in the future. Allowing them to regularly explain or interpret AI-related knowledge to the public is crucial. However, this communication channel between experts and laypeople requires long-term maintenance. Existing research suggests that public trust and respect for the AI science community [Chang et al., 2024; Chen & Wen, 2021; Cui & Wu, 2019] depend on whether AI scientists maintain information transparency and honest disclosure. This includes aspects such as the flow of investments, financial donors, the handling and use of research data consistent with the stated purposes, and the establishment and adherence to comprehensive ethical standards. Loss of confidence in the science community may pose a significant potential obstacle to AI promotion.
Acknowledgments
This study was supported by the Ministry of Science and Technology, Taiwan [MOST106-2511-S-004-003-MY3].
References
Allan, S. (2002). Media, risk and science. Buckingham, U.K.: Open University Press.
Allum, N., Sturgis, P., Tabourazi, D. & Brunton-Smith, I. (2008). Science knowledge and attitudes across cultures: a meta-analysis. Public Understanding of Science 17 (1), 35–54. doi:10.1177/0963662506070159
Araujo, T., Brosius, A., Goldberg, A. C., Möller, J. & de Vreese, C. (2023). Humans vs. AI: the role of trust, political attitudes, and individual characteristics on perceptions about automated decision making across Europe. International Journal of Communication 17, 6222–6249. Retrieved from https://ijoc.org/index.php/ijoc/article/view/20612
Bae, S. J. & Lee, H. (2020). The role of government in fostering collaborative R&D projects: empirical evidence from South Korea. Technological Forecasting and Social Change 151, 119826. doi:10.1016/j.techfore.2019.119826
Bradshaw, G. A. & Borchers, J. G. (2000). Uncertainty as information: narrowing the science-policy gap. Conservation Ecology 4 (1), 7. doi:10.5751/es-00174-040107
Brossard, D. & Shanahan, J. (2003). Do citizens want to have their say? Media, agricultural biotechnology, and authoritarian views of democratic processes in science. Mass Communication and Society 6 (3), 291–312. doi:10.1207/s15327825mcs0603_4
Bucchi, M. & Trench, B. (Eds.) (2008). Handbook of public communication of science and technology. doi:10.4324/9780203928240
Byrne, S. & Hart, P. S. (2009). The boomerang effect. A synthesis of findings and a preliminary theoretical framework. Annals of the International Communication Association 33 (1), 3–37. doi:10.1080/23808985.2009.11679083
Calabrese, C. & Albarracín, D. (2023). Bypassing misinformation without confrontation improves policy support as much as correcting it. Scientific Reports 13 (1), 6005. doi:10.1038/s41598-023-33299-5
Camporesi, S., Vaccarella, M. & Davis, M. (2017). Investigating public trust in expert knowledge: narrative, ethics, and engagement. Journal of Bioethical Inquiry 14 (1), 23–30. doi:10.1007/s11673-016-9767-4
Chang, W. L.-Y., Liao, Y.-K., Chao, E., Liu, S.-Y. & Lee, T. S.-H. (2024). Ethical concerns about artificial intelligence: evidence from a national survey in Taiwan. Research Square. doi:10.21203/rs.3.rs-3765278/v1
Chen, C.-M., Jyan, H.-W., Chien, S.-C., Jen, H.-H., Hsu, C.-Y., Lee, P.-C., … Chan, C.-C. (2020). Containing COVID-19 among 627,386 persons in contact with the Diamond Princess cruise ship passengers who disembarked in Taiwan: big data analytics. Journal of Medical Internet Research 22 (5), e19540. doi:10.2196/19540
Chen, Y.-N. K. & Wen, C.-H. R. (2019). Taiwanese university students’ smartphone use and the privacy paradox. Comunicar 27 (60), 61–70. doi:10.3916/c60-2019-06
Chen, Y.-N. K. & Wen, C.-H. R. (2021). Impacts of attitudes toward government and corporations on public trust in artificial intelligence. Communication Studies 72 (1), 115–131. doi:10.1080/10510974.2020.1807380
Chiu, Y.-T., Zhu, Y.-Q. & Corbett, J. (2021). In the hearts and minds of employees: a model of pre-adoptive appraisal toward artificial intelligence in organizations. International Journal of Information Management 60, 102379. doi:10.1016/j.ijinfomgt.2021.102379
Chuan, C.-H., Tsai, W.-H. S. & Cho, S. Y. (2019). Framing artificial intelligence in American newspapers. In AIES ’19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 339–344). doi:10.1145/3306618.3314285
Cui, D. & Wu, F. (2019). The influence of media use on public perceptions of artificial intelligence in China: evidence from an online survey. Information Development 37 (1), 45–57. doi:10.1177/0266666919893411
Cui, Y. & van Esch, P. (2022). Autonomy and control: how political ideology shapes the use of artificial intelligence. Psychology & Marketing 39 (6), 1218–1229. doi:10.1002/mar.21649
Day, M. V., Fiske, S. T., Downing, E. L. & Trail, T. E. (2014). Shifting liberal and conservative attitudes using moral foundations theory. Personality and Social Psychology Bulletin 40 (12), 1559–1573. doi:10.1177/0146167214551152
Dohle, S., Wingen, T. & Schreiber, M. (2020). Acceptance and adoption of protective measures during the COVID-19 pandemic: the role of trust in politics and trust in science. Social Psychological Bulletin 15 (4), e4315. doi:10.32872/spb.4315
Gauchat, G. (2012). Politicization of science in the public sphere: a study of public trust in the United States, 1974 to 2010. American Sociological Review 77 (2), 167–187. doi:10.1177/0003122412438225
Gherheș, V. & Obrad, C. (2018). Technical and humanities students’ perspectives on the development and sustainability of artificial intelligence (AI). Sustainability 10 (9), 3066. doi:10.3390/su10093066
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P. & Ditto, P. H. (2013). Moral Foundations Theory: the pragmatic validity of moral pluralism. In P. Devine & A. Plant (Eds.), Advances in Experimental Social Psychology (pp. 55–130). doi:10.1016/b978-0-12-407236-7.00002-4
Graham, J., Haidt, J. & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96 (5), 1029–1046. doi:10.1037/a0015141
Haidt, J. & Graham, J. (2007). When morality opposes justice: conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20 (1), 98–116. doi:10.1007/s11211-007-0034-z
Han, H., Park, S. & Lee, K. (2021). Does political orientation affect the evaluation of artificial intelligence? Asia Marketing Journal 23 (2), 50–67. doi:10.53728/2765-6500.1180
Hsu, H.-Y., Huang, L.-L. & Hwang, K.-K. (2019). Liberal-conservative dimension of moral concerns underlying political faction formation in Taiwan. Asian Journal of Social Psychology 22 (3), 301–315. doi:10.1111/ajsp.12367
Huang, Y.-K., Hsieh, C.-H., Li, W., Chang, C. & Fan, W.-S. (2020). Preliminary study of factors affecting the spread and resistance of consumers’ use of AI customer service. In AICCC ’19: Proceedings of the 2019 2nd Artificial Intelligence and Cloud Computing Conference (pp. 132–138). doi:10.1145/3375959.3375968
Jho, H., Yoon, H.-G. & Kim, M. (2014). The relationship of science knowledge, attitude and decision making on socio-scientific issues: the case study of students’ debates on a nuclear power plant in Korea. Science & Education 23 (5), 1131–1151. doi:10.1007/s11191-013-9652-z
Jost, J. T., Glaser, J., Kruglanski, A. W. & Sulloway, F. J. (2003). Political conservatism as motivated social cognition. Psychological Bulletin 129 (3), 339–375. doi:10.1037/0033-2909.129.3.339
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8 (4), 407–424. doi:10.1017/S1930297500005271
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin 108 (3), 480–498. doi:10.1037/0033-2909.108.3.480
Lewis, J. & Leach, J. (2006). Discussion of socio-scientific issues: the role of science knowledge. International Journal of Science Education 28 (11), 1267–1287. doi:10.1080/09500690500439348
Lin, C.-Y. & Xu, N. (2022). Extended TAM model to explore the factors that affect intention to use AI robotic architects for architectural design. Technology Analysis & Strategic Management 34 (3), 349–362. doi:10.1080/09537325.2021.1900808
Lin, H., Tian, J. & Cheng, B. (2024). Facilitation or hindrance: the contingent effect of organizational artificial intelligence adoption on proactive career behaviour. Computers in Human Behavior 152, 108092. doi:10.1016/j.chb.2023.108092
Liu, C.-F., Chen, Z.-C., Kuo, S.-C. & Lin, T.-C. (2022). Does AI explainability affect physicians’ intention to use AI? International Journal of Medical Informatics 168, 104884. doi:10.1016/j.ijmedinf.2022.104884
Lupia, A., McCubbins, M. D. & Popkin, S. L. (Eds.) (2000). Elements of reason: cognition, choice, and the bounds of rationality. doi:10.1017/CBO9780511805813
McCright, A. M., Dentzman, K., Charters, M. & Dietz, T. (2013). The influence of political ideology on trust in science. Environmental Research Letters 8 (4), 044029. doi:10.1088/1748-9326/8/4/044029
Means, M. L. & Voss, J. F. (1996). Who reasons well? Two studies of informal reasoning among children of different grade, ability, and knowledge levels. Cognition and Instruction 14 (2), 139–178. doi:10.1207/s1532690xci1402_1
Mooney, C. (2012). The Republican brain: the science of why they deny science — and reality. Hoboken, NJ, U.S.A.: Wiley.
Nam, H. H., Jost, J. T. & Van Bavel, J. J. (2013). “Not for all the tea in China!” Political ideology and the avoidance of dissonance-arousing situations. PLoS ONE 8 (4), e59837. doi:10.1371/journal.pone.0059837
Pechar, E., Bernauer, T. & Mayer, F. (2018). Beyond political ideology: the impact of attitudes towards government and corporations on trust in science. Science Communication 40 (3), 291–313. doi:10.1177/1075547018763970
Pega Systems Inc. (2017). What consumers really think about AI: a global study. Retrieved from https://www.pega.com/ai-survey
Peng, Y. (2020). The ideological divide in public perceptions of self-driving cars. Public Understanding of Science 29 (4), 436–451. doi:10.1177/0963662520917339
Schiff, D. S., Schiff, K. J. & Pierson, P. (2022). Assessing public value failure in government adoption of artificial intelligence. Public Administration 100 (3), 653–673. doi:10.1111/padm.12742
Selwyn, N. & Cordoba, B. G. (2022). Australian public understandings of artificial intelligence. AI & Society 37 (4), 1645–1662. doi:10.1007/s00146-021-01268-z
Shan, S. (2020, March 30). Virus outbreak: most people happy with Chen as CECC head, survey finds. Taipei Times. Retrieved from https://www.taipeitimes.com/News/taiwan/archives/2020/03/30/2003733651
Stebner, F., Schuster, C., Weber, X.-L., Greiff, S., Leutner, D. & Wirth, J. (2022). Transfer of metacognitive skills in self-regulated learning: effects on strategy application and content knowledge acquisition. Metacognition and Learning 17 (3), 715–744. doi:10.1007/s11409-022-09322-x
Taber, C. S., Cann, D. & Kucsova, S. (2009). The motivated processing of political arguments. Political Behavior 31 (2), 137–155. doi:10.1007/s11109-008-9075-8
Taber, C. S. & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science 50 (3), 755–769. doi:10.1111/j.1540-5907.2006.00214.x
Tussyadiah, I. & Miller, G. (2019). Nudged by a robot: responses to agency and feedback. Annals of Tourism Research 78, 102752. doi:10.1016/j.annals.2019.102752
Venville, G., Rennie, L. & Wallace, J. (2004). Decision making and sources of knowledge: how students tackle integrated tasks in science, technology and mathematics. Research in Science Education 34 (2), 115–135. doi:10.1023/b:rise.0000033762.75329.9b
Wang, X. (2017). Understanding climate change risk perceptions in China: media use, personal experience, and cultural worldviews. Science Communication 39 (3), 291–312. doi:10.1177/1075547017707320
Wu, Y.-S. (2013). From identity to economy: shifting politics in Taiwan. Global Asia 8 (1), 114–119. Retrieved from https://www.globalasia.org/v8no1/focus/from-identity-to-economy-shifting-politics-in-taiwan_wu-yu-shan
Xu, N. & Wang, K.-J. (2021). Adopting robot lawyer? The extending artificial intelligence robot lawyer technology acceptance model for legal industry by an exploratory study. Journal of Management & Organization 27 (5), 867–885. doi:10.1017/jmo.2018.81
Yang, S., Krause, N. M., Bao, L., Calice, M. N., Newman, T. P., Scheufele, D. A., … Brossard, D. (2023). In AI we trust: the interplay of media use, political ideology, and trust in shaping emerging AI attitudes. Journalism & Mass Communication Quarterly. doi:10.1177/10776990231190868
Yeh, S.-C., Wu, A.-W., Yu, H.-C., Wu, H. C., Kuo, Y.-P. & Chen, P.-X. (2021). Public perception of artificial intelligence and its connections to the sustainable development goals. Sustainability 13 (16), 9165. doi:10.3390/su13169165
Authors
Chia-Ho Ryan Wen,
a Doctoral Candidate at the Newhouse School of Public Communications at Syracuse
University (SU), has served as an Adjunct Professor at SU in the United States and as a
Mentor at the London School of Economics and Political Science in the United
Kingdom.
As a mixed-methods scholar, Wen delves into the interplay between information
consumption, perceptions, and behavioral tendencies, with a particular focus on the
dynamics between knowledge and misinformation in the contexts of public health and
emerging technology. He has presented his research at various major international
conferences and has also been honored with several research awards, including the
Catherine L. Covert Research Award, the SU Graduate Dean’s Award for Excellence in
Research and Creative Work, and the Newhouse Research Grant Award. His studies have
been published in journals such as Comunicar, Communication Studies, and Communication
& Society.
@Ryan_Wen_ E-mail: RyanWen@Alumni.LSE.ac.uk
Yi-Ning Katherine Chen
(Ph.D., The University of Texas at Austin) is a Distinguished Professor of Communication
and the Vice President for International Cooperation at National Chengchi University
(NCCU), Taiwan. She is also a Board Member of the Oversight Board for Meta.
Professor Chen served as the Dean of the College of Communication at NCCU from 2022
to 2024. Between 2014 and 2018, she was seconded as a Commissioner at the National
Communications Commission, Taiwan. She has spoken at various regulatory forums and
international academic conferences in Europe, the USA, and Asia, focusing on internet
user behavior and pay TV vs. OTT TV regulation.
Professor Chen has received numerous research awards from the Ministry of Science and
Technology and NCCU. Her academic research has been published in journals such as
Telecommunications Policy, Comunicar, Journalism Studies, New Media & Society, Public
Relations Review, and the Chinese Journal of Communication. Her research interests include
media content and its effects, social media in elections, and mobile communication and
privacy.
@kynchen E-mail: kynchen@nccu.edu.tw