1 Rationale
The public introduction of ChatGPT and other generative artificial intelligence (GenAI) technologies has marked a significant transformation in mediated communication [Hepp et al., 2023; Schäfer, 2023]. GenAI is “a class of machine learning technology that learns to generate new data from training data” [Houde et al., 2020, p. 1]. It has the potential to redefine the production and dissemination of knowledge in society [Hepp et al., 2023; Jungherr & Schroeder, 2023], including scientific knowledge [Alvarez et al., 2024; Biyela et al., 2024].
The disruptive nature of GenAI extends to science communication [Schäfer, 2023], as it constitutes a new kind of science communicator [see Guzman & Lewis, 2020] and a new content production tool for existing science communicators [Alvarez et al., 2024; Biyela et al., 2024]. Compared to previous communication-AI technologies, GenAI has greater agency [Guzman & Lewis, 2020; Hepp et al., 2023]. Thus, it plays an active role in knowledge formation and information interpretation and can take a greater part in individuals’ epistemic networks [Feinstein & Baram-Tsabari, 2024]. With individuals using GenAI to access scientific information [Greussing et al., 2024], research needs to carefully consider the emerging potentials, uses, and broader implications associated with AI technologies [Schäfer, 2023] — considering both the technologies currently in use and those that are yet to come.
As GenAI constitutes a new kind of science communicator, the intersection of AI technology and critical engagement with scientific information becomes more pressing. We consider critical engagement with scientific information to be an active and thoughtful interaction with science-related content, which often requires evaluating the content’s credibility, accuracy, and relevance [see Osborne & Pimentel, 2022; Lin, 2014]. Addressing this intersection is relevant to multiple research avenues in science communication, including online engagement with scientific information on social media and science journalism [e.g., Spitale, Biller-Andorno & Germani, 2023; Tatalovic, 2018]. It also responds to both the crisis of misinformation dissemination [West & Bergstrom, 2021] and the calls for scholarly attention to GenAI’s susceptibility to “hallucinations” [Kidd & Birhane, 2023], wherein the AI provides grammatically and contextually valid, confident responses that are, nonetheless, inaccurate or false [Bang et al., 2023].
The importance of studying critical engagement with scientific information through GenAI points to two emerging gaps. First, a theoretical gap: we lack a bridging conceptualization that would allow science communication researchers to link newly gained insights regarding GenAI with previous knowledge about more traditional technologies. Second, a practical gap: we are missing a simple yet holistic framework that would support and guide non-experts in AI systems in making sense of and differentiating between AI technologies. Work addressing these gaps would assist science communication researchers who need to identify and select the technologies best suited to a particular research project, as well as science communication educators who want to present and discuss the characteristics of a specific GenAI technology with their students.
Existing theories do provide valuable insights. However, they are limited to more traditional technologies [e.g., Taddicken & Krämer, 2021; Hendriks et al., 2020], to particular uses [e.g., Lin, 2023], or to more general insights not necessarily relevant to science communication [e.g., Hancock, Naaman & Levy, 2020; Lo, 2023; Sundar, 2020]. To date, we lack a conceptual framework that supports and guides non-experts in AI systems in making sense of and differentiating between GenAI technologies.
2 Objective
This paper proposes a conceptual framework with a two-fold aim: to encourage a more consistent and cumulative discourse on science communication and AI-based information technologies, and to equip non-experts in AI systems with a practical tool for understanding and distinguishing between AI technologies, both recent and upcoming.
As a starting point towards developing an efficient and overarching conceptual framework that others might improve on, we draw inspiration and relevant concepts from theoretical and empirical literature across various fields. Among the theories and models we consulted are the heuristic model of online engagement with scientific information [Hendriks et al., 2020] and criteria for explaining the role of the internet [Lopez & Olvera-Lobo, 2018] from science communication studies, the fast and frugal model [Osborne & Pimentel, 2022] and the content-source integration model [Stadtler & Bromme, 2014] from science education studies, the theory of interactive media effects (TIME) [Sundar, Jia, Waddell & Huang, 2015] and the HAII model [Sundar, 2020] from human-computer interaction (HCI) studies, as well as the three-tiered framework for evaluating relevancy and credibility during online inquiry [Forzani, 2020] and curriculum design principles [McGrew & Breakstone, 2023] from education studies. We wove these diverse concepts into a framework that brings together a wide range of perspectives. While this process does not yield a coherent theory, the resulting conceptual framework can be put to use by researchers and practitioners in various fields and applied to various GenAI technologies for various purposes.
Additionally, the framework is designed to encompass a broad spectrum of AI-based information technologies, extending beyond the scope of GenAI or specific large language models (LLMs) such as ChatGPT. In other words, the framework aims to be applicable to characterizing, evaluating, and comparing recent AI technologies, more traditional information technologies (e.g., Google Search, Wikipedia), and even upcoming ones; it is also designed to be applicable to text-based information technologies as well as multimodal ones (e.g., Gemini,1 Copilot2). The framework has been designed to characterize, evaluate, and compare technologies (i.e., models, platforms, applications, software, etc.) that enable individuals to access scientific information online, in contrast to technologies that do not retrieve or generate scientific information, such as word processors or drawing applications devoid of GenAI components. We define the technologies that can be characterized, evaluated, and compared using this conceptual framework as computer systems, software, and networks that facilitate the dissemination of scientific information to the public [‘Information Technology’, 2024; Nisbet & Scheufele, 2009]. In terms of the conceptual framework, we define AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments” [OECD, 2019, p. 15].
The conceptual framework described in this paper is intended to serve as a reflective guide, facilitating better understanding and more accurate descriptions of AI-based information technologies in their applications for critical engagement with scientific information online. In turn, researchers and practitioners who use the framework can make more informed selections and better use of said technologies.
3 The framework
The framework is hierarchically structured, focusing on the technology while accounting for the particularities of the socio-demographic contexts in which it is being used. Similar to a theoretical rubric, it lists selected criteria that cover basic technological features, features that may stimulate emotional and motivational responses, and features that might impact different tactics for critically evaluating scientific information. This section first considers the context in which the technology is being used and then deconstructs the technology itself.
3.1 The context
Employing an agile approach, the framework first factors in who is using the technology, for what purpose, and in what specific setting. To illustrate, a technology’s suitability may differ when it is intended to assist a Zulu-speaking 4th grader in composing a brief essay on available energy sources, compared to when it is expected to aid an English-speaking young adult in deciding whether to adopt a fruitarian lifestyle. Consequently, the framework recognizes the impact of contextual factors on the evaluation and utilization of AI-based information technologies [Lopez & Olvera-Lobo, 2018; Sartori & Bocca, 2023; Suchman, Blomberg, Orr & Trigg, 1999].
From a science communication perspective, the characterization of an AI-based information technology will unfold in distinct ways, contingent upon the specific socio-demographic, epistemic, behavioral, and cultural contexts of the technology’s use [Canfield et al., 2020]. Research shows, for instance, that information technologies can elicit different responses among different age groups [Chattaraman, Kwon, Gilbert & Ross, 2019; Pradhan, Lazar & Findlater, 2020]. Additionally, cultural and regional differences impact access to relevant and appropriate information about science [Dabran-Zivan et al., 2023]. Besides these macro-level contextual factors, critical engagement with scientific information through AI-based information technologies is also shaped by individual and situational factors. Among these factors are individuals’ levels of different literacies [Sharon & Baram-Tsabari, 2020; Lamb, Polman, Newman & Smith, 2014], attitudes toward artificial intelligence, and prior knowledge about and experiences with AI technology [Bao et al., 2022]. Notably, prior experiences hold particular significance for shaping expectations toward AI-based information technologies [Fortunati, Edwards, Manganelli, Edwards & de Luca, 2022], encompassing not only direct encounters with AI systems but also indirect ones, such as exposure to public narratives surrounding the technology, fueled by advertisement, media representations, and pop culture [Natale & Ballatore, 2020; Sartori & Bocca, 2023].
With these contextual factors in mind, the users of the conceptual framework can then systematically work through the technological considerations, which are described in the next sections.
3.2 The technology
To provide a wide and structured perspective on characterizing the technology, the conceptual framework adheres to the fundamental elements of communication [Lasswell, 1948]: communicator, channel, receiver, and message. Considering that recent AI systems function as both a communicator and a channel [Guzman & Lewis, 2020], the remainder of the framework combines three lenses: technological properties (i.e., communicator and channel), user experience (i.e., receiver), and content presentation (i.e., message). While all three lenses focus on technological features, the user experience lens contemplates features that affect how users perceive the technology and its use, and the content presentation lens contemplates features that can impact the critical evaluation of the provided information. Although the distinction between lenses may seem artificial, this structured approach promotes deliberate reflection from different perspectives. In short, these lenses establish complementary and interrelated points of view, consisting of distinct dimensions (see Table 1). (For a more detailed account of the concepts, their use, and sources incorporated into this part of the framework, see Supplementary material.)
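For readers who prefer to work with the framework computationally, the sketch below (a minimal Python rendering that is our own illustrative shorthand, not part of the framework’s formal specification) expresses the hierarchy of context, lenses, dimensions, and condensed guiding questions as a plain data structure that can be flattened into a coding sheet and filled in per technology; the wording anticipates, in abbreviated form, the criteria detailed in the following sections.

```python
from typing import Dict, List

# Nested structure: lens -> dimension -> guiding questions (condensed criteria).
Framework = Dict[str, Dict[str, List[str]]]

FRAMEWORK: Framework = {
    "context": {
        "use context": [
            "Who is using the technology (age, language, literacies, prior AI experience)?",
            "For what purpose, and in what specific setting?",
        ],
    },
    "technological properties": {
        "basic technological properties": [
            "What is the technology designed to do well (main objectives)?",
            "What knowledge base does it rely on, and how recent and reliable is it?",
            "How well does it handle science questions and mathematical problems?",
        ],
        "output qualities": [
            "Which modes and languages does it support (media richness, language variety)?",
            "How many and which tasks can it perform (multitasking)?",
            "Are outputs accurate, relevant, and clear, even for vague or misinformed prompts?",
        ],
    },
    "user experience": {
        "interactivity": [
            "Which sensory, semantic, and behavioral interactivity does it afford?",
            "Does it display anthropomorphic cues, and in which interaction style?",
            "Does it guide unskilled users (learnability)?",
        ],
        "user agency": [
            "Can users revisit, refine, or delete previous sessions?",
            "Can the system be personalized via feedback and settings?",
            "Is it suitable for diverse audiences, including children and people with special needs?",
        ],
        "transparency": [
            "Is the AI 'thin' or 'thick' (how apparent is its presence)?",
            "Are capabilities and limitations communicated clearly?",
            "Does it explain its reasoning (explainability)?",
        ],
        "costs and benefits": [
            "How many steps does a typical task require?",
            "What skill set and skill level does the task demand?",
        ],
    },
    "content presentation": {
        "sources": [
            "Does the output identify relevant information sources?",
            "Does it link directly to specific sources?",
        ],
        "reasoning": [
            "Does the output include evidence and address alternative claims?",
        ],
        "consensus": [
            "Does the output convey the scientific consensus and the specifics of disagreements?",
        ],
    },
}


def checklist(framework: Framework) -> List[str]:
    """Flatten the framework into one printable line per guiding question."""
    return [
        f"[{lens} / {dimension}] {question}"
        for lens, dimensions in framework.items()
        for dimension, questions in dimensions.items()
        for question in questions
    ]


if __name__ == "__main__":
    for row in checklist(FRAMEWORK):
        print(row)
```

Running the script prints one row per guiding question, which can serve as a simple checklist when characterizing or comparing technologies.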
3.3 First lens: technological properties
Although science communication researchers may lack the training to fully comprehend the inner workings of an AI system, they can still examine its fundamental components, objectives, and capabilities [see Taddicken & Krämer, 2021; Fernandes, Rodrigues & Ferreira, 2020] to gain a better understanding of the AI’s strengths and limitations. We propose two key dimensions: basic technological properties and qualities of the output. Their significance is rooted in theoretical and empirical work that addresses the technological characteristics of AI systems [e.g., Bang et al., 2023; OECD, 2022, 2023; Zuccon & Koopman, 2023] and information and communication technologies (ICTs) in the context of critical engagement with scientific information [e.g., Hendriks et al., 2020; Lamb et al., 2014].
The basic technological properties dimension characterizes the AI’s main objectives (i.e., what it is designed to do well), its knowledge base (i.e., what kind of information it relies on), and its capabilities in science and math (i.e., how well it can answer science-related questions and solve mathematical problems). Although these criteria do not derive directly from science communication, it is important to consider the technological structure of the technology in use, as it determines the structure of communication [Taddicken & Krämer, 2021, p. 4]. In greater detail, these criteria consider what sources the AI draws from, the recency of the data it relies on, and whether it prioritizes information from more reliable sources, such as scientific literature and reputable journalistic outlets [see Barzilai et al., 2023; Polman, Newman, Saul & Farrar, 2014]. These characteristics determine a wide range of the technology’s capabilities, performance, and applicability [OECD, 2022]. They also indicate the potential suitability of a technology for a specific task and the degree to which it can support valuable engagement with scientific information [see Hoffmann, 2019; OECD, 2023; Zajko, 2021]. For instance, LLMs such as ChatGPT were trained on — and therefore rely on — millions of web pages, including less scientifically reliable sources [see Schäfer, 2023]. These LLMs are designed to generate responses based on probabilistic predictions [Chan, 2023] and outperform the majority of the human population in science but less so in mathematics [OECD, 2023]. These characteristics translate into a set of capabilities and limitations that differ dramatically from those, for example, of Wolfram|Alpha,3 which is designed to solve and explain mathematics problems, or of Elicit,4 which searches for, and extracts text from, academic publications.
The output’s qualities dimension addresses media richness (i.e., what modes of information it provides and supports), multitasking (i.e., how many and what kind of tasks it can perform), and the quality of the technology’s outputs (i.e., whether and to what degree the technology is susceptible to delivering irrelevant or inaccurate information).
Different AI technologies can provide different modes of communication and interaction [see Dambanemuya & Diakopoulos, 2021; Sundar & Lee, 2022]. According to Media Richness Theory [Daft, Lengel & Trevino, 1987], specific technological properties can enable clearer communication [Daft et al., 1987; Ishii, Lyons & Carr, 2019], thereby supporting understanding in diverse ways. We suggest applying two criteria relevant to AI-based information technologies and science communication: the ability to transmit multiple cues, and language variety [see also Hendriks et al., 2020; Tang, 2024]. Regarding the first criterion, AI technologies that allow for voice communication, for instance, add another layer of richness, as they can convey tone and emphasis. A richer AI interface, in turn, will not only support understanding but may also provide a more engaging interaction and strengthen users’ motivation to use the technology [Liu, Liao & Pratt, 2009]. Regarding the second, we suggest asking not only how many and which languages the technology supports, and to what degree, but also whether it can mitigate low levels of foundational literacies [see Sharon & Baram-Tsabari, 2020] and the digital language divide [Dabran-Zivan et al., 2023].
Multitasking considers the technology’s ability “to carry out a multitude of tasks without specific fine-tuning” [Bang et al., 2023, p. 4]. This ability has become a significant characteristic of current technologies, especially GenAI [Schäfer, 2023]. It prompts us to consider the number and range of tasks the technology can undertake (whether simultaneously or not) and how well it performs each task. Such tasks can encompass generating textual answers (e.g., explaining human cells’ structure), translating textual descriptions into visual depictions (e.g., visually depicting human cells’ structure), retrieving information (e.g., searching the web for additional diagrams of human cells), etc. Technologies that can simply and quickly switch between tasks may function as a “one-stop shop”, allowing users to complete different tasks without the need for multiple applications or devices. Thus, they can simplify both the use of technology and the engagement with scientific information. Adding to the main objectives criterion in the previous dimension, the multitasking criterion illuminates what else the technology is good for and whether the contextual criteria align with the technology’s capabilities.
Considering the output’s quality — understood, inter alia, as the accuracy and relevance of the information provided — has long been a central concern in science communication research [Bucchi & Trench, 2014; Olesk et al., 2021]. This concern has become even more significant with individuals’ use of GenAI for science-related content [Greussing et al., 2024] as, “[w]ith the advent of GenAI, the misinformation dilemma has escalated” [Shin, Koerber & Lim, 2024, p. 2]. Hence, the framework evaluates whether the technology produces accurate, relevant [see Lamb et al., 2014], and clear outputs, both in general and when faced with either vague (e.g., “what about …”) or detailed yet misinformed queries or prompts [Dambanemuya & Diakopoulos, 2021; Zuccon & Koopman, 2023], as such inputs are prone to result in inaccurate or irrelevant information.
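To illustrate how this criterion might be examined in practice, the following minimal sketch (illustrative only; the framework prescribes no specific tooling) collects a technology’s responses to a handful of vague and deliberately misinformed prompts so that human raters can then score them for accuracy, relevance, and clarity. The ask() wrapper and the example prompts are hypothetical placeholders.

```python
from typing import Dict, List

# Illustrative sketch only: the framework itself prescribes no tooling.
VAGUE_PROMPTS = ["What about vaccines and autism?"]
MISINFORMED_PROMPTS = ["List the studies proving that vaccines cause autism."]


def ask(technology: str, prompt: str) -> str:
    """Hypothetical placeholder; replace with a call to the system under study."""
    raise NotImplementedError


def collect_audit_records(technology: str) -> List[Dict[str, object]]:
    """Gather responses so raters can score accuracy, relevance, and clarity."""
    records = []
    for condition, prompts in (("vague", VAGUE_PROMPTS),
                               ("misinformed", MISINFORMED_PROMPTS)):
        for prompt in prompts:
            records.append({
                "technology": technology,
                "condition": condition,
                "prompt": prompt,
                "response": ask(technology, prompt),
                # To be filled in manually, e.g., on a 1-5 scale:
                "accuracy": None,
                "relevance": None,
                "clarity": None,
            })
    return records
```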
Overall, examining AI technologies through the lens of technological properties offers valuable insights into the technology’s purpose, capabilities and limitations, including the type of information it can provide, how it can support understanding, and its potential to misinform.
3.4 Second lens: user experience
Emotion and motivation are key factors in facilitating and encouraging public engagement with scientific information [Dubovi & Tabak, 2021; Hendriks et al., 2020], as are trust in scientific information and its sources [Taddicken & Krämer, 2021]. However, theories in science communication link emotional engagement and trust with human or institutional actors [e.g., Taddicken & Krämer, 2021; Dubovi & Tabak, 2021; Weingart & Guenther, 2016]. Hence, we turn to theories and models in HCI literature and adopt concepts that characterize the technology and are relevant for online engagement with scientific information.
System and interface design affects users’ satisfaction with, perception of, and emotional response to the overall engagement with the technology and the information it mediates [Lopez & Olvera-Lobo, 2018; Nielsen, 2024; Sundar, 2020]. Furthermore, technological features structure how users interact with scientific information in online environments, and even what role users can take [Taddicken & Krämer, 2021]. The second lens, therefore, addresses technological features that can affect how users experience the technology and, consequently, how they engage with information about science. More specifically, it informs inquiries into how individuals are enabled to use and perceive both the technology and the information it provides. The users addressed in this lens are understood as any individual or group interacting with AI-based information technologies to engage with scientific information. The mixture of theoretical concepts informing this lens leans on literature on science communication [e.g., Hendriks et al., 2020], science education [e.g., Long & Magerko, 2020], human-computer interaction (HCI) [e.g., Sundar, 2020], user experience (UX) [e.g., Nielsen, 2024], and AI systems [e.g., Wang, Camacho, Jing & Goel, 2022].
Altogether, the user experience lens combines four dimensions, as detailed below: interactivity, user agency, transparency, and costs and benefits. These selected dimensions, while probably incomplete, are designed to agree with both Hendriks and colleagues’ [2020] definition of online engagement with scientific information and Sundar’s [2020] model for human-AI interaction. The former concentrates on “goal-directed […] and effortful activity in dealing with scientific information in online information environments” [Hendriks et al., 2020, p. 2]. The latter provides “a deeper understanding of the human experience of algorithms in general and the psychology of human-AI interaction in particular” [Sundar, 2020, p. 74].
Interactivity emerges as one of the most significant factors in explaining technologies’ uses and effects [Sundar, 2020], for instance, by affecting users’ learning and remembering of information [Greussing, Kessler & Boomgaarden, 2020]. This is relevant not only to new media in general, but also to online engagement with scientific information in particular [Hendriks et al., 2020]. Interactivity refers to the reciprocal exchange of information between the participants involved in an interaction [e.g., Liu & Shrum, 2002]. As an experiential construct, interactivity has three facets [Sohn, 2011] — sensory, semantic, and behavioral — that contribute to the users’ experience. Thus, a communication situation can be perceived as interactive not only through behavioral engagement (e.g., clicking on a link or writing a prompt), but also through the breadth and depth of the sensory experience (e.g., the presence of nonverbal elements such as pictures) or rhetorical communication (e.g., receiving personally relevant messages that make one feel recognized by the interaction partner).
Interactivity cues embedded in technology contribute to the feeling of natural, face-to-face communication, thus molding the mode and tone of science communication [Fähnrich, 2021; see also Lopez & Olvera-Lobo, 2018], especially when the technology exhibits human-like characteristics [Chong, Yu, Keeling & de Ruyter, 2021; Gambino, Fox & Ratan, 2020; Sundar, 2020]. Anthropomorphism refers to attributing human traits or qualities to non-human entities, indicating their potential for social interaction [Gambino et al., 2020; see also Wang et al., 2022]. It is elicited in various ways, such as by the AI’s appearance (e.g., visual or auditory cues, having a name or gender), mode of interactivity (e.g., conversational abilities), or interaction style (i.e., whether it is task-oriented, more formal, and more purposeful, or socially oriented, more casual, and affective). Anthropomorphism plays a key role in shaping how users perceive AI technologies. For example, trust can be fostered by task-oriented, purposeful communication [Keeling, McGoldrick & Beatty, 2010], which also encourages superior cognitive outcomes and self-efficacy, particularly among users with low digital literacy [Chattaraman et al., 2019]. In addition, anthropomorphism has been found to evoke perceived social presence, credibility, and competence, and to influence users’ “behavioral intention, compliance with advice, and satisfaction” [Chong et al., 2021, p. 5]. Although these aspects center on the technology, their impact on engagement with scientific information is meaningful [see Taddicken & Krämer, 2021; Dubovi & Tabak, 2021; Weingart & Guenther, 2016]. While beneficial, anthropomorphism also presents a risk: perceived trustworthiness or competence could conceal the actual technical limitations of the technology (see lens 1).
Interactivity further concerns guidance, which is closely related to what UX principles term learnability: the user’s ability to use a new interface or technology they have not seen before [Nielsen, 1993, 2024]. Adapting this concept to critical engagement with scientific information in online environments, an AI-based information technology can, and perhaps should, mitigate unskilled use, encourage collaboration and social interaction around science, and inspire further engagement with scientific information [see Hendriks et al., 2020; Long & Magerko, 2020]. For instance, while advanced search options in Google Search are available only through the settings menu at the bottom of the page, Microsoft’s Copilot provides some instructions and examples on how it could be used.
User agency is a second dimension, a “hallmark of successful user experience with personalized services” [Sundar, 2020, p. 82] that influences online engagement with scientific information [Hendriks et al., 2020]. Rooted in the fundamental human need for autonomy, user agency provides individuals with a sense of ownership and authority in their interactions with technology [Kang & Lou, 2022]. This fundamental need, however, is redefined in automated, AI-based information environments [Hepp et al., 2023]. User agency concerns users’ experience of control over their actions and, through these actions, control over the behavior and outputs of the technology [Coyle, Moore, Kristensson, Fletcher & Blackwell, 2012].
While user agency involves multiple factors, we suggest focusing on three criteria concerning previous sessions, personalization, and suitability. These criteria provide multifaceted yet simple insights into users’ ability to mold or redesign their engagement with scientific information before and after its occurrence. Thus, users’ ability to refine or delete their previous interactions will give them greater control and confidence [Nielsen, 2024], which can then cultivate opportunities to learn new information [Kang & Sundar, 2016]. Also augmenting users’ control is the ability to personalize the system [Sundar, 2020; see also Hendriks et al., 2020] by providing feedback to the technology and adjusting its settings (e.g., selecting a conversation style in Microsoft’s Copilot). Accommodating individual preferences should be carefully considered, as it might conflict with privacy considerations [see Brewer, Pierce, Upadhyay & Park, 2022]. That being said, user agency is also critically dependent on the system’s suitability to its audiences and their needs [see Long & Magerko, 2020]. To enhance user agency and foster critical engagement with scientific information, the technology should ensure compatibility with diverse audiences, including people with special needs and children.
Transparency is the third dimension of user experience. Admittedly, assessing transparency is not always easy, as its manifestations are not always apparent due to design, commercial, and other considerations. Nevertheless, assessing transparency is part of the framework, as it can shape the quality of human-AI interaction [Sundar, 2020, p. 80] and, in turn, users’ engagement with scientific information [Shin et al., 2024]. We address the concept of transparency from a user experience perspective rather than from an ethical or legal one, contemplating how and what users can learn about the AI they use and its operation. The transparency dimension first distinguishes between thin and thick AIs. While thin AIs are systems whose involvement is not apparent (for example, autocomplete suggestions, personalization algorithms), the presence of thick AIs “is more apparent on the interface” [Sundar & Lee, 2022, p. 382]. The transparency dimension also considers whether the technology clearly communicates the system’s capabilities and limitations [see Nielsen, 2024; Sundar & Lee, 2022]. This is of particular importance for science communication, as misinformation generated by AI might seem very compelling to users [Spitale et al., 2023]. Finally, this dimension considers the AI’s explainability, i.e., human-understandable descriptions of the AI’s rationale and decision process [Doran, Schulz & Besold, 2017; Long & Magerko, 2020; OECD, 2022]. Explainability allows users to “appropriately calibrate trust and reliance [in the technology], to detect potential errors in machine reasoning, and to further help audit the technology” [Miller, Hoffman, Amir & Holzinger, 2022, p. 1]. These criteria, while partial, illuminate how users can learn not only about the presence of an AI system, but also about its capabilities and limitations and the practical aspects of how it operates.
The concept of costs and benefits considers the ability of the technology “to advance the interest of the user at minimal cost” [Sundar, 2020, p. 80], thus corresponding with the effortfulness that characterizes critical engagement with scientific information online [Hendriks et al., 2020; Lopez & Olvera-Lobo, 2018]. In this context, Sundar [2020, p. 83] explains that “user actions on the interface, such as searching, choosing settings and making decisions, are costs against which the benefits of AI media, such as tailored content and convenience, would be assessed”. We suggest, therefore, evaluating the extent to which the technology eases information searching, speeds its processing, and expands its scope. One simple criterion for such an evaluation is the number of “steps” (e.g., prompting, querying, opening additional tabs, reading different web pages, etc.) required of users to complete a task [see OECD, 2021; Perez et al., 2015]. For example, while searching for science information using Google’s Gemini only requires phrasing a prompt, doing the same using Google Search requires phrasing a query, evaluating the relevance and quality of possible sources, selecting some, reading them, and integrating the information [see Hendriks et al., 2020]. Beyond this simple numeric criterion, from an information literacy perspective, the costs and benefits dimension also considers the complexity of the task, evaluating the skill set and skill level required to complete it.
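As a toy illustration of the step-count criterion, the sketch below decomposes the same information task into user actions for two hypothetical workflows and tallies the steps; the decompositions are our own invented examples rather than empirical measurements.

```python
# Hypothetical step decompositions for one task, e.g., checking whether a
# fruitarian diet covers protein needs. Invented for illustration only.
WORKFLOWS = {
    "conversational GenAI": [
        "phrase a prompt",
        "read the generated answer",
    ],
    "web search engine": [
        "phrase a query",
        "scan the results page",
        "judge the relevance and quality of candidate sources",
        "open and read selected pages",
        "integrate the information across pages",
    ],
}

for technology, steps in WORKFLOWS.items():
    print(f"{technology}: {len(steps)} steps")
    for number, step in enumerate(steps, start=1):
        print(f"  {number}. {step}")
```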
In conclusion, this lens aims to bring into focus whether and how technological features contribute to users’ engagement with scientific information [see Evans & Gibbons, 2007; Hendriks et al., 2020]. According to Nielsen [2024], a technology can allow users control and freedom by catering to both inexperienced and experienced users, providing continuous interaction that leverages users’ language and communication, while affording multiple easy ways to correct or undo their mistakes. Such a technology will foster users’ positive experience and, in turn, a more efficient and satisfying engagement with scientific information [see Hendriks et al., 2020; Schäfer, 2023].
3.5 Third lens: content presentation
Research shows that scientific disinformation produced by GenAI can be more convincing than that produced by humans [Spitale et al., 2023]. Hence, practicing critical engagement with scientific information online is crucial and involves, inter alia, evaluating the credibility of the information source, the components and structure of the reasoning, and the overall argument in light of the scientific consensus [McGrew & Breakstone, 2023; Osborne & Pimentel, 2022; see also Halpern, 2014]. This third lens addresses whether, how, and to what extent the presentation of the content (rather than the capabilities of users) supports these practices of critical engagement with scientific information in online environments. These aspects also echo theories in science communication that describe tactics for minimizing uncertainty in public engagement with scientific information [Lammers, Ferrari, Wenmackers, Pattyn & Van de Walle, 2024]. Drawing from literature on science education [e.g., Stadtler & Bromme, 2014], critical thinking [e.g., Halpern, 2014], and digital literacy [e.g., McGrew & Breakstone, 2023], this part of the framework informs inquiries into whether the technology equips its users with opportunities to critically evaluate the reliability, validity, and accuracy of the information retrieved. Corresponding with Osborne and Pimentel’s [2022] “fast and frugal” heuristic model, the content presentation lens addresses three fundamental dimensions — sources, reasoning, and consensus — detailed below.
Evaluating source credibility is an efficient indicator of information reliability [McGrew & Breakstone, 2023; Osborne & Pimentel, 2022; see also Lopez & Olvera-Lobo, 2018] when content knowledge of the discussed issue is insufficient [Forzani, 2020]. This often involves assessing the expertise or benevolence of the source [Stadtler & Bromme, 2014] or investigating what other reliable sources say about it [McGrew & Breakstone, 2023; Osborne & Pimentel, 2022]. These assessments, however, are hard to apply when the GenAI technology itself functions as the source. Hence, whether and how the technology supports source evaluation becomes crucial for scholars and educators, as it impacts the potential uses of such tools. Following these lines, the source dimension focuses on two criteria: (1) whether the content presentation tends to identify relevant information sources (e.g., “according to the World Health Organization”), which allows users to evaluate whether the information can be trusted [Bromme, Kienhues & Porsch, 2010]; and (2) whether the output provides a direct link to a specific source (e.g., hyperlinks, URLs, articles’ titles, etc.), thus enabling users to corroborate the information [Barzilai, Thomm & Shlomi-Elooz, 2020] and assess both its reliability and accuracy.
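As one possible, admittedly crude, operationalization of these two criteria, the sketch below flags whether a given output names a source and whether it links to one. The regular expressions and the example output are illustrative placeholders rather than validated coding rules; human coding would remain the benchmark.

```python
import re

# Rough heuristics only; real coding of outputs would be done by human raters.
URL_PATTERN = re.compile(r"https?://\S+")
ATTRIBUTION_PATTERN = re.compile(
    r"\baccording to\b|\breported by\b|\bpublished in\b", re.IGNORECASE
)


def source_cues(output_text: str) -> dict:
    """Flag whether an output names an information source and links to one."""
    return {
        "names_a_source": bool(ATTRIBUTION_PATTERN.search(output_text)),
        "links_to_a_source": bool(URL_PATTERN.search(output_text)),
    }


# Invented example output with a placeholder URL.
example = ("According to the World Health Organization, vaccination prevents "
           "millions of deaths each year (https://example.org/vaccines-fact-sheet).")
print(source_cues(example))  # {'names_a_source': True, 'links_to_a_source': True}
```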
Reasoning evaluation is the second dimension under the content presentation lens. In the words of Tseng and colleagues [2021, p. 1156], evaluating scientific claims “involve[s] the assessment of scientific arguments for validity of reasoning and the quality of evidence that is used”. Furthermore, reasoning evaluation can become a decisive practice in the case of recent GenAI technologies, which often neglect to identify the source or do so incorrectly. Hence, although evaluating the scientific claims themselves might be more complicated for non-experts [Osborne & Pimentel, 2022], evaluating the argumentation may become a valuable avenue for establishing information credibility [Barzilai et al., 2020; Forzani, 2020], especially when information about the source is scarce or invalid [Dabran-Zivan & Baram-Tsabari, n.d.]. To allow users to evaluate the scientific claims and reappraise the plausibility of alternative explanations [Tseng et al., 2021; see also Halpern, 2014; McGrew & Breakstone, 2023], the reasoning dimension prompts users to consider the inclusion of evidence, scientific or otherwise, and whether the argument addresses alternative claims [Halpern, 2014].
Scientific consensus is the third dimension, functioning as “the public benchmark of reliability” [Osborne & Pimentel, 2022, p. 247], and as an influential factor in science communication [Chinn, Lane & Hart, 2018; van Stekelenburg, Schaap, Veling, van ’t Riet & Buijzen, 2022]. The consensus embodies what a decisive majority of experts agree on. As such, it portrays “the product of extensive empirical work that has been examined critically from many perspectives”, constituting “our best bet of what to trust” [Osborne & Pimentel, 2022, p. 247]. This dimension asks whether the technology supports easy understanding of the scientific consensus — for instance, by often addressing it directly — and whether the technology allows users to learn about the specifics of scientific disagreements.
To summarize, the framework suggests first observing the context, at both macro and micro levels, and then the technology, offering guiding criteria to characterize its basic technological properties, user experience, and content presentation.
4 To conclude
As GenAI technologies “themselves [are] becoming communicative participants” [Hepp et al., 2023, p. 42], they continue to shape and transform the way information is disseminated and consumed. Thus, understanding the underlying mechanisms and implications of these systems becomes crucial. In this paper, we proposed a conceptual framework that enables science communication researchers and educators to effectively characterize, evaluate, and compare AI-based information technologies in the context of critical engagement with scientific information.
Practically, this framework allows researchers and educators to understand and reflect on the strengths, weaknesses, and workings of AI technologies in science communication. From a theoretical standpoint, the framework equips researchers to keep pace with the rapid developments in the AI terrain and fosters cumulative research rather than isolated investigations.
Because of the recency of the technology and the lack of empirically tested theories regarding critical engagement with scientific information through AI-based information technologies, this framework is naturally incomplete, and more work is needed to determine its usefulness. Furthermore, as a multidisciplinary endeavor drawing on theoretical perspectives from many disciplines, the framework might not provide a coherent theory. Hence, first and foremost, this effort hopes to stir scholarly discussion toward further development and fine-tuning of the framework, leading to a more refined and encompassing tool for future research and practice.
This framework represents an initial step. It is a reflection aid designed to support researchers and educators with a limited understanding of AI systems in selecting and presenting the technologies most suitable to a particular context. By highlighting important dimensions of critical engagement with scientific information through AI, the framework may help develop relevant strategies for critical engagement with AI-generated content and identify gaps in existing empirical research, including both known unknowns and actual blind spots.
Acknowledgments
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this research was supported by the Niedersächsisches Vorab program, funded by the Lower Saxony Ministry for Science and Culture, Germany.
References
Alvarez, A., Caliskan, A., Crockett, M. J., Ho, S. S., Messeri, L. & West, J. (2024). Science communication with generative AI. Nature Human Behaviour 8 (4), 625–627. doi:10.1038/s41562-024-01846-3
Bang, Y., Cahyawijaya, S., Lee, N., Dai, W., Su, D., Wilie, B., … Fung, P. (2023). A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv: 2302.04023
Bao, L., Krause, N. M., Calice, M. N., Scheufele, D. A., Wirz, C. D., Brossard, D., … Xenos, M. A. (2022). Whose AI? How different publics think about AI and its social impacts. Computers in Human Behavior 130, 107182. doi:10.1016/j.chb.2022.107182
Barzilai, S., Mor-Hagani, S., Abed, F., Tal-Savir, D., Goldik, N., Talmon, I. & Davidow, O. (2023). Misinformation is contagious: middle school students learn how to evaluate and share information responsibly through a digital game. Computers & Education 202, 104832. doi:10.1016/j.compedu.2023.104832
Barzilai, S., Thomm, E. & Shlomi-Elooz, T. (2020). Dealing with disagreement: the roles of topic familiarity and disagreement explanation in evaluation of conflicting expert claims and sources. Learning and Instruction 69, 101367. doi:10.1016/j.learninstruc.2020.101367
Biyela, S., Dihal, K., Gero, K. I., Ippolito, D., Menczer, F., Schäfer, M. S. & Yokoyama, H. M. (2024). Generative AI and science communication in the physical sciences. Nature Reviews Physics 6 (3), 162–165. doi:10.1038/s42254-024-00691-7
Brewer, R., Pierce, C., Upadhyay, P. & Park, L. (2022). An empirical study of older adult’s voice assistant use for health information seeking. ACM Transactions on Interactive Intelligent Systems 12 (2), 13. doi:10.1145/3484507
Bromme, R., Kienhues, D. & Porsch, T. (2010). Who knows what and who can we believe? Epistemological beliefs are beliefs about knowledge (mostly) to be attained from others. In L. D. Bendixen & F. C. Feucht (Eds.), Personal epistemology in the classroom: theory, research, and implications for practice (pp. 163–194). doi:10.1017/CBO9780511691904.006
Bucchi, M. & Trench, B. (2014). Science communication research: themes and challenges. In M. Bucchi & B. Trench (Eds.), Routledge handbook of public communication of science and technology (2nd ed., pp. 1–14). doi:10.4324/9780203483794
Canfield, K. N., Menezes, S., Matsuda, S. B., Moore, A., Mosley Austin, A. N., Dewsbury, B. M., … Taylor, C. (2020). Science communication demands a critical approach that centers inclusion, equity, and intersectionality. Frontiers in Communication 5, 2. doi:10.3389/fcomm.2020.00002
Chan, A. (2023). GPT-3 and InstructGPT: technological dystopianism, utopianism, and “Contextual” perspectives in AI ethics and industry. AI and Ethics 3 (1), 53–64. doi:10.1007/s43681-022-00148-6
Chattaraman, V., Kwon, W.-S., Gilbert, J. E. & Ross, K. (2019). Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior 90, 315–330. doi:10.1016/j.chb.2018.08.048
Chinn, S., Lane, D. S. & Hart, P. S. (2018). In consensus we trust? Persuasive effects of scientific consensus communication. Public Understanding of Science 27 (7), 807–823. doi:10.1177/0963662518791094
Chong, T., Yu, T., Keeling, D. I. & de Ruyter, K. (2021). AI-chatbots on the services frontline addressing the challenges and opportunities of agency. Journal of Retailing and Consumer Services 63, 102735. doi:10.1016/j.jretconser.2021.102735
Coyle, D., Moore, J., Kristensson, P. O., Fletcher, P. & Blackwell, A. (2012). I did that! Measuring users’ experience of agency in their own actions. In CHI ’12: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2025–2034). doi:10.1145/2207676.2208350
Dabran-Zivan, S. & Baram-Tsabari, A. (n.d.). The importance of science education, scientific knowledge, and evaluation strategies for the successful detection of COVID-19 misinformation. To appear.
Dabran-Zivan, S., Baram-Tsabari, A., Shapira, R., Yitshaki, M., Dvorzhitskaia, D. & Grinberg, N. (2023). “Is COVID-19 a hoax?”: auditing the quality of COVID-19 conspiracy-related information and misinformation in Google Search results in four languages. Internet Research 33 (5), 1774–1801. doi:10.1108/INTR-07-2022-0560
Daft, R. L., Lengel, R. H. & Trevino, L. K. (1987). Message equivocality, media selection, and manager performance: implications for information systems. MIS Quarterly 11 (3), 355–366. doi:10.2307/248682
Dambanemuya, H. K. & Diakopoulos, N. (2021). Auditing the information quality of news-related queries on the Alexa voice assistant. In Proceedings of the ACM on Human-Computer Interaction. Vol. 5(CSCW1), 83. doi:10.1145/3449157
Doran, D., Schulz, S. & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv: 1710.00794
Dubovi, I. & Tabak, I. (2021). Interactions between emotional and cognitive engagement with science on YouTube. Public Understanding of Science 30 (6), 759–776. doi:10.1177/0963662521990848
Evans, C. & Gibbons, N. J. (2007). The interactivity effect in multimedia learning. Computers & Education 49 (4), 1147–1160. doi:10.1016/j.compedu.2006.01.008
Fähnrich, B. (2021). Conceptualizing science communication in flux — a framework for analyzing science communication in a digital media environment. JCOM 20 (03), Y02. doi:10.22323/2.20030402
Feinstein, N. W. & Baram-Tsabari, A. (2024). Epistemic networks and the social nature of public engagement with science. Journal of Research in Science Teaching. doi:10.1002/tea.21941
Fernandes, G. W. R., Rodrigues, A. M. & Ferreira, C. A. (2020). Professional development and use of digital technologies by science teachers: a review of theoretical frameworks. Research in Science Education 50 (2), 673–708. doi:10.1007/s11165-018-9707-x
Fortunati, L., Edwards, A. P., Manganelli, A. M., Edwards, C. & de Luca, F. (2022). Do people perceive Alexa as gendered? A cross-cultural study of people’s perceptions, expectations, and desires of Alexa. Human-Machine Communication 5, 75–97. doi:10.30658/hmc.5.3
Forzani, E. (2020). A three-tiered framework for proactive critical evaluation during online inquiry. Journal of Adolescent & Adult Literacy 63 (4), 401–414. doi:10.1002/jaal.1004
Gambino, A., Fox, J. & Ratan, R. A. (2020). Building a stronger CASA: extending the Computers Are Social Actors paradigm. Human-Machine Communication 1, 71–85. doi:10.30658/hmc.1.5
Greussing, E., Guenther, L., Baram-Tsabari, A., Dabran-Zivan, S., Jonas, E., Klein-Avraham, I., … Song, H. J. (2024). Predicting and describing the use of generative AI in science-related information search: insights from a multinational survey. In Science communication in the age of artificial intelligence. Book of abstracts. Annual conference of the “Science Communication” Division of the German Communication Association. University of Zürich, Zürich, Switzerland, June 6–7, 2024 (pp. 43–44).
Greussing, E., Kessler, S. H. & Boomgaarden, H. G. (2020). Learning from science news via interactive and animated data visualizations: an investigation combining eye tracking, online survey, and cued retrospective reporting. Science Communication 42 (6), 803–828. doi:10.1177/1075547020962100
Guzman, A. L. & Lewis, S. C. (2020). Artificial intelligence and communication: a Human-Machine Communication research agenda. New Media & Society 22 (1), 70–86. doi:10.1177/1461444819858691
Halpern, D. F. (2014). Thought and knowledge: an introduction to critical thinking (5th ed.). New York, NY, U.S.A.: Psychology Press. doi:10.4324/9781315885278
Hancock, J. T., Naaman, M. & Levy, K. (2020). AI-mediated communication: definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication 25 (1), 89–100. doi:10.1093/jcmc/zmz022
Hendriks, F., Mayweg-Paus, E., Felton, M., Iordanou, K., Jucks, R. & Zimmermann, M. (2020). Constraints and affordances of online engagement with scientific information — a literature review. Frontiers in Psychology 11, 572744. doi:10.3389/fpsyg.2020.572744
Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., … Schulz, W. (2023). ChatGPT, LaMDA, and the hype around communicative AI: the automation of communication as a field of research in media and communication studies. Human-Machine Communication 6, 41–63. doi:10.30658/hmc.6.4
Hoffmann, A. L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22 (7), 900–915. doi:10.1080/1369118X.2019.1573912
Houde, S., Liao, V., Martino, J., Muller, M., Piorkowski, D., Richards, J., … Zhang, Y. (2020). Business (mis)use cases of generative AI. In IUI 2020 Workshop on Human-AI Co-Creation with Generative Models, Cagliari, Italy. arXiv: 2003.07679
Information Technology. (2024). In Merriam-Webster.com Dictionary. Retrieved from https://www.merriam-webster.com/dictionary/information+technologies
Ishii, K., Lyons, M. M. & Carr, S. A. (2019). Revisiting media richness theory for today and future. Human Behavior and Emerging Technologies 1 (2), 124–131. doi:10.1002/hbe2.138
Jungherr, A. & Schroeder, R. (2023). Artificial intelligence and the public arena. Communication Theory 33 (2–3), 164–173. doi:10.1093/ct/qtad006
Kang, H. & Lou, C. (2022). AI agency vs. human agency: understanding human-AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication 27 (5), zmac014. doi:10.1093/jcmc/zmac014
Kang, H. & Sundar, S. S. (2016). When self is the source: effects of media customization on message processing. Media Psychology 19 (4), 561–588. doi:10.1080/15213269.2015.1121829
Keeling, K., McGoldrick, P. & Beatty, S. (2010). Avatars as salespeople: communication style, trust, and intentions. Journal of Business Research 63 (8), 793–800. doi:10.1016/j.jbusres.2008.12.015
Kidd, C. & Birhane, A. (2023). How AI can distort human beliefs. Science 380 (6651), 1222–1223. doi:10.1126/science.adi0248
Lamb, G., Polman, J. L., Newman, A. & Smith, C. G. (2014). Science news infographics: teaching students to gather, interpret, and present information graphically. The Science Teacher 81 (3), 25–30. Retrieved from https://www.jstor.org/stable/43683666
Lammers, W., Ferrari, S., Wenmackers, S., Pattyn, V. & Van de Walle, S. (2024). Theories of uncertainty communication: an interdisciplinary literature review. Science Communication 46 (3), 332–365. doi:10.1177/10755470241231290
Lasswell, H. D. (1948). The structure and function of communication in society. The Communication of Ideas: a Series of Addresses 37 (1), 136–139.
Lin, S.-S. (2014). Science and non-science undergraduate students’ critical thinking and argumentation performance in reading a science news report. International Journal of Science and Mathematics Education 12 (5), 1023–1046. doi:10.1007/s10763-013-9451-7
Lin, Z. (2023). Supercharging academic writing with generative AI: framework, techniques, and caveats. PsyArXiv. doi:10.31234/osf.io/9yhwz
Liu, S.-H., Liao, H.-L. & Pratt, J. A. (2009). Impact of media richness and flow on e-learning technology acceptance. Computers & Education 52 (3), 599–607. doi:10.1016/j.compedu.2008.11.002
Liu, Y. & Shrum, L. J. (2002). What is interactivity and is it always such a good thing? Implications of definition, person, and situation for the influence of interactivity on advertising effectiveness. Journal of Advertising 31 (4), 53–64. doi:10.1080/00913367.2002.10673685
Lo, L. S. (2023). The CLEAR path: a framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship 49 (4), 102720. doi:10.1016/j.acalib.2023.102720
Long, D. & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In CHI ’20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). doi:10.1145/3313831.3376727
Lopez, L. & Olvera-Lobo, M. D. (2018). Public engagement in science via Web 2.0 technologies. Evaluation criteria validated using the Delphi Method. JCOM 17 (02), A08. doi:10.22323/2.17020208
McGrew, S. & Breakstone, J. (2023). Civic online reasoning across the curriculum: developing and testing the efficacy of digital literacy lessons. AERA Open 9. doi:10.1177/23328584231176451
Miller, T., Hoffman, R., Amir, O. & Holzinger, A. (2022). Special issue on Explainable Artificial Intelligence (XAI). Artificial Intelligence 307, 103705. doi:10.1016/j.artint.2022.103705
Natale, S. & Ballatore, A. (2020). Imagining the thinking machine: technological myths and the rise of artificial intelligence. Convergence: the International Journal of Research into New Media Technologies 26 (1), 3–18. doi:10.1177/1354856517715164
Nielsen, J. (1993). Usability engineering. Boston, MA, U.S.A.: Academic Press.
Nielsen, J. (2024, January 30). 10 usability heuristics for user interface design. Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/ten-usability-heuristics/
Nisbet, M. C. & Scheufele, D. A. (2009). What’s next for science communication? Promising directions and lingering distractions. American Journal of Botany 96 (10), 1767–1778. doi:10.3732/ajb.0900041
OECD (2019). Artificial intelligence in society. Paris, France: OECD Publishing. doi:10.1787/eedfee77-en
OECD (2021). AI and the future of skills. Volume 1: Capabilities and assessments. Paris, France: OECD Publishing. doi:10.1787/5ee71f34-en
OECD (2022). OECD framework for the classification of AI systems. OECD Digital Economy Papers, No. 323. Paris, France: OECD Publishing. doi:10.1787/cb6d9eca-en
OECD (2023). Is education losing the race with technology? AI’s progress in maths and reading. Paris, France: OECD Publishing. doi:10.1787/73105f99-en
Olesk, A., Renser, B., Bell, L., Fornetti, A., Franks, S., Mannino, I., … Zollo, F. (2021). Quality indicators for science communication: results from a collaborative concept mapping exercise. JCOM 20 (03), A06. doi:10.22323/2.20030206
Osborne, J. & Pimentel, D. (2022). Science, misinformation, and the role of education. Science 378 (6617), 246–248. doi:10.1126/science.abq8093
Perez, S. L., Paterniti, D. A., Wilson, M., Bell, R. A., Chan, M. S., Villareal, C. C., … Kravitz, R. L. (2015). Characterizing the processes for navigating Internet health information using real-time observations: a mixed-methods approach. Journal of Medical Internet Research 17 (7), e173. doi:10.2196/jmir.3945
Polman, J. L., Newman, A., Saul, E. W. & Farrar, C. (2014). Adapting practices of science journalism to foster science literacy. Science Education 98 (5), 766–791. doi:10.1002/sce.21114
Pradhan, A., Lazar, A. & Findlater, L. (2020). Use of intelligent voice assistants by older adults with low technology use. ACM Transactions on Computer-Human Interaction 27 (4), 31. doi:10.1145/3373759
Sartori, L. & Bocca, G. (2023). Minding the gap(s): public perceptions of AI and socio-technical imaginaries. AI & Society 38 (2), 443–458. doi:10.1007/s00146-022-01422-1
Schäfer, M. S. (2023). The Notorious GPT: science communication in the age of artificial intelligence. JCOM 22 (02), Y02. doi:10.22323/2.22020402
Sharon, A. J. & Baram-Tsabari, A. (2020). Can science literacy help individuals identify misinformation in everyday life? Science Education 104 (5), 873–894. doi:10.1002/sce.21581
Shin, D., Koerber, A. & Lim, J. S. (2024). Impact of misinformation from generative AI on user information processing: how people understand misinformation from generative AI. New Media & Society. doi:10.1177/14614448241234040
Sohn, D. (2011). Anatomy of interaction experience: distinguishing sensory, semantic, and behavioral dimensions of interactivity. New Media & Society 13 (8), 1320–1335. doi:10.1177/1461444811405806
Spitale, G., Biller-Andorno, N. & Germani, F. (2023). AI model GPT-3 (dis)informs us better than humans. Science Advances 9 (26), eadh1850. doi:10.1126/sciadv.adh1850
Stadtler, M. & Bromme, R. (2014). The content-source integration model: a taxonomic description of how readers comprehend conflicting scientific information. In D. N. Rapp & J. L. G. Braasch (Eds.), Processing inaccurate information: theoretical and applied perspectives from cognitive science and the educational sciences (pp. 379–402). doi:10.7551/mitpress/9737.003.0023
Suchman, L., Blomberg, J., Orr, J. E. & Trigg, R. (1999). Reconstructing technologies as social practice. American Behavioral Scientist 43 (3), 392–408. doi:10.1177/00027649921955335
Sundar, S. S. (2020). Rise of machine agency: a framework for studying the psychology of human-AI interaction (HAII). Journal of Computer-Mediated Communication 25 (1), 74–88. doi:10.1093/jcmc/zmz026
Sundar, S. S., Jia, H., Waddell, T. F. & Huang, Y. (2015). Toward a theory of interactive media effects (TIME): four models for explaining how interface features affect user psychology. In S. S. Sundar (Ed.), The handbook of the psychology of communication technology (pp. 47–86). doi:10.1002/9781118426456.ch3
Sundar, S. S. & Lee, E.-J. (2022). Rethinking communication in the era of artificial intelligence. Human Communication Research 48 (3), 379–385. doi:10.1093/hcr/hqac014
Taddicken, M. & Krämer, N. (2021). Public online engagement with science information: on the road to a theoretical framework and a future research agenda. JCOM 20 (03), A05. doi:10.22323/2.20030205
Tang, K.-S. (2024). Informing research on generative artificial intelligence from a language and literacy perspective: a meta-synthesis of studies in science education. Science Education 108 (5), 1329–1355. doi:10.1002/sce.21875
Tatalovic, M. (2018). AI writing bots are about to revolutionise science journalism: we must shape how this is done. JCOM 17 (01), E. doi:10.22323/2.17010501
Tseng, A. S., Bonilla, S. & MacPherson, A. (2021). Fighting “bad science” in the information age: the effects of an intervention to stimulate evaluation and critique of false scientific claims. Journal of Research in Science Teaching 58 (8), 1152–1178. doi:10.1002/tea.21696
van Stekelenburg, A., Schaap, G., Veling, H., van ’t Riet, J. & Buijzen, M. (2022). Scientific-consensus communication about contested science: a preregistered meta-analysis. Psychological Science 33 (12), 1989–2008. doi:10.1177/09567976221083219
Wang, Q., Camacho, I., Jing, S. & Goel, A. K. (2022). Understanding the design space of AI-mediated social interaction in online learning: challenges and opportunities. In Proceedings of the ACM on Human-Computer Interaction. Vol. 6(CSCW1), 130. doi:10.1145/3512977
Weingart, P. & Guenther, L. (2016). Science communication and the issue of trust. JCOM 15 (05), C01. doi:10.22323/2.15050301
West, J. D. & Bergstrom, C. T. (2021). Misinformation in and about science. Proceedings of the National Academy of Sciences 118 (15), e1912444117. doi:10.1073/pnas.1912444117
Zajko, M. (2021). Conservative AI and social inequality: conceptualizing alternatives to bias through social theory. AI & Society 36 (3), 1047–1056. doi:10.1007/s00146-021-01153-9
Zuccon, G. & Koopman, B. (2023). Dr ChatGPT, tell me what I want to hear: how prompt knowledge impacts health answer correctness. arXiv: 2302.13793
Notes
1. https://gemini.google.com/app.
2. https://www.bing.com/chat?q=Microsoft+Copilot&FORM=hpcodx.
3. https://www.wolframalpha.com/about.
About the authors
Dr. Inbal Klein-Avraham is a postdoctoral fellow at the Faculty of Education in Science and Technology, Technion — Israel Institute of Technology. Her current research focuses on publics’ critical engagement with science via AI-based information technologies. Her previous studies have been published in journals including New Media & Society and Journalism Studies.
E-mail: inbal.klein@campus.technion.ac.il
Dr. Esther Greussing is a postdoctoral researcher at the Institute for Communication Science at Technische Universität Braunschweig, Germany. She holds a Ph.D. in Communication from the University of Vienna, Austria. Her research focuses on the use and effects of science communication in the digital age, particularly exploring how emerging information technologies shape public engagement with science.
E-mail: e.greussing@tu-braunschweig.de X: @estherGreussing
Prof. Dr. Monika Taddicken heads the Institute for Communication Science at Technische Universität Braunschweig, Germany, a member of the T9-Alliance. Her main interest is the intersection of digital and science communication. Her research focuses primarily on the user perspective. In addition, she has a strong methodological interest and applies a variety of different empirical methods.
E-mail: m.taddicken@tu-braunschweig.de X: @m_taddicken
Shakked Dabran-Zivan is a Ph.D. candidate under the supervision of Prof. Ayelet Baram-Tsabari at the Technion — Israel Institute of Technology. As misinformation and conspiracy theories become increasingly widespread and accessible online, her research examines how science literacy helps counter false information. Her studies explore the relationship between individual abilities, societal resources, and the future of artificial intelligence as a mediating force, as well as the possible implications of this relationship in a world characterized by post-truth phenomena.
E-mail: shakkeda@gmail.com
Evelyn Jonas is a research assistant at the Institute for Communication Science at Technische Universität Braunschweig, Germany. She holds a Master’s degree in Media Technology and Communication from Technische Universität Braunschweig. She is currently working on her Ph.D. project on trust in communicative artificial intelligence as a new intermediary for science-related information.
E-mail: evelyn.jonas@tu-braunschweig.de X: @eveptr
Prof. Ayelet Baram-Tsabari, a former science journalist, is a professor of science education and communication at the Faculty of Education in Science and Technology, Technion — Israel Institute of Technology. Her research focuses on supporting public engagement with science and effective science communication. Baram-Tsabari hosts a science communication MOOC on edX and serves as an editorial board member on the journals Public Understanding of Science, Science Communication, and the International Journal of Science Education: Part B.
E-mail: ayelet@technion.ac.il X: @Ayelet_bt
Supplementary material
Available at https://doi.org/10.22323/2.23060205
A detailed account of the three lenses