1 Introduction

Generative artificial intelligence (GenAI) made a splash in November 2022 through the introduction of an application called ChatGPT. Even though research on GenAI has existed since the 1960s, the application’s readily available interface made the technology accessible to the public [Chow, 2023]. As of January 2025, ChatGPT exceeded 100 million users [Duarte, 2025]. Media coverage has since ranged from optimistic to cautious, reflecting the broad reach of GenAI in sectors like business, law, public relations, entertainment, education, and science [Baidoo-anu & Owusu Ansah, 2023]. This paper focuses on how experts, including scientists, engage with the public about GenAI on social media, exploring the dynamics of public engagement with science and technology (PES).

Communicating science involves a range of methods including journalists’ reports of scientific findings, scientists’ public lectures, and research papers. Kappel and Holmen [2019] describe two paradigms of science communication: the knowledge dissemination model (i.e., the one-way science communication model) and the public participation model (i.e., the interactive science communication model). This paper adopts the public participation model, often referred to as “Public Engagement with Science (PES)”, which is primarily rooted in Science and Technology Studies (STS) [Weingart et al., 2021]. The PES model has gained increased relevance due to the rise of social media platforms that allow the non-expert public to directly interact with scientists and experts, for example on Reddit’s AMA (Ask Me Anything) sessions [e.g., Tang et al., 2021]. With GenAI, the underlying technology’s complexity contrasts with its accessible, user-friendly interface, allowing even the non-expert public (hereafter the public) to engage with it through natural language. This unique dynamic invites further exploration into how the public not only asks questions about GenAI but also contributes practical insights on social media, making it an ideal subject for study within the framework of the PES model.

While discussing science communication using the PES model, Schäfer [2023] emphasizes that engaging the public is a key responsibility of experts in universities, alongside research and teaching. This “third mission” of higher education highlights the need for scientists to communicate science to a broader audience, including the complex and evolving topic of GenAI. AI scientists, alongside experts from various AI-related fields, engage with GenAI, but so do laypeople. Investigating how experts communicate about GenAI on social media is therefore essential, as these platforms afford dialogues about scientific topics between scientists and the public [Hara & Chae, 2025]. Accordingly, this paper explores how experts and laypeople engage in the co-production of knowledge about GenAI on social media and examines the roles both groups play in shaping the understanding of the technology.

2 Literature review

2.1 Emergence of co-production of scientific knowledge on social media

One traditional view of scientific communication follows a relatively linear model where knowledge is disseminated from scientists to the public, known as the “Public Education Model” [Callon, 1999]. This model emphasizes the role of experts in conveying information to a passive public, with experts often drawing boundaries between themselves and non-experts — a concept referred to as “boundary work” [Gieryn, 1983]. Previous research focused on how scientists establish authority and how journalists, healthcare professionals, and government organizations act as traditional mediators in transmitting scientific knowledge [Mo Jang, 2014].

In recent years, the communication of scientific knowledge has incorporated additional approaches. Social media platforms have transformed the public into active participants, not just recipients, of information. This shift has led to what Bucchi [2016] describes as a “crisis of [traditional] mediators” [p. 265], as platforms like Wikipedia and other collaborative spaces facilitate the public’s contribution to knowledge production. Rather than relying on traditional intermediaries, individuals increasingly engage in the co-production of scientific content, reflecting a more interactive communication process.

The study of knowledge production in science and technology studies (STS) has long explored the role of laypeople in contributing to scientific knowledge [Latour, 1988; Star, 1995; Callon, 1999; Wynne, 1992], a line of inquiry that has become more salient as the public increasingly engages in the co-production of scientific knowledge. There are three primary perspectives: (a) scientists retain exclusive authority over knowledge production, and the boundary between scientists and the public is generally impermeable [Wynne, 1992, 1996]; (b) laypeople and scientists engage in partnerships to co-produce knowledge [Callon, 1999]; (c) laypeople contribute to knowledge production independently, particularly facilitated by social media and its affordances for interactivity [Casiraghi et al., 2024; Hara & Chae, 2025]. This paper focuses on the third, layperson-driven contribution model by addressing the public’s increasing role in co-producing knowledge on social media platforms.

In the layperson-driven contribution model, knowledge collaboration in online environments can take two forms: knowledge reuse, which involves sharing pre-existing knowledge [Markus, 2001; Majchrzak et al., 2004], and the collaborative production of new knowledge, exemplified by platforms like Wikipedia and X [Simons et al., 2024; Casiraghi et al., 2024]. While previous research explored layperson contributions to new scientific knowledge, such as new drug development [Callon & Rabeharisoa, 2003], the rise of social media necessitates further investigation into how the public participates in this process, especially regarding emerging technologies like GenAI.

Recent studies have begun to explore how scientists and experts engage in GenAI discussions on social media. For example, Miyazaki et al. [2024] analyzed discussions on X, revealing that “professors” and “researchers” (excluding data scientists) frequently discussed GenAI between 2019 and 2023. However, there is limited research on how scientists and experts engage with the public to co-produce knowledge on social media, particularly knowledge that reflects their experiences, perspectives, hopes, and concerns about GenAI. Recognizing this gap, this study builds on the co-production of knowledge model to investigate how experts and laypeople contribute to shaping public understanding of GenAI on social media.

2.2 Occupations and fields of experts engaged in social media discussions about GenAI

GenAI is anticipated to have different impacts across various occupations and fields, as each sector encompasses distinct tasks that are susceptible to replacement or enhancement by GenAI technologies [Gmyrek et al., 2023]. This implies that the public and the scientific community alike may engage with GenAI’s advancement to varying degrees and perceive its relevance differently.

Moreover, public discussions about GenAI extend beyond its scientific and technical aspects by delving into its potential applications across a range of academic disciplines. This is significant considering its accessibility to both the public and scientific fields outside of traditional computer science [e.g., Frey & Osborne, 2023]. Therefore, understanding whether individual experts belong to a scientific community directly related to GenAI (e.g., computer science) or not (e.g., ethics education) is crucial for contextualizing their perspectives and stakes in the advancement of GenAI. Informed by this context and previous studies [e.g., Miyazaki et al., 2024], we investigated the following research question:

RQ 1: Who are the experts, in terms of occupation, communicating about GenAI with the public on social media?

2.3 Topics of experts’ and public discussions about generative AI

Following the emergence of GenAI, including ChatGPT, researchers are now discussing its broad social impacts and domain-specific implications, such as in education [Yu, 2023] and the workforce [Lund & Wang, 2023]. As the new technology reaches a wider range of users through an easy-to-use interface that requires no programming skills, the public not only experiences a new and unobtrusive influence, but also interacts with technology that they can adopt in their daily lives. Thus, the public may engage with experts and others to share their thoughts on the new technology and its implications, while also learning and teaching about what it is and how it can be used effectively.

In the early stages of adopting new technologies, it is important to understand which topics are discussed and how to address concerns or misunderstandings about the technology, as seen in previous cases such as public Wi-Fi and nanotechnology [Chiang & Tang, 2022; Cobb & Macoubrie, 2004]. In turn, understanding the public’s responses will help guide the technology’s future development. Additionally, studying the contexts of the public’s early reflections on their needs and concerns regarding the new technology will inform public education efforts [e.g., Fuglerud et al., 2021]. Thus, we explored the following research question:

RQ 2: What aspects of GenAI do experts and the public discuss on social media?

2.4 Roles of experts and the public in knowledge co-production on generative AI

The public should not be viewed simply as passive stakeholders of GenAI. However, numerous previous studies that examined the potential impacts of GenAI on education and science communication [e.g., Chiu, 2024; Schäfer, 2023] posit that the public passively adopts or is driven by new technology. Therefore, how non-expert individuals communicate, learn, and, in some cases, teach about using ‘GenAI in practice’ remains largely underexplored.

Given this gap, in addition to RQ2, we explored how experts and the public participate in knowledge ‘co-production’ [Callon, 1999] about GenAI on social media platforms. As the public widely uses and explores GenAI for better applications, experts and publics may seek, share, and interpret relevant technical information and experiences with GenAI tools. This dynamic represents the “social construction of knowledge in online environments” [Hara & Sanfilippo, 2016; König, 2013].

In the previous literature, researchers have identified different types of roles that participants have played online to support knowledge collaboration activities. Hara and Sanfilippo [2016] classified 15 roles of knowledge collaboration by examining online discussions about MMR vaccination on three different platforms.

While Hara and Sanfilippo identified these roles by analyzing discussions about the contentious topic of child vaccination, we believe that considering these roles in our analysis of knowledge co-production will be informative.

In the knowledge co-production process concerning GenAI, we anticipate a departure from the conventional roles assigned by the deficit model, which typically casts scientists and experts as the primary disseminators of knowledge and the public as passive recipients [Cortassa, 2016]. Guided by previous studies [for summaries and references of the roles, read Hara & Sanfilippo, 2016], we formulated the following research question:

RQ 3: What types of roles do experts and the public take on when discussing GenAI on social media?

2.5 Predictors of public engagement

In the framework of PES, understanding the roles of various actors in scientific discourse and the themes they address is paramount [Stilgoe et al., 2014]. This understanding is vital for fostering informed dialogue, democratizing science and technology, and enhancing public trust and understanding [Invernizzi, 2020; Lemke & Harris-Wai, 2015]. It also increases the inclusion of diverse perspectives and fosters an environment where science and technology (in this context, GenAI) are accessible and accountable to the broader public [Chilvers, 2013; Hara et al., 2019], thus moving technology assessment upstream [Weingart et al., 2021].

Therefore, we argue that examining actors’ roles is crucial for addressing challenges associated with GenAI; it ensures that the development and implementation of the technology is not only scientifically sophisticated, but also socially responsible and ethical [Luckett, 2023]. Thus, we emphasize the necessity of recognizing and analyzing the contributions of these diverse actors, as they are instrumental for driving effective public engagement in science and technology. Based on this point, we formulated the following research question:

RQ 4: How do different social roles influence engagement levels with expert posts and public replies regarding GenAI?

3 Methods

Based on our study objectives, we employed two methodological approaches to comprehensively examine communication between experts and the public regarding GenAI. Data were collected from the social media platform X. Our computational approach focused on topic modeling, while manual content analysis focused on the social roles in the discourse concerning GenAI.

3.1 Data collection (computational analysis)

We first analyzed the topics of discussions between experts and the public regarding GenAI on X. The data collection consisted of three steps: 1) harvesting posts about GenAI, 2) identifying experts among the authors of those posts, and 3) collecting replies from the public to the experts’ posts about GenAI.

For original posts about GenAI, we used data from an existing study [Miyazaki et al., 2024], which provides a comprehensive collection of posts about GenAI tools on X. The tools included in this investigation were ChatGPT, Bing Chat, DALL-E, DALL-E 2, Stable Diffusion, Midjourney, Craiyon, GitHub Copilot, GPT-2, and GPT-3.

To extract experts from the authors of GenAI posts, we used the authors’ profile texts. Specifically, we retained only authors whose profiles contained any of the words “scientist”, “researcher”, or “professor”. This operationalization acknowledges the diverse fields impacted by GenAI, reflecting the multidisciplinary nature of discussions surrounding the technology. It captures expertise from domains such as law, business, education, and creative industries, where professionals contribute to shaping public understanding of generative AI’s social and technical impacts. We extracted finer-grained expertise and other attributes of these profiles through topic modeling, which we describe in a later section.
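To illustrate this keyword-based operationalization, the following is a minimal sketch in Python. It assumes the post authors and their profile texts are available in a CSV file with hypothetical columns author_id and profile_text; the actual field names and filtering details in our pipeline may differ.

```python
import re
import pandas as pd

# Hypothetical input: one row per author of a GenAI post, with the X profile text.
authors = pd.read_csv("genai_post_authors.csv")  # assumed columns: author_id, profile_text

# Retain authors whose profile mentions any of the three occupation keywords.
EXPERT_PATTERN = re.compile(r"\b(scientist|researcher|professor)\b", flags=re.IGNORECASE)

authors["is_expert"] = authors["profile_text"].fillna("").str.contains(EXPERT_PATTERN)
experts = authors[authors["is_expert"]].drop_duplicates(subset="author_id")

print(f"{len(experts)} expert accounts identified out of {len(authors)} post authors")
```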

To obtain replies from the public to the posts of experts, we used X’s Academic API. The data collection period was between Nov. 30, 2022 (the date of the release of ChatGPT) and June 23, 2023. As a result, we accumulated 50,487 posts about GenAI by 15,291 experts, and 36,315 replies to these posts by 26,878 users.

3.2 Topic modeling

To understand the contents of the posts and replies about GenAI and to identify the attributes of their authors, we applied topic modeling to the collected textual data (i.e., the contents of the posts and replies as well as the profiles of the authors). Topic modeling is an unsupervised machine learning technique often used to classify large amounts of textual information; a typical example is latent Dirichlet allocation (LDA). For this study, we used the Biterm Topic Model (BTM) [Yan et al., 2013], an LDA-based approach that specializes in short texts. We selected the optimal number of topics following Zhao et al. [2015] by examining the perplexity of models fitted to the posts and replies.

The results of topic modeling provide representative words that have a high probability of belonging to each topic. In this study, we generated the top 10 words per topic and labeled each topic by studying the outputs of the topic modeling.
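As an illustration of this workflow, the sketch below uses scikit-learn’s LDA implementation as a stand-in for the BTM we actually used: it fits models over a range of candidate topic numbers, selects the one with the lowest perplexity in the spirit of Zhao et al. [2015], and prints the top 10 words per topic for manual labeling. The corpus, preprocessing, and candidate range are placeholders.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpus; in practice these are the cleaned posts, replies, or profile texts.
docs = [
    "chatgpt helps me draft lecture slides for my class",
    "stable diffusion generates concept art from text prompts",
    "github copilot suggests code while I write python",
    "dall e image generation raises copyright questions",
    "using gpt 3 to summarize research papers quickly",
    "bing chat answers search questions in natural language",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
vocab = vectorizer.get_feature_names_out()

# Compare perplexity across candidate numbers of topics and keep the best model.
best_k, best_model, best_perplexity = None, None, float("inf")
for k in range(2, 9):
    lda = LatentDirichletAllocation(n_components=k, random_state=42).fit(X)
    perplexity = lda.perplexity(X)
    if perplexity < best_perplexity:
        best_k, best_model, best_perplexity = k, lda, perplexity

# Top 10 words per topic, which are then inspected and labeled manually.
for t, weights in enumerate(best_model.components_):
    top_words = [vocab[i] for i in weights.argsort()[::-1][:10]]
    print(f"Topic {t}: {', '.join(top_words)}")
```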

3.3 Manual content analysis: social role within the network

For manual content analysis, we randomly extracted data from the main data file used for computational analysis. To ensure fair representation of the public’s replies within each thread, each randomly selected main tweet had at least 3 replies, resulting in 99 main tweets and 901 replies for our manual content analysis. In total, we analyzed 1,000 original posts and replies to identify the social roles played by the experts (main tweets) and the public (replies) on the platform.
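A minimal sketch of this sampling step, assuming the expert posts and public replies are stored in data frames linked by a hypothetical post_id column (the actual identifiers in our dataset may differ):

```python
import pandas as pd

# Hypothetical inputs: expert posts and public replies keyed by a shared post id.
posts = pd.read_csv("expert_posts.csv")      # assumed columns: post_id, text
replies = pd.read_csv("public_replies.csv")  # assumed columns: reply_id, post_id, text

# Keep only threads with at least 3 public replies, then randomly sample main tweets.
reply_counts = replies.groupby("post_id").size()
eligible_ids = reply_counts[reply_counts >= 3].index

sampled_posts = posts[posts["post_id"].isin(eligible_ids)].sample(n=99, random_state=42)
sampled_replies = replies[replies["post_id"].isin(sampled_posts["post_id"])]

print(len(sampled_posts), "main tweets and", len(sampled_replies), "replies for coding")
```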

We adapted our codebook from Hara and Sanfilippo’s [2016] study on various roles participants play in the online knowledge-sharing process. Additionally, we incorporated two more roles into our codebook (i.e., ‘corrector’ and ‘reactor’) based on the data (Table 1).

Table 1: Codebook for social roles with descriptions.

Two additional codes were employed to categorize texts that were not in English (‘non-English’) and texts that did not convey any identifiable meaning (‘cannot code’).

Inter-coder reliability (ICR) was tested to ensure general agreement on the data coding categories. ICR was assessed by two coders on 20% (200 posts and replies) of the total data. Percentage agreement ranged from 95.5% to 100%; Scott’s pi, Cohen’s kappa, and Krippendorff’s alpha ranged from 0.78 for ‘knowledge shaper/compiler’ (hereafter KSC) to 1.00 for the ‘facilitator’ role. Thus, the ICR scores were sufficient to proceed with the final data coding.
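The per-role reliability checks can be reproduced along the following lines. This is a sketch assuming the 20% reliability subsample is stored with one column of labels per coder (hypothetical file and column names); it reports percentage agreement and Cohen’s kappa only, while Scott’s pi and Krippendorff’s alpha can be computed analogously with dedicated packages.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical reliability subsample: one row per post/reply with each coder's label.
icr = pd.read_csv("icr_sample.csv")  # assumed columns: item_id, coder1_role, coder2_role

# Treat each role as a binary presence/absence decision and report agreement per role.
roles = sorted(set(icr["coder1_role"]) | set(icr["coder2_role"]))
for role in roles:
    a = (icr["coder1_role"] == role).astype(int)
    b = (icr["coder2_role"] == role).astype(int)
    pct_agreement = (a == b).mean() * 100
    kappa = cohen_kappa_score(a, b)
    print(f"{role}: {pct_agreement:.1f}% agreement, Cohen's kappa = {kappa:.2f}")
```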

3.4 Regression analysis of social roles and public engagement

To explore the influences of social roles on public engagement, we employed negative binomial regression (hereafter NBR), consistent with methodologies used in previous research that treated social media engagement as the dependent variable [e.g., Jung et al., 2022]. This analytical choice was influenced by the nature of our dependent variable, public engagement, which was measured as counts of likes, replies, and reposts. These count metrics typically demonstrate strong positive skewness, mainly due to some posts achieving significantly elevated engagement levels [Moran et al., 2020]. Owing to the shortcomings of Poisson regression in managing overdispersed count data, we selected NBR as a more suitable alternative for modeling such data, as endorsed by existing literature [Hilbe, 2014].

For our analysis, we conducted separate NBRs for two groups: posts from experts and replies from the public. This bifurcated approach allowed us to isolate and compare the effects of social roles on public engagement within each group independently.
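A minimal sketch of these models, using statsmodels’ negative binomial GLM with the coded social role as a categorical predictor and like counts as the outcome; the column names are hypothetical, and the exact model specification, dispersion estimation, and covariates in our analysis may differ.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical coded data: one row per post/reply with its group, role, and engagement counts.
coded = pd.read_csv("coded_engagement.csv")  # assumed columns: group, role, likes, replies, reposts

# Fit separate negative binomial regressions for expert posts and public replies.
for group in ["expert_post", "public_reply"]:
    subset = coded[coded["group"] == group]
    model = smf.glm(
        "likes ~ C(role)",  # repeat with 'replies' and 'reposts' as outcomes
        data=subset,
        family=sm.families.NegativeBinomial(),
    ).fit()
    print(f"--- {group} ---")
    print(model.summary())
```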

4 Results

4.1 RQ 1: Who are the experts, in terms of occupation, communicating about GenAI with the public on social media?

Table 2 presents the output of the topic modeling applied to the profile texts, identifying eight distinct topics from the data. The results show the labels and representative words, along with the proportion of each group within our data corpus. Since the data consist of experts, most groups were characterized by their respective specialties (e.g., healthcare, security, and AI). Conversely, some groups of experts in our data put their hobbies and writing activities at the forefront. It is worth noting that experts from seemingly less technical fields, such as law and business, are the largest groups in our findings.

Table 2: Classification of expert posts authors by topic modeling.

4.2 RQ 2: What aspects of GenAI do experts and the public discuss on social media?

We also employed topic modeling to explore the topics discussed by experts and the public about GenAI on social media. Table 3 shows the output of the topic model regarding the experts’ posts about GenAI. We found that the discussions primarily focused on the various types of AI and their applications in different sectors (e.g., education, arts, language, and image).

Table 3: Topic model classification of generative AI tweets posted by experts.

To identify the topics of interest among the public, we conducted topic modeling of replies to the experts’ original posts (Table 4). Our findings indicated similar discussion patterns between experts and the public, with only subtle differences. Whereas the experts’ posts addressed specific aspects of GenAI (i.e., code, search, language, and art) (Table 3), replies from the public focused more on model performance and general usage (i.e., question, usage of generative AI, and system performance) (Table 4).

Table 4: Topic model classification of replies from the public to experts.

4.3 RQ 3: What types of roles do experts and the public take on when discussing GenAI on social media?

In our analysis, we identified a variety of social roles adopted by authors of (a) posts from experts and (b) replies from the public. Notably, a substantial proportion of experts (26.26%) functioned as ‘KSC’. Larger percentages of the public served as ‘Reactors’ (17.31%) and ‘Movers’ (15.09%), which suggests that the two groups play different roles in science communication (Table 5).

Table 5: Frequency and percentages for social roles.

4.4 RQ 4: How do different social roles influence engagement levels with expert posts and public replies regarding GenAI?

To further understand the potential influences of social roles on engagement in the context of science communication about GenAI, we examined the dynamics between these roles and subsequent engagement within two groups: (a) posts from experts and (b) replies from the public. Specifically, we employed NBR analyses to investigate how social roles influence engagement metrics such as likes, replies, and reposts. This approach allowed us to identify patterns and differences in engagement related to social roles across these two groups. Two social roles — ‘Distractor’ and ‘Mover’ — were excluded from the regression analysis because no posts from experts were classified under these roles.

1. Likes The results of NBR analysis indicated that certain roles were significantly associated with the likelihood of a post or a reply receiving likes. Notably, ‘Helper’ (b = 2.92, p <.001) and ‘Judge’ (b = 2.16, p <.001) showed a strong positive relationship with like counts, whereas ‘Corrector’ was negatively associated (b = -1.74, p = .030). ‘Facilitator’, although showing a negative coefficient (b = -5.32), did not reach statistical significance (p = .075). In the cases of replies from the public, ‘KSC’ had the strongest positive association with like counts (b = 3.10, p <.001), followed by ‘Corrector’ (b = 0.87, p <.001), ‘Connector’ (b = 0.60, p <.001), and ‘Supporter’ (b = 0.67, p <.001). In contrast, ‘Seeker’ was negatively associated with like counts (b = -1.26, p <.001). ‘Judge’ exhibited a positive coefficient (b = 0.30), but this was not statistically significant (p = .109).

Interestingly, the results suggest that the public is more likely to ‘like’ posts from experts when the experts fulfill the roles of ‘Judge’ or ‘Helper’. This is especially notable because expert behaviors typically known to generate social media engagement (e.g., supporter, reactor) [Jiang et al., 2022; Wang & Yang, 2020], as well as public replies playing the same roles (i.e., judge, helper), did not receive significantly more engagement; the pattern may therefore reflect a desire for credible information and guidance on emerging technologies, as found in other similar contexts [e.g., Dedema & Hara, 2023; Rogers-Hayden & Pidgeon, 2007]. Expert posts that correct misinformation likely garner fewer likes because these corrections often address complex, technical issues not raised by the public; thus, they may not resonate with readers who lack sufficient background knowledge.

In contrast, replies from the public that facilitate knowledge building, specifically those labeled as ‘KSC’, ‘Corrector’, and ‘Supporter’, received more engagement. This pattern indicates that while the public views experts as information sources, they engage with peer replies as part of a collective learning process. Nevertheless, public replies from ‘Seekers’ attract fewer likes, probably due to their specificity, which may lack broad relevance and appeal only to individuals with particular interests or expertise.

2. Replies NBR analysis was conducted to explore the influence of various social roles on the reply counts of posts from experts or replies from the public. For the experts’ posts, the model revealed that the role of ‘Helper’ was significantly associated with an increase in replies (b = 0.86, p = .002). However, no other roles were significantly associated with reply count, including ‘Judge’ (b = 0.46, p = .135) and ‘Corrector’ (b = -0.63, p = .404). The ‘Facilitator’ role displayed a negative coefficient (b = -0.13), but this was not statistically significant (p = .897). Among replies from the public, several roles demonstrated a significant positive association with reply count. ‘KSC’ showed the strongest positive effect (b = 2.38, p <.001), followed by ‘Corrector’ (b = 0.85, p <.001), ‘Connector’ (b = 0.50, p = .018), and ‘Facilitator’ (b = 0.47, p = .029). In contrast, ‘Seeker’ (b = 0.56, p = .112), ‘Reactor’ (b = -0.13, p = .512), and ‘Judge’ (b = -0.35, p = .309) were not significantly associated with reply count. Notably, ‘Supporter’ exhibited a negative coefficient (b = -0.06), but this was not statistically significant (p = .783).

The findings suggest that the public participates in the experts’ knowledge-sharing process by posing follow-up questions. When their initial inquiries are addressed by expert ‘Helpers’, the questioners and others often seek further clarification or raise additional directions for the topic discussed, recognizing that these experts are willing to assist them. An increased number of replies may, in turn, signal the credibility of the expert authors and prompt further replies, resulting in more engagement within the thread.

The cases of public replies indicate a noteworthy propensity among the public to openly share their own opinions, thoughts, and reactions through replies to their peers. Consequently, replies from the public may aim to modify knowledge (i.e., ‘KSC’, ‘Corrector’, and ‘Connector’) or facilitate collaborative learning (i.e., ‘Facilitator’), each bringing distinctive viewpoints to the discussion.

3. Reposts NBR analysis was conducted to explore the influence of various social roles on repost (formerly retweet) counts for posts/replies made by experts and the public. For the experts’ posts, roles such as ‘Helper’ (b = 2.92, p <.001) and ‘Judge’ (b = 1.58, p <.001) were significantly and positively associated with an increased likelihood of being reposted. In contrast, ‘Corrector’, although it showed a negative relationship with repost counts (b = -1.84), did not reach statistical significance (p = .095). Similarly, ‘Facilitator’ demonstrated a negative coefficient (b = -3.63), which was not statistically significant (p = .225). In the analysis of replies from the public, ‘KSC’ emerged with the strongest positive association with repost counts (b = 3.52, p <.001). Other roles such as ‘Corrector’ (b = 0.92, p = .001), ‘Connector’ (b = 1.15, p <.001), and ‘Supporter’ (b = 0.71, p = .005) were also positively correlated with repost counts. Although ‘Seeker’ was found to be negatively associated with repost counts (b = -0.63), this was not significant (p = .388). ‘Judge’ showed a positive coefficient (b = 0.61), but also did not achieve statistical significance (p = .068).

These findings align closely with the other engagement metrics. In communications involving expert authors, posts from a ‘Helper’ or ‘Judge’ were often reposted by the public, possibly because these posts were perceived as informative and originating from credible sources, or because they invoked public curiosity.

Regarding replies from the public, roles like ‘Corrector’, ‘Connector’, and ‘Supporter’ were often reposted by fellow members of the public. This pattern of reposting suggests that individuals were spreading information they perceived to be accurate from their peer ‘Correctors’, engaging in the collaborative refinement of knowledge through interactions with ‘Connectors’, and amplifying supportive perspectives from ‘Supporters’. This type of behavior may illustrate that, within the context of GenAI discussions, the public was not only actively exchanging information derived from non-expert individuals with firsthand experience of GenAI tools, but also facilitating the evolution of knowledge and offering support to peers by sharing their posts about GenAI with others on social media.

5 Discussion

Based on the framework of PES and focusing on the co-production of knowledge, the current study explored the dynamics between experts and the public in discussions about GenAI on social media platforms, especially on X — a platform known to be well suited to publicly sharing information and engaging with the non-expert public [Lee et al., 2020]. Our findings offer insights into (a) which types of experts discuss GenAI, (b) which issues were discussed, (c) the roles that experts and the public played in the GenAI discussion, and (d) how these roles contributed to public engagement concerning GenAI.

5.1 RQ 1: Experts talking about GenAI

The current study revealed that experts in our dataset outside the fields of science and technology were actively participating in the co-production of knowledge about GenAI. Based on the classification of experts according to the biographies on their X accounts, 17.9% were identified as ‘legal experts’, followed by 15.6% in the ‘business’ sector and 15% as ‘AI scientists’. These varied categories indicate that GenAI influences various fields and is not limited to scientific and technical areas (e.g., computer science), but rather extends to law, business, healthcare, and creative writing, which numerous news media have recently identified as fields relevant to AI [e.g., Neal, 2024].

Considering the extensive and profound impacts of GenAI across various sectors and on society overall, the discussion has attracted not only scientists, but also non-scientist experts on X. As supported by the knowledge co-production perspective [Callon, 1999], these experts engage in the conversation to educate the public, gain insights into public perspectives, and inspire and stimulate interest in GenAI from diverse viewpoints. They also legitimize or critique the application of GenAI in specific fields and everyday life through expressing support or concerns about its innovations.

We argue that such diverse voices beyond directly related fields like computer science would enhance ‘responsible innovation’ in GenAI [Stilgoe et al., 2014]. In other words, discussions on social media welcome inputs from various stakeholders in multiple fields, who may benefit or face risks from emerging technology. Such engagement between a wider range of experts and the public can contribute to promoting a better direction for advancing the technology [Sykes & Macnaghten, 2013]. This discourse is achievable with the shared values established through dialogues that transcend specific fields and backgrounds, rather than allowing individuals with existing influence and specialized knowledge to monopolize the conversation and ultimately guide innovation in GenAI [Sykes & Macnaghten, 2013].

5.2 RQ 2: Topics of discussion

Discussions about GenAI encompass a wide range of topics, indicating the technology’s perceived versatility concerning its potential and its remarkable influence on multiple fields. According to the findings, experts most frequently discussed the ‘educational use of AI’ (22.4%), followed by the ‘usage of generative AI’ (18.4%), and ‘AI coding’ (17.6%). Regarding replies from the public, ‘AI models’ were the leading topic (32.8%), followed by the ‘educational use of AI’ (25.5%) and ‘questions’ (17%).

By examining the discussion topics of posts from experts and replies from the public, we identified gaps and commonalities. For instance, while experts broadened the scope of discussions about AI to include ‘AI arts’ and ‘AI image generation’, replies from the public were more focused on ‘text’ and ‘language’-based models, including in their questions (e.g., ‘question’: 17%). This trend may reflect the current state of GenAI applications in the everyday lives of the public, who are increasingly adopting text-based generative AI models like ChatGPT, which are accessible through natural language [Wood, 2024]. However, as we observe advancement in GenAI across various modalities — including images, audio, and video (e.g., DALL-E for images and Sora for videos) — experts are ‘educating’ and inviting discussions about more sophisticated models that could bring both benefits and potential harms to society [e.g., Yazdani et al., 2024].

The relationship between education and GenAI has emerged as a particularly vibrant discussion topic, both in posts from experts and replies from the public. Numerous studies have highlighted both the promises and concerns of GenAI in educational environments [e.g., Baidoo-anu & Owusu Ansah, 2023]. GenAI adoption in education is increasingly influenced by social and cultural discussions about its implications. This is where the public’s replies provided additional context for the experts’ comments.

Despite finding that experts and the public often discuss the same topics (i.e., educational use and AI usage), our study also identified a possible knowledge/opinion gap between expert and public discussion as indicated by Su et al. [2017]. While experts’ discussions mentioned an array of specialized features related to GenAI (e.g., arts, codes, language, and image), public discussions were mostly driven by general curiosity (e.g., question) and everyday use (e.g., system performance).

5.3 RQ 3: Roles of experts and the public in discussion

Following previous studies [e.g., Hara & Sanfilippo, 2016], we explored the roles of individuals engaged in dialogues between scientists and the public on X. We identified traditional boundary work [Gieryn, 1983] in the roles of ‘experts’ as ‘educators’ (e.g., ‘KSC’, 26.26%; ‘Judge’, 16%) and ‘the public’ as ‘learners or responders’ to new scientific information (e.g., ‘Supporter’, 10.32%; ‘Reactor’, 17.31%). This tendency resembles traditional forms of science communication, in which scientists are the ones delivering knowledge.

However, in the current context, the experts also took on roles seeking relevant information (‘Seeker’, 14.14%) in various areas by asking questions like “What is the best explanation of [what is known of] Chat[G]PT’s training process and data?” and “Will you use #ChatGPT to create first draft of manuscripts?”, while also contributing as reactors (‘Reactor’, 10.10%). In contrast, the public contributed to knowledge collaboration by ‘moving’ current discussions to adjacent topics (‘Mover’, 15.09%) and ‘correcting’ inaccurate information about GenAI (‘Corrector’, 7.33%). This dynamic illustrates how the public actively contributed their own experiences and knowledge while also introducing new issues they seemed to believe warranted collective discussion. For example, in response to an expert’s comment on “Constructive policy on ChatGPT in the classroom”, one public ‘mover’ replied “Let’s make our country policies on ChatGPT”. Another layperson, acting as a corrector, responded to an expert’s thoughts on incorporating ChatGPT as a co-author: “Not being a moral agent, an AI system cannot be a coauthor because it cannot take responsibility for the content of the paper. Do you acknowledge spellcheck and grammar check as co-authors already?”.

These trends suggest that GenAI is becoming accessible to the public and enabling hands-on experiences with these tools. Meanwhile, experts — who are embracing this new technology and contemplating its societal impacts — are also seeking opinions and information from the public and responding to their feedback. In this case, the public is also contributing their opinions and knowledge to the discussion. From the PES perspective, this dynamic demonstrates how ‘science and technology’ are contextualized by both experts and the public [Hayes et al., 2020], and ‘responsible’ experts are initiating discussions with hands-on stakeholders to facilitate more informed decisions about issues related to GenAI [Delgado et al., 2011]. Given these points, dialogues between experts and the public concerning GenAI serve as remarkable examples of how experts and the public share knowledge and learn from each other about the use and performance of GenAI in real-world contexts. As such, the traditional model of boundary work is not necessarily applicable in this context.

5.4 RQ 4: Engagement and roles

The analysis of social media engagement reveals that posts from experts acting as ‘Helpers’ and ‘Judges’ tended to receive more engagement of at least two types (i.e., ‘likes’ and ‘reposts’). In contrast, among replies from the public, those that contributed external information sources (‘KSC’) or corrected misleading information (‘Corrector’) tended to receive more engagement of at least two types (i.e., ‘likes’ and ‘replies’).

These trends present compelling evidence that the public no longer just includes ‘learners’ or ‘responders’ to scientific information; they are rather becoming ‘producers and consumers’ of scientific knowledge themselves. Within the co-production of knowledge framework, this suggests that the public brings their own value-relevant concerns about technology into discussions with scientists or experts, thereby contributing to co-creating new knowledge with experts.

In the current context, the public actively collaborates with experts from diverse backgrounds, utilizing their own information sources to support experts in addressing and discussing various aspects of new technologies. Essentially, the public has become a partner to experts in the ‘co-production’ of scientific dialogue concerning GenAI [Callon, 1999; Joly & Kaufmann, 2008]. Given this context, this study contributes to an understanding of co-production of scientific knowledge on social media in which dialogues between scientists and the public are facilitated.

6 Limitations

Despite the insights offered by the current study regarding the dynamics of interactions on social media between experts and the public concerning GenAI through the framework of PES, several limitations warrant consideration.

First, the research period overlaps with the reshaping of X following its purchase by Elon Musk, CEO of Tesla and SpaceX, who announced his bid to buy Twitter on April 14, 2022, and concluded the acquisition on October 28, 2022. In 2023, X removed the legacy blue check marks previously used to verify the authenticity of accounts, replacing them with a paid verification system initially called Twitter Blue [Silberling et al., 2024]. Additionally, groups of experts have engaged in an ‘exodus’ from Twitter, expressing concerns about the potential negative consequences of the acquisition [Kupferschmidt, 2022]. Therefore, further study might be required to explore how the new verification system influences individuals’ perceptions of ‘experts’ and how the exodus affects expert-public networks [Bastian, 2023; Stokel-Walker, 2023].

However, the current study identified ‘experts’ based on bios with signals associated with their expertise (e.g., educational and professional credentials), which helps their audiences perceive them as experts [Harris et al., 2024], rather than relying on blue checkmarks or Twitter Blue. We also acknowledge that X still plays a role as a self-curated international knowledge publication, accessible online at no cost, providing real-time, continuously updated information [Casiraghi et al., 2024; López-Goñi & Sánchez-Angulo, 2018]. A previous study also found that X remains the preferred social media platform for high-profile researchers, and its alternatives have not yet challenged X’s dominance [Siebert et al., 2023].

Given these points, we believe the current study effectively captures the essential features of science communication on the platform, but we also acknowledge that this study’s use of X may limit its generalizability across the diverse range of social media platforms, especially ones emerging as alternatives to X for science communication. Future research should explore how these alternative platforms, such as Mastodon [Vidal Valero, 2023], may serve the needs of communicating science in the context of generative AI and how they compare to X.

Second, while our study provides valuable insights into the relationship between social roles and engagement metrics, it is important to acknowledge potential confounding variables. Specifically, factors such as verification status, follower count, and multimedia use may impact engagement metrics like likes, shares, and comments. Verification status, often associated with credibility [Edgerly & Vraga, 2019], and follower count, which can enhance visibility and network effects, could skew the engagement levels observed [Hu et al., 2015]. Additionally, multimedia use, known to generally increase engagement [Thongmak, 2024], might interact with specific social roles in complex ways. Therefore, future research should incorporate normalization techniques and control for these factors to provide a more accurate measure of the relationship between social roles and engagement metrics. This approach will help validate our results and address potential biases introduced by these variables, ensuring a more robust understanding of social media communication dynamics.

Third, the study’s focus on direct interactions between experts and the public does not capture the breadth of interactions among non-expert individuals, which can occur independently of expert mediation on social media. Considering that GenAI represents a highly experiential technology compared to prior innovations like nanotechnology or nuclear energy, understanding how lay publics engage in knowledge production and dissemination without expert intervention is crucial. Future studies should investigate these aspects to provide a more comprehensive understanding of knowledge co-production within the PES approach in the context of GenAI.

Fourth, the expert data used in this study are based on self-reported information from X accounts. Although a comprehensive set of discussions on X was obtained, the complete accuracy of the profiles cannot be guaranteed. Future research combining online data with offline sources, such as questionnaires, is recommended.

Lastly, the methodology employed in this study — combining topic modeling with manual analysis — offers only a ‘snapshot’ of the evolving discourse between experts and the public regarding GenAI. Given the burgeoning interest in this technology and its potential to attract substantial public attention [Pratt, 2024], future research must aim to capture the dynamic shifts in social media discourse concerning participants, discussed topics, and the social roles within these discussions over an extended period. This includes people’s perceptions towards GenAI as an advanced technology and its mass use. With this approach, a theoretical framework such as sociotechnical imaginaries [e.g., Richter et al., 2023] may be fruitful.

A Statistical analysis of knowledge collaboration roles

Table 6: Chi square homogeneity test.

Table 7: Chi square test using proportion.

References

Baidoo-anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62. https://doi.org/10.61969/jai.1337500

Bastian, H. (2023, August 20). How is science Twitter’s “Mastodon migration” panning out? PLOS Blog: Absolutely Maybe. https://absolutelymaybe.plos.org/2023/08/20/how-is-science-twitters-mastodon-migration-panning-out/

Bucchi, M. (2016). Editorial. Public Understanding of Science, 25(3), 264–268. https://doi.org/10.1177/0963662516634497

Callon, M. (1999). The role of lay people in the production and dissemination of scientific knowledge. Science, Technology and Society, 4(1), 81–94. https://doi.org/10.1177/097172189900400106

Callon, M., & Rabeharisoa, V. (2003). Research “in the wild” and the shaping of new social identities. Technology in Society, 25(2), 193–204. https://doi.org/10.1016/s0160-791x(03)00021-6

Casiraghi, L., Kim, E., & Hara, N. (2024). Tweeting on thin ice: scientists in dialogic climate change communication with the public. First Monday, 29. https://doi.org/10.5210/fm.v29i6.13543

Chiang, C.-Y., & Tang, X. (2022). Use public wi-fi? Fear arouse and avoidance behavior. Journal of Computer Information Systems, 62(1), 73–81. https://doi.org/10.1080/08874417.2019.1707133

Chilvers, J. (2013). Reflexive engagement? Actors, learning, and reflexivity in public dialogue on science and technology. Science Communication, 35(3), 283–310. https://doi.org/10.1177/1075547012454598

Chiu, T. K. F. (2024). The impact of Generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney. Interactive Learning Environments, 32(10), 6187–6203. https://doi.org/10.1080/10494820.2023.2253861

Chow, A. R. (2023, February 8). How ChatGPT managed to grow faster than TikTok or Instagram. Time. https://time.com/6253615/chatgpt-fastest-growing/

Cobb, M. D., & Macoubrie, J. (2004). Public perceptions about nanotechnology: risks, benefits and trust. Journal of Nanoparticle Research, 6(4), 395–405. https://doi.org/10.1007/s11051-004-3394-4

Cortassa, C. (2016). In science communication, why does the idea of a public deficit always return? The eternal recurrence of the public deficit. Public Understanding of Science, 25(4), 447–459. https://doi.org/10.1177/0963662516629745

Dedema, M., & Hara, N. (2023). Public engagement with science during and about COVID-19 via Twitter: who, when, what, and how. In S. Yang, X. Zhu & P. Fichman (Eds.), The usage and impact of ICTs during the Covid-19 pandemic. Routledge. https://doi.org/10.4324/9781003231769

Delgado, A., Lein Kjølberg, K., & Wickson, F. (2011). Public engagement coming of age: from theory to practice in STS encounters with nanotechnology. Public Understanding of Science, 20(6), 826–845. https://doi.org/10.1177/0963662510363054

Duarte, F. (2025, January 6). Number of ChatGPT users (Jan 2025). Exploding Topics. https://explodingtopics.com/blog/chatgpt-users

Edgerly, S., & Vraga, E. K. (2019). The blue check of credibility: does account verification matter when evaluating news on Twitter? Cyberpsychology, Behavior, and Social Networking, 22(4), 283–287. https://doi.org/10.1089/cyber.2018.0475

Frey, C. B., & Osborne, M. (2023). Generative AI and the future of work: a reappraisal. The Brown Journal of World Affairs, 30, 1–17. https://bjwa.brown.edu/30-1/generative-ai-and-the-future-of-work-a-reappraisal/

Fuglerud, K. S., Halbach, T., & Snaprud, M. (2021). Involving diverse users for inclusive technology development. In K. Blashki (Ed.), Proceedings of the International Conferences Interfaces and Human Computer Interaction 2021 and Game and Entertainment Technologies 2021. IADIS. https://www.iadisportal.org/digital-library/involving-diverse-users-for-inclusive-technology-development

Gieryn, T. F. (1983). Boundary-work and the demarcation of science from non-science: strains and interests in professional ideologies of scientists. American Sociological Review, 48(6), 781–795. https://doi.org/10.2307/2095325

Gmyrek, P., Berg, J., & Bescond, D. (2023). Generative AI and jobs: a global analysis of potential effects on job quantity and quality [ILO Working Paper, 96]. https://doi.org/10.54394/FHEM8239

Hara, N., Abbazio, J., & Perkins, K. (2019). An emerging form of public engagement with science: Ask Me Anything (AMA) sessions on Reddit r/science. PLoS ONE, 14(5), e0216789. https://doi.org/10.1371/journal.pone.0216789

Hara, N., & Chae, S. W. (2025). Cross-platform analysis of mediated science communication during the COVID-19 pandemic. In N. Hara & P. Fichman (Eds.), Social informatics. Routledge.

Hara, N., & Sanfilippo, M. R. (2016). Co-constructing controversy: content analysis of collaborative knowledge negotiation in online communities. Information, Communication & Society, 19(11), 1587–1604. https://doi.org/10.1080/1369118x.2016.1142595

Harris, M. J., Murtfeldt, R., Wang, S., Mordecai, E. A., & West, J. D. (2024). Perceived experts are prevalent and influential within an antivaccine community on Twitter. PNAS Nexus, 3(2), pgae007. https://doi.org/10.1093/pnasnexus/pgae007

Hayes, C., Stott, K., Lamb, K. J., & Hurst, G. A. (2020). “Making every second count”: utilizing TikTok and systems thinking to facilitate scientific public engagement and contextualization of chemistry at home. Journal of Chemical Education, 97(10), 3858–3866. https://doi.org/10.1021/acs.jchemed.0c00511

Hilbe, J. M. (2014). Modeling count data. Cambridge University Press. https://doi.org/10.1017/CBO9781139236065

Hu, Y., Farnham, S., & Talamadupula, K. (2015). Predicting user engagement on Twitter with real-world events. Proceedings of the International AAAI Conference on Web and Social Media, 9(1), 168–177. https://doi.org/10.1609/icwsm.v9i1.14638

Invernizzi, N. (2020). Public participation and democratization: effects on the production and consumption of science and technology. Tapuya: Latin American Science, Technology and Society, 3(1), 227–253. https://doi.org/10.1080/25729861.2020.1835225

Jiang, H., Cheng, Y., Yang, J., & Gao, S. (2022). AI-powered chatbot communication with customers: dialogic interactions, satisfaction, engagement, and customer behavior. Computers in Human Behavior, 134, 107329. https://doi.org/10.1016/j.chb.2022.107329

Joly, P.-B., & Kaufmann, A. (2008). Lost in translation? The need for ‘upstream engagement’ with nanotechnology on trial. Science as Culture, 17(3), 225–247. https://doi.org/10.1080/09505430802280727

Jung, A.-K., Stieglitz, S., Kissmer, T., Mirbabaie, M., & Kroll, T. (2022). Click me…! The influence of clickbait on user engagement in social media and the role of digital nudging. PLoS ONE, 17(6), e0266743. https://doi.org/10.1371/journal.pone.0266743

Kappel, K., & Holmen, S. J. (2019). Why science communication, and does it work? A taxonomy of science communication aims and a survey of the empirical evidence. Frontiers in Communication, 4, 55. https://doi.org/10.3389/fcomm.2019.00055

König, R. (2013). WIKIPEDIA: between lay participation and elite knowledge representation. Information, Communication & Society, 16(2), 160–177. https://doi.org/10.1080/1369118x.2012.734319

Kupferschmidt, K. (2022). As Musk reshapes Twitter, academics ponder taking flight. Science, 378, 583–584. https://doi.org/10.1126/science.adf6617

Latour, B. (1988). Science in action: how to follow scientists and engineers through society. Harvard University Press.

Lee, N. M., Abitbol, A., & VanDyke, M. S. (2020). Science communication meets consumer relations: an analysis of Twitter use by 23andMe. Science Communication, 42(2), 244–264. https://doi.org/10.1177/1075547020914906

Lemke, A. A., & Harris-Wai, J. N. (2015). Stakeholder engagement in policy development: challenges and opportunities for human genomics. Genetics in Medicine, 17(12), 949–957. https://doi.org/10.1038/gim.2015.8

López-Goñi, I., & Sánchez-Angulo, M. (2018). Social networks as a tool for science communication and public engagement: focus on Twitter. FEMS Microbiology Letters, 365(2), fnx246. https://doi.org/10.1093/femsle/fnx246

Luckett, J. (2023). Regulating generative AI: a pathway to ethical and responsible implementation. International Journal on Cybernetics & Informatics, 12(5), 79–92. https://doi.org/10.5121/ijci.2023.120508

Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: how may AI and GPT impact academia and libraries? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4333415

Majchrzak, A., Cooper, L. P., & Neece, O. E. (2004). Knowledge reuse for innovation. Management Science, 50, 174–188. https://www.jstor.org/stable/30046057

Markus, M. L. (2001). Toward a theory of knowledge reuse: types of knowledge reuse situations and factors in reuse success. Journal of Management Information Systems, 18, 57–93.

Miyazaki, K., Murayama, T., Uchiba, T., An, J., & Kwak, H. (2024). Public perception of generative AI on Twitter: an empirical study based on occupation and usage. EPJ Data Science, 13, 2. https://doi.org/10.1140/epjds/s13688-023-00445-y

Mo Jang, S. (2014). Seeking congruency or incongruency online? Examining selective exposure to four controversial science issues. Science Communication, 36(2), 143–167. https://doi.org/10.1177/1075547013502733

Moran, G., Muzellec, L., & Johnson, D. (2020). Message content features and social media engagement: evidence from the media industry. Journal of Product & Brand Management, 29(5), 533–545. https://doi.org/10.1108/jpbm-09-2018-2014

Neal, J. (2024, February 14). The legal profession in 2024: AI. Harvard Law Today. https://hls.harvard.edu/today/harvard-law-expert-explains-how-ai-may-transform-the-legal-profession-in-2024/

Pratt, M. K. (2024, June 17). The 10 biggest issues IT faces today. CIO. https://www.cio.com/article/228199/the-12-biggest-issues-it-faces-today.html

Richter, V., Katzenbach, C., & Schäfer, M. S. (2023). Imaginaries of artificial intelligence. In S. Lindgren (Ed.), Handbook of critical studies of artificial intelligence (pp. 209–223). Edward Elgar Publishing. https://doi.org/10.4337/9781803928562.00024

Rogers-Hayden, T., & Pidgeon, N. (2007). Moving engagement “upstream”? Nanotechnologies and the Royal Society and Royal Academy of Engineering’s inquiry. Public Understanding of Science, 16(3), 345–364. https://doi.org/10.1177/0963662506076141

Schäfer, M. S. (2023). The Notorious GPT: science communication in the age of artificial intelligence. JCOM, 22(02), Y02. https://doi.org/10.22323/2.22020402

Siebert, M., Siena, L. M., & Ioannidis, J. P. A. (2023). Twitter and Mastodon presence of highly-cited scientists. bioRxiv. https://doi.org/10.1101/2023.04.23.537950

Silberling, A., Corrall, C., & Stringer, A. (2024, June 5). Elon Musk’s X: a complete timeline of what Twitter has become. TechCrunch. https://techcrunch.com/2024/06/05/elon-musk-twitter-everything-you-need-to-know/

Simons, A., Kircheis, W., Schmidt, M., Potthast, M., & Stein, B. (2024). Who are the “Heroes of CRISPR”? Public science communication on Wikipedia and the challenge of micro-notability. Public Understanding of Science, 33(7), 918–934. https://doi.org/10.1177/09636625241229923

Star, S. L. (Ed.). (1995). Ecologies of knowledge: work and politics in science and technology. SUNY Press.

Stilgoe, J., Lock, S. J., & Wilsdon, J. (2014). Why should we promote public engagement with science? Public Understanding of Science, 23(1), 4–15. https://doi.org/10.1177/0963662513518154

Stokel-Walker, C. (2023). Twitter changed science — what happens now it’s in turmoil? Nature, 613(7942), 19–21. https://doi.org/10.1038/d41586-022-04506-6

Su, L. Y.-F., Scheufele, D. A., Bell, L., Brossard, D., & Xenos, M. A. (2017). Information-sharing and community-building: exploring the use of Twitter in science public relations. Science Communication, 39(5), 569–597. https://doi.org/10.1177/1075547017734226

Sykes, K., & Macnaghten, P. (2013). Responsible innovation — opening up dialogue and debate. In R. Owen, J. Bessant & M. Heintz (Eds.), Responsible innovation: managing the responsible emergence of science and innovation in society (pp. 85–107). Wiley. https://doi.org/10.1002/9781118551424.ch5

Tang, Y., Abbazio, J. M., Hew, K. F., & Hara, N. (2021). Exploration of social cues in technology-mediated science communication: a multidiscipline analysis on ‘Ask Me Anything (AMA)’ sessions in Reddit r/science. JCOM, 20(07), A04. https://doi.org/10.22323/2.20070204

Thongmak, M. (2024). Twitter content strategies to maximize engagement: the case of Thai Banks. Computers in Human Behavior, 152, 108081. https://doi.org/10.1016/j.chb.2023.108081

Vidal Valero, M. (2023). Thousands of scientists are cutting back on Twitter, seeding angst and uncertainty. Nature, 620(7974), 482–484. https://doi.org/10.1038/d41586-023-02554-0

Wang, Y., & Yang, Y. (2020). Dialogic communication on social media: how organizations use Twitter to build dialogic relationships with their publics. Computers in Human Behavior, 104, 106183. https://doi.org/10.1016/j.chb.2019.106183

Weingart, P., Joubert, M., & Connoway, K. (2021). Public engagement with science — origins, motives and impact in academic literature and science policy. PLoS ONE, 16(7), e0254201. https://doi.org/10.1371/journal.pone.0254201

Wood, L. (2024, February 7). Global natural language processing (NLP) market report 2023–2028: generative AI acting as a catalyst for the transforming NLP market. Yahoo Finance. https://finance.yahoo.com/news/global-natural-language-processing-nlp-092300463.html

Wynne, B. (1992). Misunderstood misunderstanding: social identities and public uptake of science. Public Understanding of Science, 1(3), 281–304. https://doi.org/10.1088/0963-6625/1/3/004

Wynne, B. (1996). May the sheep safely graze? A reflexive view on the expert-lay knowledge divide. In S. Lash, B. Szerszynski & B. Wynne (Eds.), Risk, environment and modernity: towards a new ecology (pp. 44–83). SAGE Publications. https://doi.org/10.4135/9781446221983.n3

Yan, X., Guo, J., Lan, Y., & Cheng, X. (2013). A biterm topic model for short texts. WWW ’13: Proceedings of the 22nd International Conference on World Wide Web, 1445–1456. https://doi.org/10.1145/2488388.2488514

Yazdani, S., Saxena, N., Wang, Z., Wu, Y., & Zhang, W. (2024). A comprehensive survey of image and video generative AI: recent advances, variants, and applications. https://doi.org/10.13140/RG.2.2.30721.63842

Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712. https://doi.org/10.3389/fpsyg.2023.1181712

Zhao, W., Chen, J. J., Perkins, R., Liu, Z., Ge, W., Ding, Y., & Zou, W. (2015). A heuristic approach to determine an appropriate number of topics in topic modeling. BMC Bioinformatics, 16(Suppl 13), S8. https://doi.org/10.1186/1471-2105-16-s13-s8

About the authors

Noriko Hara (Ph.D. Indiana University) is a professor of information science and the department chair of the Information & Library Science Department in the Luddy School of Informatics, Computing, and Engineering at Indiana University, Bloomington. Her current research interests are technology-mediated public engagement with science and Social Informatics.

E-mail: nhara@iu.edu

Eugene Kim (M.A. Indiana University Bloomington) is a Ph.D. candidate in The Media School at Indiana University Bloomington. His research interests include risk/health/science communication, public relations, and social media.

E-mail: eugekim@iu.edu

Shohana Akter is a Ph.D. student in Information Science at Indiana University Bloomington. Her current research interest revolves around studying human online behavior and communication in social media.

E-mail: sakter@iu.edu

Kunihiro Miyazaki (Ph.D. The University of Tokyo) is a specially appointed assistant professor at the University of Tokyo. He conducts research in computational social science, particularly in (social) media analysis and opinion mining using machine learning techniques.

E-mail: kunihirom@acm.org