1 Introduction

The rapidly advancing influence of artificial intelligence (AI) and machine learning across various work and life domains has become increasingly evident. This evolution is also apparent within the higher education landscape. Beyond teaching and research, AI challenges the digital infrastructures and organizational processes of universities. The most significant development to date is the advent of novel generative AI tools, most notably the release of ChatGPT in November 2022. ChatGPT is an advanced AI text model, among many others like Bing, Bard, or Perplexity, capable of conducting human-like conversations and addressing a wide range of questions and requests [OpenAI et al., 2023; Touvron et al., 2023; Wolfram, 2023]. Trained on billions of digital documents, ChatGPT has acquired extensive knowledge across various domains, enabling it to process text, generate context-specific output such as text, code, images, or videos, handle complex inquiries, and even perform problem-solving tasks [Gozalo-Brizuela & Garrido-Merchán, 2023; Ray, 2023]. However, it has been observed that ChatGPT often struggles to distinguish facts from fiction [Rawte, Sheth & Das, 2023; Zhang et al., 2023], a critical issue in academic and scientific contexts, especially given its current limitations in handling scientific citations accurately [Athaluri et al., 2023; Bhattacharyya, Miller, Bhattacharyya & Miller, 2023; McGowan et al., 2023].

The capabilities of such chatbots — summarizing texts, transforming notes into fluent texts, emails, or letters, generating social media posts, and creating images, presentation slides, or videos from simple text inputs — hint at the potential for transforming university communication [De Silva, Mills, El-Ayoubi, Manic & Alahakoon, 2023; Lopezosa, Codina, Pont-Sorribes & Vállez, 2023; Ray, 2023]. These abilities open new pathways for efficient, personalized, and scalable communication and can thereby reshape communication strategies and objectives. The question of how to integrate generative AI tools thus becomes crucial for university communications, and it is the central theme of this study.

The integration of AI tools like ChatGPT and other machine communication systems raises important ethical questions that are also relevant to university communications [Kieslich, Keller & Starke, 2022; Lund et al., 2023; Yan et al., 2023]. These include data privacy concerns when such tools gain access to sensitive information [Arthur et al., 2023] and ethical considerations when AI is employed in decision-making processes [Dutta, 2018]. The use of such tools can impact roles and skills within workgroups, potentially creating a new digital divide, and there is the challenge of building trust in their usage [Ray, 2023]. Nevertheless, generative AI tools hold the potential to make university communications more effective and efficient by enabling quick, personalized responses and automating administrative tasks [Matz et al., 2023; Parycek, Schmid & Novak, 2023].

This study aims to empirically explore, for the first time, the use of generative AI tools in university communication through a survey conducted among the press offices and communication departments of all German universities. This ties into a general finding by Mike Schäfer [2023], who recently showed that the topic of artificial intelligence has remained virtually unexplored within the research field of science communication. This research aims to bridge the gap in understanding the adoption of generative AI tools in German university communications, focusing on influencing factors, professional needs, and expectations. It evaluates the impact of integrating AI on communication practices, strategies, and university organizational structures, assessing how these technologies meet professional expectations and shape internal debates and communication objectives.

The study is structured into several chapters addressing the various aspects of the investigation. The next sections provide an overview of key concepts and developments in the context of university communication, followed by the methodological approach and characteristics of the conducted online survey. Subsequently, the survey results are presented, covering experiences with the use of generative AI as well as expectations and assessments of the considered AI tools. The last two sections discuss and contextualize these findings, concluding with a perspective on the future development of university communication in the era of generative AI.

2 Background

2.1 Digital transformation in university communication

University communication, a specific variant of science communication, has some unique features and challenges. Unlike general science communication, which aims to disseminate scientific findings to a broad audience, university communication additionally targets the academic community, including students, faculty, and researchers, as well as external stakeholders such as political, economic, and societal sectors [Elken, Stensaker & Dedze, 2018; Fürst, Vogler, Sörensen & Schäfer, 2022; Peters, 2022]. Fähnrich, Metag, Post and Schäfer [2019, p. 8] define university communication as “all forms of communication in, from, and about universities, encompassing their production, content, usage, and impact, conducted by actors both within and outside the university organization”. This study focuses on the practical work of central and departmental communications teams at universities. Four general communication functions are performed by these departments: Public Relations, Marketing, Public Affairs and Science Communication, covering a wide range of specific communication activities [Entradas, Bauer, Marcinkowski & Pellegrini, 2024].

In the context of digitalization, university communication has undergone significant transformation. University communication now incorporates a variety of additional communication channels and target groups. It has assumed a key position in science communication and has become a crucial player in public perception and opinion formation about science [Fähnrich, Kuhnhenn & Raaz, 2019]. Institutionally, digitalization has led universities to increasingly rely on digital platforms for communication with their target audiences [Metag & Schäfer, 2017]. Neuberger et al. [2021] argue that science communication, in a broader sense, has taken on an expanded role due to digitalization, promoting both the scientification of society and the socialization of science. Digitalization allows greater transparency in the phases of knowledge production and verification. Social media has enabled universities to present their research findings and activities to a wider audience, thereby contributing to an intensified dialogue between science and society, at least in form [Bélanger, Bali & Longden, 2014; Gutiérrez & Del Pino, 2023]. Private universities, in particular, have shown a more active use of social media and more types of media for their communications [Lovari & Giglietto, 2012; Peruta & Shields, 2017]. This direct outreach to broad publics reveals an increasing disintermediation, specifically bypassing professional science journalism [Neuberger et al., 2021, p. 24]. However, the current state of research has yet to offer predictions on the consequences of (generative) artificial intelligence in university communication.

2.2 Disruptive potential of generative AI on academia

Three aspects of the role of generative AI in academic work can serve as a starting point for further considerations on university communication: quality and efficiency through AI support, relevant factors and challenges, as well as the organizational integration of AI.

In terms of quality and efficiency, generative AI, exemplified by Large Language Models (LLMs) such as GPT, has shown remarkable capabilities in processing and generating human language, including performing complex computer programming tasks [Dwivedi et al., 2023]. In academic settings, these advances have significant implications for the quality and efficiency of academic writing and learning. For instance, LLMs like ChatGPT have demonstrated proficiency in passing medical and other professional examinations, underlining their potential to support educational objectives [Gilson et al., 2022; Kasneci et al., 2023; Lieberman, 2023].

The deployment of generative AI in academia is not without its challenges and considerations. The impact of these technologies extends beyond their technical capabilities, encompassing ethical, legal, and societal dimensions. Concerns regarding data biases, safety issues, and the potential for exacerbating inequalities highlight the need for a cautious approach to AI integration [Azaria, Azoulay & Reches, 2023; Fecher, Hebing, Laufer, Pohle & Sofsky, 2023; Hosseini & Horbach, 2023; Lund et al., 2023; McGowan et al., 2023; Ray, 2023]. Furthermore, the expectation that LLMs will significantly alter job profiles and the labor market underscores the importance of understanding and navigating these factors to harness the benefits of AI while mitigating its risks [Eloundou, Manning, Mishkin & Rock, 2023].

Hence, the successful integration of generative AI into higher education and research institutions requires careful consideration of organizational aspects. These include addressing data protection and ethical concerns, which are paramount given AI’s potential to offer individualized learning pathways [Ninaus & Sailer, 2022; Zawacki-Richter, Marín, Bond & Gouverneur, 2019]. Ethical issues such as academic integrity, plagiarism, and fraud necessitate strategic approaches that emphasize critical thinking, fact-checking, and adjustments in teaching methodologies and examinations [Farrokhnia, Banihashem, Noroozi & Wals, 2024; Gleason, 2022; Kasneci et al., 2023; Lund et al., 2023; van Wyk, Adarkwah & Amponsah, 2023]. Moreover, the varying stances of scientific publishers toward AI-generated texts, from viewing them as plagiarism to restricting LLM authorship, indicate the complexity of integrating AI into academic publishing processes [Stokel-Walker, 2023; Thorp, 2023]. These organizational challenges highlight the need for institutions to develop comprehensive strategies that not only leverage the capabilities of AI but also safeguard academic standards and integrity.

All of this has profound implications for university communication: by improving the quality and efficiency of content creation, facilitating personalized communication strategies, and addressing organizational needs, generative AI could transform how universities engage with diverse stakeholders, including students, faculty, and the broader academic community, thereby fostering more dynamic, responsive, and inclusive communication ecosystems. However, there are many pitfalls to avoid, which will be discussed next.

2.3 AI and science communication

An article by Mike Schäfer [2023] in the Journal of Science Communication provides the most current assessment of generative AI in the context of science communication. He emphasizes the importance of generative AI and its potential impacts on science communication, underscoring the need for further research to evaluate the relationship between AI and science communication. As Schäfer’s bibliometric analyses reveal, the topic of AI is virtually non-existent in the literature on science communication. He identifies four particularly relevant avenues for future research in this area: (1) analyses of public communication about AI, (2) investigation of user interactions with ChatGPT and similar tools, i.e., communication with AI, (3) the impacts of generative AI on the fundamentals of science communication, and (4) conceptual work and theory development regarding human-machine communication. Schäfer asserts that the science communication community must quickly adapt to the upcoming questions related to AI, as it has the potential to transform many aspects of life relevant to science communication. An opinion article by Könneker [2024] states that LLM-based AI tools are transforming science communication by enhancing productivity, promoting greater educational equity, and creating new dissemination pathways, such as participatory practices; yet they also bring challenges such as misinformation and the potential for misuse, underscoring the indispensable role of independent media and quality journalism.

However, some cues regarding AI and science communication can be drawn from related fields of communication research, such as journalism. Yang et al. [2023] showed that increased trust in institutions correlates with heightened AI support, influenced by perceptions of risks and benefits. Pavlik [2023] highlights the efficiency gains in journalism through ChatGPT, though stressing the need for media education to address AI’s ethical implications. The credibility and trustworthiness of AI-written and human-written texts do not differ much for neutral texts, but they do for evaluative texts [Lermann Henestrosa, Greving & Kimmerle, 2023]. Jakesch, French, Ma, Hancock and Naaman [2019] identified the “Replicant Effect”, whereby the presence of AI-written texts in a communication arena can foster distrust. Moreover, Longoni, Fradkin, Cian and Pennycook [2022] find that AI-written headlines are perceived as less accurate. Glikson and Asscher [2023] found that AI usage in emotional communication diminishes perceived authenticity and forgiveness, a crucial factor in crisis communication effectiveness. Karinshak, Liu, Park and Hancock [2023] demonstrated AI’s efficacy in crafting pro-vaccination messages under human oversight, yet audiences still prefer public health communications from human sources. Finally, Kreps and Kriner [2023] showed that AI-generated emails by legislators to their constituents were perceived as less credible, cautioning against overreliance on unedited AI content in professional communication. These findings collectively underline the nuanced impact of AI on communication perception and the importance of balancing efficiency with authenticity and credibility.

In addition to the implications of AI-mediated communication, structural aspects of its various implementations loom large for communicating organizations. Fears about job losses, quality, and ethics related to automation and AI have also emerged in journalism [Munoriyarwa, Chiumbu & Motsaathebe, 2023; Noain-Sánchez, 2022; Peña-Fernández, Meso-Ayerdi, Larrondo-Ureta & Díaz-Noci, 2023]. Furthermore, Zerfass, Hagelstein and Tench [2020] show that communication professionals have a limited understanding of AI and lack individual competencies in using these technologies. Institutional pressures, from data privacy laws to ethical norms, shape the use of analytics in digital communications, highlighting the influence of coercive, normative, and mimetic forces in strategic communication [Economou, Luck & Bartlett, 2023].

Studies on human-machine communication and AI-mediated communication reveal further insights into the evolving interactions between humans and AI technologies that are relevant to science communication. Bergner, Hildebrand and Häubl [2023] found that verbal embodiment properties in conversational AI significantly shape consumer-brand relationships by influencing consumer perceptions. A meta-analysis by Huang and Wang [2023] suggests that AI can be more persuasive than humans, highlighting AI’s potential in influencing decisions. Research by Mieczkowski and Hancock [2022] on the agency, expertise, and roles of AI systems in communication indicates that perceptions of AI’s agency and expertise significantly affect AI-mediated interactions. Wenker [2023] investigates how AI-generated smart replies impact language and agency in the workplace, indicating changes in communication dynamics. These examples indicate that the nature of communication is evolving as AI systems are increasingly taking part in decision-making and content creation.

3 Generative AI in university communication

3.1 Relevant concepts

The adoption and integration of generative AI within academic institutions, particularly in the context of university communication, can be effectively analyzed through the lens of established theories such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). TAM, introduced by Davis [1986], focuses on perceived usefulness and ease of use as primary factors driving technology acceptance. This model is instrumental in understanding the motivations behind university communicators’ adoption of AI tools, such as chatbots, by evaluating their functional benefits against user demands for simplicity and efficiency. On the other hand, UTAUT, proposed by Venkatesh, Morris, Davis and Davis [2003], expands this perspective by incorporating performance expectancy, effort expectancy, social influence, and facilitating conditions as determinants of technology use.

These theoretical frameworks together provide a comprehensive understanding of the factors influencing the adoption of generative AI in university settings. For example, TAM’s focus on perceived ease of use and usefulness is critical in assessing whether AI tools meet the specific needs of university communication, such as enhancing the quality and efficiency of creating and disseminating information. Meanwhile, UTAUT’s inclusion of social influence and facilitating conditions sheds light on the broader environmental and organizational support necessary for the successful integration of AI technologies.

In a similar vein, Socio-Technical Systems theory (STS), recognizing the interplay between technological innovations and organizational structures, regards technologies as being shaped by and embedded within social contexts [Bijker, Hughes & Pinch, 2012; Orlikowski, 1992]. This perspective is crucial for analyzing the integration of generative AI tools like ChatGPT into existing university practices, potentially catalyzing new forms of communication and organizational dynamics [Leonardi, 2011]. The socio-technical perspective unfolds in three primary directions: the social constructivist approach views technological and scientific developments as shaped by social contexts; interpretative flexibility allows for multiple meanings of technological artifacts based on their social surroundings; and closure mechanisms establish a prevailing interpretation of technology, marginalizing alternative views to ensure social cohesion [Leonardi & Barley, 2008; Pinch & Bijker, 1984].

In the context of this study, the implications of AI tools on university communication encompass their influence on the social dynamics within academia and university administration, the perception of their optimal applications, and the evolving consensus regarding their intended purposes and utility. This exploration offers insights into how generative AI reshapes communication strategies and practices in higher education settings.

3.2 Research questions and assumptions

Applying these concepts collectively, it becomes evident that for generative AI tools to be effectively incorporated into university communication strategies, they must be perceived as useful and easy to use, supported by an organizational environment that fosters technological innovation, and harmonized with the social and organizational fabric of the institution. This requires a comprehensive approach that considers not only the technological capabilities and operational needs but also the socio-technical environment, ensuring that AI tools are embedded in a manner that respects and enhances existing communication practices and organizational dynamics.

Given the concepts discussed above and the insights from the literature review, we can derive specific research questions that guide the empirical examination of the topic. This study started from the central question of what experiences university press offices and communication departments have already had with novel generative AI tools and what their expectations are for future developments. We can now turn our attention to considerations of quality and efficiency, the organizational integration of generative AI tools, and limiting factors and needs:

  • RQ1: How do universities differ in their adoption of generative AI tools? Given differences in resources and the finding that private universities are more prone to digital modes of communication, one might assume that there are distinct patterns across the university landscape.

  • RQ2: What are the expectancies associated with generative AI? High performance and reasonable effort (UTAUT) are assumed to be relevant factors for adoption and potential use of these tools. Given the ethical concerns discussed above, one might expect a number of barriers to meeting these expectations.

  • RQ3: What needs and features are currently driving the perceived usefulness of AI tools? This question touches on the issues of perceived efficiency and quality of communication (as discussed in TAM). This is reflected in the needs and the important functions of the tools. One might assume that efficiency gains dominate the expectations of the usefulness of generative AI at this early stage of adoption.

  • RQ4: How do universities deal with the development of generative AI? Building on the socio-technical perspective, one might expect concerns, internal debates, and strategies to play a role in the use and acceptance of new AI tools.

4 Methods and data

The objects of analysis are the press offices of universities, as they are in charge of implementing generative AI for university communication. Data were collected through a survey among the heads of communication at German universities. The survey was conducted in May 2023, at an early stage after the introduction of the aforementioned AI tools, and developments in this area remain highly dynamic. The study aims to provide initial insights into the impact of these tools on the field of university communication. At the same time, it seems advisable to repeat such surveys at a later date to observe changes in the use and assessment of these tools over time.

4.1 Sampling and survey methods

The present study was conducted as a partially standardized online survey among German universities, including universities of applied sciences (UAS), artistic universities (AU), and cooperative state universities (“Duale Hochschule”, CSU). All state and state-recognized German universities with a minimum of 200 students, including private, artistic, and theological institutions, were included in the sample (n = 318). The contact persons were the heads of the press offices and communication departments, as they are considered to be in the best position to assess communication strategies and practices; they were selected as the single respondent of each university. Contact data (names, email) were obtained from the website hochschulkompass.de (as of May 2023), which lists all universities in Germany along with key characteristics and contact data.

Generative AI applications comprise various forms of text, image, code, audio, or video creation [Gozalo-Brizuela & Garrido-Merchán, 2023]. Determining which specific generative AI tools to include as examples was straightforward. For each application, such as text or image generation, the most common tools as of May 2023 were researched using Google search. The examples were intended to illustrate specific tools that the respondents might use or have heard of. Table 1 breaks down the functions and example tools.

The questionnaire (see appendix A) was programmed in LimeSurvey as a closed online survey with a fixed respondent group. Before the survey started, two practitioners from different universities provided feedback to improve the questionnaire. After initiating the survey, adjustments to the respondent group were made due to occasional invalid email addresses or the respective person no longer being employed at the university. The survey explicitly asked about the use, expectations, and needs regarding generative AI tools. Only the question about the use of specific AI tools was mandatory; all others could be skipped during the survey, and no filters were applied. The questionnaire included several additional questions on relevance, satisfaction, budget, specific functions, and challenges of AI-supported tools. It also inquired about the role of tools like ChatGPT in internal discussions at universities and how respondents assess the future development of university communication through such tools. The survey commenced on May 8, 2023, and concluded on June 2, 2023. Universities were invited via email and received two reminders during the survey period. Data analysis was performed using the programming language R with the software RStudio.

Table 1. Applications and example AI tools in the survey.

4.2 Response rate and representativeness of the collected data

The total population for this survey consists of 318 universities, out of which 101 participated in the survey. This results in a response rate of 32%, a highly satisfactory figure considering the generally high number of surveys directed at universities. This response rate allows for comparisons among subgroups, such as by type of university. However, the question of the representativeness of the resulting sample remains to be addressed. To this end, the total population is compared with the sample along three characteristics: the type of university, the legal status of the university, and the size of the university. These are fundamental characteristics for classifying universities and, by extension, the higher education landscape — a high congruence between both data sources can thus be taken as an indicator of representativeness (see appendix A for a detailed breakdown).

Regarding the type of university, there is a relatively high level of representativeness. Universities are slightly overrepresented compared to the total population (38% to 34%), while UAS are slightly underrepresented (46% to 51%). AU and CSU are well represented in the survey. The distribution of universities by legal status indicates that public universities are slightly overrepresented in the survey (79% to 71%), while private universities are underrepresented (13% to 22%). Church-affiliated universities are almost evenly represented in the survey. Despite minor deviations, the representativeness regarding governance can be considered satisfactory. University size was also taken into account. Smaller universities with up to 2,000 students are underrepresented in the survey compared to the total population (29% to 37%), while those with 2,000 to 5,000 students (28% to 23%) and 10,000 to 20,000 students are overrepresented (17% to 11%). The groups of universities with 5,000 to 10,000 (16%) and more than 20,000 students (11%) are similarly represented in both the total population and the sample. It can be concluded that the survey exhibits more than satisfactory representativeness overall, though slight deviations are observed in the representation of different types of universities, governance structures, and university sizes. These minor deviations are unlikely to significantly impact the general conclusions of the study.
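
To make this comparison concrete, a minimal R sketch follows (R was used for the study’s analyses, see section 4.1). It recomputes the response rate and the percentage-point deviations between sample and population shares for the two university types for which both figures are reported above; the numbers are transcribed from the text, and the object names are illustrative.

    # Response rate: 101 of the 318 invited universities participated
    n_population <- 318
    n_sample     <- 101
    round(100 * n_sample / n_population)  # 32 (percent)

    # Shares (%) by university type, as reported in the text
    # (only the two types with explicit figures: universities and UAS)
    population_share <- c(University = 34, UAS = 51)
    sample_share     <- c(University = 38, UAS = 46)
    sample_share - population_share  # deviations: +4 and -5 points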

5 Survey results

5.1 Use of AI tools in university communication

In the first part of the questionnaire, respondents were asked about their concrete experiences with generative AI tools. The focus was primarily on which tools are already in use, for what purposes, and how satisfactory the results have been so far.

The first and central question of the survey was: “Which of these AI tools, which mostly generate content based on simple text inputs (prompts), do you or your department currently use for the communication and public relations work of your university?”. This question directly relates to RQ1 (see above). There were six response options, ranging from “I am not aware of any of these services” and “I have heard of this service but have not yet used it” through “I have already tried it” to regular use in three gradations (“at least once a month”, “once a week”, and “daily”). The analysis of the use of AI tools in the communication and public relations departments of universities shows a wide range of usage frequencies and levels of awareness (see Table 2).

Table 2. Awareness and usage of generative AI tools.

There is a broad base of experience with text creation tools like ChatGPT (40% had tried it), but regular use is still limited (22%). With regard to other chatbots that also offer integrated web search (e.g. Bing Chat) or document uploads (e.g. ChatPDF), awareness is high, but usage is low. Tools for the automated creation of presentation slides are the least well known, and no regular use was found. AI-supported translation and language correction tools (e.g. DeepL, Grammarly) show the highest frequency of use, with 73% regular use. Other tools for applications like image generation (e.g. Dall-E2, Midjourney), automatic transcription (e.g. Tucan, Otter.ai), video creation (e.g. Synthesia, Veed.io), and design creation (e.g. Microsoft Designer) show marginal usage and moderate awareness.

For subgroups, the results break down as follows (see appendix A for the full table): there were no significant differences in usage and awareness between the types of higher education institution (university, UAS, artistic university) for text creation with ChatGPT et al. (Chi-square = 17.68, df = 15, p-value = 0.279). Regular use of ChatGPT ranged between 24% (universities) and 17% (artistic universities). The patterns for other functions, such as translation, were roughly similar. However, a significant difference for text creation tools was identified by legal status, i.e. public or private (Chi-square = 21.54, df = 10, p-value = 0.0176). Private universities showed much higher regular use of text creation chatbots (44%) than public universities (20%). No significant difference for text creation tools was identified by university size (Chi-square = 21.56, df = 20, p-value = 0.364). However, large universities (20,000 students or more) reported the highest regular use of ChatGPT et al. (36%) and of translation tools (91%). Small universities (under 2,000 students) reported 16% regular use of text creation chatbots.
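
To illustrate the kind of test reported here, the following R sketch runs a chi-square test of independence on a hypothetical contingency table of the six usage categories by legal status. The counts are invented for illustration only, since the paper does not report raw frequencies, but the table’s dimensions match the reported df of 10, i.e. (6 − 1) × (3 − 1).

    # Hypothetical counts: six usage categories (rows) by three legal
    # statuses (columns). These numbers are placeholders, not survey data.
    usage <- matrix(
      c( 5,  2, 1,   # not aware
        20,  3, 2,   # heard of, not yet used
        32,  6, 2,   # tried
        10,  4, 1,   # monthly
         4,  2, 0,   # weekly
         2,  2, 0),  # daily
      nrow = 6, byrow = TRUE,
      dimnames = list(
        c("not aware", "heard of", "tried", "monthly", "weekly", "daily"),
        c("public", "private", "church")))

    # Pearson's chi-squared test of independence; df = (6 - 1) * (3 - 1) = 10.
    # With small cell counts R warns that the approximation may be inaccurate;
    # chisq.test(usage, simulate.p.value = TRUE) is an alternative.
    chisq.test(usage)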

5.2 Specific use cases for AI tools

In an open-text question, survey participants were asked to identify the generative AI-supported tools they find most relevant for their work and the specific applications for these tools. For ChatGPT, respondents mentioned: preparation and editing of social media posts (N = 7); creation and editing of editorial texts, including alternative phrasing suggestions, headlines, and framework development (N = 6); support in brainstorming, strategy, and concept development (N = 2); and composition of texts for various occasions, such as speeches, program notes for concerts, brochure texts, press releases, and artist biographies (N = 2). For DeepL, respondents mentioned: translation of texts into English, including emails, websites, social media posts, academic texts, texts for bilingual websites, and quick translations (N = 16); and support and verification of translations, including alternative phrasing suggestions (N = 3). For Dall-E2 and Midjourney, the generation of suitable images (N = 1) and of stock images (N = 2) was mentioned.

Some respondents indicated that they have not yet used any of the tools or are still in the testing phase. Others mentioned that they currently lack the knowledge needed to use these tools adequately.

5.3 Satisfaction with the use of AI tools

The survey also aimed to assess satisfaction with AI tools among respondents (relating to RQ2). When asked, “How satisfied are you with the results achieved by your department through the use of AI tools in your public relations work?” the responses (Table 3) indicated a range of satisfaction levels. Overall, 25% of respondents reported being somewhat dissatisfied. Another 25% indicated they were somewhat satisfied. A smaller group, 5%, expressed being very dissatisfied, while an equal percentage (5%) were very satisfied. Notably, 41% of respondents had mixed feelings, indicating they were partly satisfied and partly dissatisfied. Moreover, satisfaction is slightly lower for private universities, but not significantly (Wilcoxon rank-sum test, p = 0.369). This is somewhat surprising as private universities had shown a significantly higher usage of text creation AI tools.

Table 3. Satisfaction with the use of AI tools.

These figures show that the majority of respondents have mixed feelings about their satisfaction with AI tools (mean and median = 3, SD = 0.943), with only a small number expressing high levels of either satisfaction or dissatisfaction. Despite exploring various potential relationships, we did not identify any robust connections between satisfaction level and the other questions of the survey.
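
For reference, the Wilcoxon rank-sum comparison mentioned above and the descriptive statistics reported here can be sketched in R as follows. The satisfaction vectors are invented placeholders on the five-point scale (the paper reports only the aggregate distribution and p = 0.369), so the output will not reproduce the reported values.

    # Hypothetical 1-5 satisfaction scores (1 = very dissatisfied,
    # 5 = very satisfied); placeholders, not survey data
    public  <- c(3, 2, 3, 4, 3, 2, 3, 3, 4, 2, 3, 5, 1, 3, 3)
    private <- c(2, 3, 3, 2, 4, 3, 2, 3)

    # Descriptive statistics of the kind reported in the text
    scores <- c(public, private)
    mean(scores); median(scores); sd(scores)

    # Two-sample Wilcoxon rank-sum (Mann-Whitney) test comparing
    # satisfaction between public and private universities
    wilcox.test(public, private)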

5.4 Challenges and difficulties

The currently low usage of most of the AI tools discussed here could be related to difficulties encountered in their utilization (see RQ2). Respondents were asked to identify important challenges they faced when utilizing generative AI tools in their public relations work (Table 4). Data protection concerns were cited most often (52%), followed by ethical concerns (42%). Difficulties in optimal tool usage were noted by 36% and technical issues by 24%. The lack of tool adaptability and insufficient training opportunities, highlighted by 20% of respondents, were less commonly reported issues. Notably, a majority of respondents did not mention any of these selected challenges, underscoring a varied perception of difficulties in using AI tools.

Table 4. Important challenges or difficulties in using the tools.

5.5 Important functions of AI tools

Respondents were asked to identify AI tool functions or features important or appealing for their department’s work (Table 5). This relates to RQ3. Automated translations emerged as the most valued feature, with a mean score of 3.2 and 40.6% of respondents considering it important, indicating a significant demand for multilingual communication capabilities. Revision and editing of texts also scored relatively high (mean = 2.9, SD = 1.4), deemed important by 25.7% of participants, highlighting the emphasis on content quality. In contrast, the creation of personalized content and graphics received lower importance scores (mean = 2.2 and 2.4, respectively), with less than 10% of respondents marking them as crucial. Automated text generation and social media content optimization were considered important by 13.9% and 18.8% of respondents, respectively, suggesting a moderate interest in these functions.

Table 5. Important functions of generative AI tools.

5.6 Needs and goals of using AI tools

Additionally, respondents were queried about the specific needs and objectives (see RQ3) their department aims to achieve in communication and public relations efforts through the application of the aforementioned AI tools (Table 6). “Increasing efficiency in communication” was important to 48.9%, while “improving communication quality” was considered relevant by only 14.8% and “expanding the reach of communication” was significant for just 9.1%. Notably, 72.7% identified “time saving in content creation” as a key benefit, marking it as the most valued aspect of AI tool usage. In contrast, “personalization of communication” was a priority for only 2.3%, indicating diverse perceptions of the value offered by these tools.

Table 6. Needs and goals of using AI tools.

5.7 Budget for AI tools

Some of the AI services offer paid subscriptions that provide extended functions or enable more intensive use. Integrating the services into other programs or scripts usually requires interface (API) access, which is billed according to usage. In short, professional use of AI tools is usually associated with costs. Hence, the questionnaire included a question on the available budget for AI tools (Table 7). The largest group, 44%, reported a modest monthly budget of up to 50 euros. A further 23% had a budget ranging between 50 and 150 euros, while only 5% allocated between 150 and 500 euros. Notably, no university reported a budget between 500 and 1,000 euros, and a mere 3% had a budget exceeding 1,000 euros. Additionally, 26% were unsure of their exact budget allocation. These findings are drawn from the responses of about one-third of all participants (n = 39) and should be interpreted cautiously, as universities not using AI tools are likely to have skipped this question.

Table 7. Budget for AI tools.

5.8 Generative AI in internal university debates

To capture the broader perspective, the respondents were asked about the significance of generative AI tools such as ChatGPT in their institution’s internal discussions (Table 8), which relates to RQ4. Generative AI tools like ChatGPT are discussed in over half of universities, primarily in committees and commissions (51.8%), yet only 4.8% have established guidelines or regulations for their use, highlighting a gap between discussion and policy implementation. While 26.5% of universities have dedicated working groups or committees for generative AI, only 2.4% have strategic goals or initiatives in place, suggesting these tools are not yet a strategic focus in most universities. Training programs for generative AI are rare (15.7%), and for 30.1% of universities, these tools are not a central issue, reflecting varied levels of prioritization and integration.

Table 8. Generative AI tools in internal university debates.

5.9 Outlook on the future of university communication

Finally, the respondents were asked to provide an outlook on the potential transformation of higher education communication in Germany due to generative AI tools such as ChatGPT, Bing Chat, etc., in the upcoming years. They were also requested to highlight any significant risks or opportunities they foresee. These topics relate to RQ2, RQ3 and RQ4. Respondents foresee a range of changes, opportunities, and challenges stemming from the adoption of AI tools in university communications. They predict that these tools will lead to enhanced efficiency and quicker processes (N = 9), with some tasks shifting towards theme selection (N = 2) and a notable transformation in the way texts are created (N = 9). The most significant opportunities highlighted include time and labor savings (N = 12), alongside benefits such as boosted creativity (N = 2), improved text quality, and superior translation capabilities (N = 3).

On the other hand, respondents also express concerns over potential risks associated with AI reliance, such as errors and fake content (N = 10), persisting data protection and copyright issues (N = 4), diminished reflective communication, job losses (N = 4), and reduced personal interaction (N = 5). These insights underscore the multifaceted impact AI tools are expected to have on the field of university communications, reflecting a balanced view of their transformative potential and the challenges they may bring.

6 Discussion

The study provides a pioneering exploration of the use of generative AI tools in university communication, supported by initial assumptions based on existing research and conceptual considerations. The findings confirm the slow adoption of new technologies in universities, likely due to technical difficulties as well as ethical and data protection concerns, especially in public institutions where legal compliance is crucial. Private universities, however, have been quicker to integrate generative AI tools like chatbots. Although AI translation programs like DeepL are widely used, there is still a cautious approach to other AI applications. There are active internal debates on generative AI, but few strategic guidelines or training programs are in place yet, and budgets for AI tools remain very small.

Reflecting on the first question, on differences between universities in AI tool adoption (RQ1), a broad variance in patterns emerged. Higher adoption rates for text creation tools (ChatGPT) were noted at private universities, paralleling previous findings about their use of social media and greater diversity of media types [Lovari & Giglietto, 2012; Peruta & Shields, 2017]. However, this observation does not present a universal narrative: no significant variance in AI tool usage was discerned across higher education institutions of different types and sizes.

Turning to the second question, on the expectancies associated with generative AI (RQ2), the reported progress shows two facets. The use of generative AI tools has yielded satisfactory performance, particularly in areas where time-saving attributes are pivotal. At the same time, it is beset by practical impediments, such as discerning facts from fiction and data protection concerns, that resonate with other studies [Rawte et al., 2023; Zhang et al., 2023; Ninaus & Sailer, 2022; Zawacki-Richter et al., 2019; Arthur et al., 2023], underscoring the need for robust verification mechanisms and a more profound understanding of these tools.

Regarding the third question, on the needs and features that drive the perceived usefulness of AI tools (RQ3), respondents widely acknowledged these tools’ power in automating tasks and amplifying efficiency. This reconfirms the benefits of efficiency observed in earlier works [Matz et al., 2023; Parycek et al., 2023]. However, personalized communication and quality improvement, which are sizable needs, are not yet widely seen as important features of generative AI tools, signalling an underexplored possibility.

Addressing the fourth question, on how universities deal with these developments (RQ4), the research partially supports the assumed relationship between perceived usefulness, AI tool adoption, and continued usage. The impact of ethical considerations and concerns regarding job displacement is undeniable and lines up with previous literature [Azaria et al., 2023; Fecher et al., 2023; Dutta, 2018; Ray, 2023; Munoriyarwa et al., 2023; Noain-Sánchez, 2022; Peña-Fernández et al., 2023]. But the gap in organizational support marks a pressing issue to address. These combined aspects indicate the relevance of an ongoing internal discourse, driving the necessity for actionable, robust ethical guidelines.

The findings also demonstrate the usefulness of TAM, UTAUT, and STS as concepts for analyzing AI adoption in university communication. For example, the use of TAM and UTAUT has provided insights into how social influences and perceived ease of use promote rapid integration in private universities and how efficiency considerations dominate use. Simultaneously, STS has helped explain the slower adoption in public universities, where AI integration is heavily influenced by institutional structures and widespread legal and ethical considerations. More generally, these concepts together are valuable for exploring the different dynamics of AI tool adoption in universities, driven by organizational, legal, and social factors.

This study has some limitations worth mentioning. First, the survey was conducted at an early stage of generative AI, and the situation is changing quickly as ever more new AI functions and tools are introduced. Second, the leading executives of university press offices were invited to participate, not all staff members of the departments. Hence, informal or experimental usage by individual employees is not accounted for. Finally, the measurement items were developed pragmatically, as no consolidated measurement frameworks were available for this purpose. One can assume that further research will identify common measures that will facilitate cross-national research on this topic.

7 Conclusions

This study provides initial insights into the current situation surrounding AI in university communication in 2023, laying the groundwork for further research. The empirical findings from a survey of 318 German universities initially appear sobering: AI-supported translations and language corrections like DeepL are the only widely established AI tools. ChatGPT and other chatbots are regularly used by about one fifth of universities, with other specialized applications being rare. The primary reason seems to be the recent market introduction of these technologies, coupled with ethical, legal, and data protection concerns in public institutions like universities. Despite this, internal discussions are underway, and critical questions about authorship and plagiarism are also being raised by scientific journals and publishers elsewhere in the academic system. There is clearly room for expanding practical knowledge and training about AI, indicating a need for more comprehensive training programs and guidance for university communication.

The results thus imply a need for a “strategic alignment” [Volk & Zerfass, 2020] of university communication within the field, given the low adoption rates, uncertainties, and ongoing challenges associated with generative AI tools. Communications departments should therefore closely monitor scholarly debates and emerging practices at other universities regarding generative AI (e.g. data protection or specific use cases). However, university communications must still comply with the institution’s overarching AI policies, which presents a potential dilemma when communicators try to adapt to recommendations from the field that are more permissive (or restrictive) than internal guidelines. At the societal level, the challenges of verifying AI-generated content and the associated ethical concerns affect public trust in academic institutions and need to be addressed responsibly. Furthermore, Jarzabkowski and Kaplan [2015] remind us that strategy serves as a guide through complexity rather than a fixed route, suggesting that universities should develop adaptive policies that observe and manage politicized issues in AI adoption. Navigating the future of university communication in the AI era requires a compass, not a map.

A Supplementary tables

Table 9. Survey questionnaire. Translated from German original.

Table 10. Distributions of base population and sample.

Table 11. Awareness and usage of AI tools for sub-groups.

References

Arthur, L., Costello, J., Hardy, J., O’Brien, W., Rea, J., Rees, G. & Ganev, G. (2023). On the challenges of deploying privacy-preserving synthetic data in the enterprise. arXiv: 2307.04208

Athaluri, S. A., Manthena, S. V., Kesapragada, V. S. R. K. M., Yarlagadda, V., Dave, T. & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus 15 (4), e37432. doi:10.7759/cureus.37432

Azaria, A., Azoulay, R. & Reches, S. (2023). ChatGPT is a remarkable tool — for experts. arXiv: 2306.03102

Bélanger, C. H., Bali, S. & Longden, B. (2014). How Canadian universities use social media to brand themselves. Tertiary Education and Management 20 (1), 14–29. doi:10.1080/13583883.2013.852237

Bergner, A. S., Hildebrand, C. & Häubl, G. (2023). Machine talk: how verbal embodiment in conversational AI shapes consumer-brand relationships. Journal of Consumer Research 50 (4), 742–764. doi:10.1093/jcr/ucad014

Bhattacharyya, M., Miller, V. M., Bhattacharyya, D. & Miller, L. E. (2023). High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus 15 (5), e39238. doi:10.7759/cureus.39238

Bijker, W. E., Hughes, T. P. & Pinch, T. (Eds.) (2012). The social construction of technological systems: new directions in the sociology and history of technology (Anniversary edition). Cambridge, MA, U.S.A.: The MIT Press.

Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: theory and results (Ph.D. Thesis, Massachusetts Institute of Technology, Sloan School of Management). Retrieved from http://hdl.handle.net/1721.1/15192

De Silva, D., Mills, N., El-Ayoubi, M., Manic, M. & Alahakoon, D. (2023). ChatGPT and generative AI guidelines for addressing academic integrity and augmenting pre-existing chatbots. In 2023 IEEE International Conference on Industrial Technology (ICIT). Orlando, FL, U.S.A. 4–6 April 2023. doi:10.1109/ICIT58465.2023.10143123

Dutta, B. M. (2018). The ethics of artificial intelligence in legal decision making: an empirical study. Psychology and Education 55 (1), 292–302. doi:10.48047/pne.2018.55.1.38

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., … Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 71, 102642. doi:10.1016/j.ijinfomgt.2023.102642

Economou, E., Luck, E. & Bartlett, J. (2023). Between rules, norms and shared understandings: how institutional pressures shape the implementation of data-driven communications. Journal of Communication Management 27 (1), 103–119. doi:10.1108/jcom-01-2022-0009

Elken, M., Stensaker, B. & Dedze, I. (2018). The painters behind the profile: the rise and functioning of communication departments in universities. Higher Education 76 (6), 1109–1122. doi:10.1007/s10734-018-0258-x

Eloundou, T., Manning, S., Mishkin, P. & Rock, D. (2023). GPTs are GPTs: an early look at the labor market impact potential of large language models. arXiv: 2303.10130

Entradas, M., Bauer, M. W., Marcinkowski, F. & Pellegrini, G. (2024). The communication function of universities: is there a place for science communication? Minerva 62 (1), 25–47. doi:10.1007/s11024-023-09499-8

Fähnrich, B., Kuhnhenn, M. & Raaz, O. (2019). Organisationsbezogene Theorien der Hochschulkommunikation. In B. Fähnrich, J. Metag, S. Post & M. S. Schäfer (Eds.), Forschungsfeld Hochschulkommunikation (pp. 61–94). doi:10.1007/978-3-658-22409-7_4

Fähnrich, B., Metag, J., Post, S. & Schäfer, M. S. (2019). Hochschulkommunikation aus kommunikationswissenschaftlicher Perspektive. In B. Fähnrich, J. Metag, S. Post & M. S. Schäfer (Eds.), Forschungsfeld Hochschulkommunikation (pp. 1–21). doi:10.1007/978-3-658-22409-7_1

Farrokhnia, M., Banihashem, S. K., Noroozi, O. & Wals, A. (2024). A SWOT analysis of ChatGPT: implications for educational practice and research. Innovations in Education and Teaching International 61 (3), 460–474. doi:10.1080/14703297.2023.2195846

Fecher, B., Hebing, M., Laufer, M., Pohle, J. & Sofsky, F. (2023). Friend or foe? Exploring the implications of large language models on the science system. AI & Society. doi:10.1007/s00146-023-01791-1

Fürst, S., Vogler, D., Sörensen, I. & Schäfer, M. S. (2022). Communication of higher education institutions: historical developments and changes over the past decade. Studies in Communication Sciences 22 (3), 459–469. doi:10.24434/j.scoms.2022.03.4033

Gilson, A., Safranek, C., Huang, T., Socrates, V., Chi, L., Taylor, R. A. & Chartash, D. (2022). How does ChatGPT perform on the medical licensing exams? The implications of large language models for medical education and knowledge assessment. medRxiv. doi:10.1101/2022.12.23.22283901

Gleason, N. (2022, December 9). ChatGPT and the rise of AI writers: how should higher education respond? Times Higher Education. Retrieved June 6, 2023, from https://www.timeshighereducation.com/campus/chatgpt-and-rise-ai-writers-how-should-higher-education-respond

Glikson, E. & Asscher, O. (2023). AI-mediated apology in a multilingual work context: implications for perceived authenticity and willingness to forgive. Computers in Human Behavior 140, 107592. doi:10.1016/j.chb.2022.107592

Gozalo-Brizuela, R. & Garrido-Merchán, E. C. (2023). A survey of generative AI applications. arXiv: 2306.02781

Gutiérrez, V. & Del Pino, A. D. (2023). The impacts of social media in higher education institutions: how it evolves social media in universities. In Information Resources Management Association (Ed.), Research anthology on applying social networking strategies to classrooms and libraries (pp. 50–68). doi:10.4018/978-1-6684-7123-4.ch004

Hosseini, M. & Horbach, S. P. J. M. (2023). Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other Large Language Models in scholarly peer review. Research Square. doi:10.21203/rs.3.rs-2587766/v1

Huang, G. & Wang, S. (2023). Is artificial intelligence more persuasive than humans? A meta-analysis. PsyArXiv. doi:10.31234/osf.io/ehg7n

Jakesch, M., French, M., Ma, X., Hancock, J. T. & Naaman, M. (2019). AI-mediated communication: how the perception that profile text was written by AI affects trustworthiness. In CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Glasgow, U.K., May 4–9, 2019. doi:10.1145/3290605.3300469

Jarzabkowski, P. & Kaplan, S. (2015). Strategy tools-in-use: a framework for understanding “technologies of rationality” in practice. Strategic Management Journal 36 (4), 537–558. doi:10.1002/smj.2270

Karinshak, E., Liu, S. X., Park, J. S. & Hancock, J. T. (2023). Working with AI to persuade: examining a large language model’s ability to generate pro-vaccination messages. Proceedings of the ACM on Human-Computer Interaction 7 (CSCW1), 116. doi:10.1145/3579592

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. EdArXiv. doi:10.35542/osf.io/5er8f

Kieslich, K., Keller, B. & Starke, C. (2022). Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data & Society 9 (1). doi:10.1177/20539517221092956

Könneker, C. (2024, March 21). The challenge of science communication in the age of AI. Stanford Social Innovation Review. Retrieved March 25, 2024, from https://ssir.org/articles/entry/science-communication-artificial-intelligence

Kreps, S. & Kriner, D. (2023, March 21). How generative AI impacts democratic engagement. Brookings. Retrieved June 10, 2023, from https://www.brookings.edu/techstream/how-generative-ai-impacts-democratic-engagement/

Leonardi, P. M. (2011). When flexible routines meet flexible technologies: affordance, constraint, and the imbrication of human and material agencies. MIS Quarterly 35 (1), 147–167. doi:10.2307/23043493

Leonardi, P. M. & Barley, S. R. (2008). Materiality and change: challenges to building better theory about technology and organizing. Information and Organization 18 (3), 159–176. doi:10.1016/j.infoandorg.2008.03.001

Lermann Henestrosa, A., Greving, H. & Kimmerle, J. (2023). Automated journalism: the effects of AI authorship and evaluative information on the perception of a science journalism article. Computers in Human Behavior 138, 107445. doi:10.1016/j.chb.2022.107445

Lieberman, M. (2023, January 4). What is ChatGPT and how is it used in education? Education Week. Retrieved June 6, 2023, from https://www.edweek.org/technology/what-is-chatgpt-and-how-is-it-used-in-education/2023/01

Longoni, C., Fradkin, A., Cian, L. & Pennycook, G. (2022). News from generative artificial intelligence is believed less. In FAccT ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. Seoul, Republic of Korea, June 21–24, 2022 (pp. 97–106). doi:10.1145/3531146.3533077

Lopezosa, C., Codina, L., Pont-Sorribes, C. & Vállez, M. (2023). Use of generative artificial intelligence in the training of journalists: challenges, uses and training proposal. El Profesional de la Información 32 (4), e320408. doi:10.3145/epi.2023.jul.08

Lovari, A. & Giglietto, F. (2012). Social media and Italian universities: an empirical study on the adoption and use of Facebook, Twitter and Youtube. SSRN Electronic Journal. doi:10.2139/ssrn.1978393

Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S. & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology 74 (5), 570–581. doi:10.1002/asi.24750

Matz, S., Teeny, J., Vaid, S. S., Peters, H., Harari, G. M. & Cerf, M. (2023). The potential of generative AI for personalized persuasion at scale. PsyArXiv. doi:10.31234/osf.io/rn97c

McGowan, A., Gui, Y., Dobbs, M., Shuster, S., Cotter, M., Selloni, A., … Corcoran, C. M. (2023). ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search. Psychiatry Research 326, 115334. doi:10.1016/j.psychres.2023.115334

Metag, J. & Schäfer, M. S. (2017). Hochschulen zwischen Social Media-Spezialisten und Online-Verweigerern. Eine Analyse der Online-Kommunikation promotionsberechtigter Hochschulen in Deutschland, Österreich und der Schweiz [Higher education institutions between social media specialists and online refusers. An analysis of the online communication of doctorate-granting higher education institutions in Germany, Austria and Switzerland]. Studies in Communication | Media 6 (2), 160–195. doi:10.5771/2192-4007-2017-2-160

Mieczkowski, H. & Hancock, J. T. (2022). Examining agency, expertise, and roles of AI systems in AI-mediated communication. OSF Preprints. doi:10.31219/osf.io/asnv4

Munoriyarwa, A., Chiumbu, S. & Motsaathebe, G. (2023). Artificial intelligence practices in everyday news production: the case of South Africa’s mainstream newsrooms. Journalism Practice 17 (7), 1374–1392. doi:10.1080/17512786.2021.1984976

Neuberger, C., Weingart, P., Fähnrich, B., Fecher, B., Schäfer, M. S., Schmid-Petri, H. & Wagner, G. G. (2021). Der digitale Wandel der Wissenschaftskommunikation [The digital transformation of science communication]. Berlin, Germany: Berlin-Brandenburgische Akademie der Wissenschaften. Retrieved from https://edoc.bbaw.de/opus4-bbaw/frontdoor/index/index/year/2021/docId/3526

Ninaus, M. & Sailer, M. (2022). Zwischen Mensch und Maschine: Künstliche Intelligenz zur Förderung von Lernprozessen [Between human and machine: artificial intelligence for supporting learning processes]. Lernen und Lernstörungen 11 (4), 213–224. doi:10.1024/2235-0977/a000386

Noain-Sánchez, A. (2022). Addressing the impact of artificial intelligence on journalism: the perception of experts, journalists and academics. Communication & Society 35 (3), 105–121. doi:10.15581/003.35.3.105-121

OpenAI, Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., … Zoph, B. (2023). GPT-4 technical report. arXiv: 2303.08774

Orlikowski, W. J. (1992). The duality of technology: rethinking the concept of technology in organizations. Organization Science 3 (3), 398–427. doi:10.1287/orsc.3.3.398

Parycek, P., Schmid, V. & Novak, A.-S. (2023). Artificial Intelligence (AI) and automation in administrative procedures: potentials, limitations, and framework conditions. Journal of the Knowledge Economy. doi:10.1007/s13132-023-01433-3

Pavlik, J. V. (2023). Collaborating with ChatGPT: considering the implications of generative artificial intelligence for journalism and media education. Journalism & Mass Communication Educator 78 (1), 84–93. doi:10.1177/10776958221149577

Peña-Fernández, S., Meso-Ayerdi, K., Larrondo-Ureta, A. & Díaz-Noci, J. (2023). Without journalists, there is no journalism: the social dimension of generative artificial intelligence in the media. El Profesional de la Información 32 (2), e320227. doi:10.3145/epi.2023.mar.27

Peruta, A. & Shields, A. B. (2017). Social media in higher education: understanding how colleges and universities use Facebook. Journal of Marketing for Higher Education 27 (1), 131–143. doi:10.1080/08841241.2016.1212451

Peters, H. P. (2022). The role of organizations in the public communication of science — early research, recent studies, and open questions. Studies in Communication Sciences 22 (3), 551–558. doi:10.24434/j.scoms.2022.03.3994

Pinch, T. J. & Bijker, W. E. (1984). The social construction of facts and artefacts: or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science 14 (3), 399–441. doi:10.1177/030631284014003004

Rawte, V., Sheth, A. & Das, A. (2023). A survey of hallucination in large foundation models. arXiv: 2309.05922

Ray, P. P. (2023). ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems 3, 121–154. doi:10.1016/j.iotcps.2023.04.003

Schäfer, M. S. (2023). The Notorious GPT: science communication in the age of artificial intelligence. JCOM 22 (02), Y02. doi:10.22323/2.22020402

Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature 613 (7945), 620–621. doi:10.1038/d41586-023-00107-z

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science 379 (6630), 313. doi:10.1126/science.adg7879

Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., … Lample, G. (2023). LLaMA: open and efficient foundation language models. arXiv: 2302.13971

van Wyk, M. M., Adarkwah, M. A. & Amponsah, S. (2023). Why all the hype about ChatGPT? Academics’ views of a chat-based conversational learning strategy at an open distance e-learning institution. Open Praxis 15 (3), 214–225. doi:10.55982/openpraxis.15.3.563

Venkatesh, V., Morris, M. G., Davis, G. B. & Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Quarterly 27 (3), 425–478. doi:10.2307/30036540

Volk, S. C. & Zerfass, A. (2020). Alignment: explicating a key concept in strategic communication. In H. Nothhaft, K. P. Werder, D. Verčič & A. Zerfass (Eds.), Future directions of strategic communication (pp. 105–123). doi:10.4324/9780429295638

Wenker, K. (2023). Who wrote this? How smart replies impact language and agency in the workplace. Telematics and Informatics Reports 10, 100062. doi:10.1016/j.teler.2023.100062

Wolfram, S. (2023, February 14). What is ChatGPT doing… and why does it work? Stephen Wolfram Writings. Retrieved June 4, 2023, from https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., … Gašević, D. (2023). Practical and ethical challenges of large language models in education: a systematic scoping review. arXiv: 2303.13379

Yang, S., Krause, N. M., Bao, L., Calice, M. N., Newman, T. P., Scheufele, D. A., … Brossard, D. (2023). In AI we trust: the interplay of media use, political ideology, and trust in shaping emerging AI attitudes. Journalism & Mass Communication Quarterly. doi:10.1177/10776990231190868

Zawacki-Richter, O., Marín, V. I., Bond, M. & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education — where are the educators? International Journal of Educational Technology in Higher Education 16 (1), 39. doi:10.1186/s41239-019-0171-0

Zerfass, A., Hagelstein, J. & Tench, R. (2020). Artificial intelligence in communication management: a cross-national study on adoption and knowledge, impact, challenges and risks. Journal of Communication Management 24 (4), 377–389. doi:10.1108/jcom-10-2019-0137

Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., … Shi, S. (2023). Siren’s song in the AI ocean: a survey on hallucination in large language models. arXiv: 2309.01219

Author

Dr. Justus Henke has been a research associate at the Institut für Hochschulforschung Halle-Wittenberg (HoF) since 2012. Since 2019, he has also been a junior research group leader. His research focuses on science communication, science management, the third mission of universities, citizen science, artificial intelligence and university funding.
@HenkeJustus. E-mail: justus.henke@hof.uni-halle.de

Endnotes

1. Dataset, codebook and questionnaire available at: https://doi.org/10.5281/zenodo.10254904.

2. See https://www.hochschulkompass.de/. The database did not contain any descriptive data about the respondents other than the name of the head of department and the name of the communications department. It was not considered necessary to ask respondents for their personal details, as they were asked to answer on behalf of the entire department.

3. Results for CSU cannot be presented due to the low N.

4. Results for Church-affiliated universities cannot be presented due to the low N.