1 Introduction
The rapid development and spread of artificial intelligence (AI) and machine learning have significantly transformed numerous areas of life and work in recent years. Generative AI (genAI) tools, such as ChatGPT, have attracted significant attention since their launch in November 2022 [OpenAI, 2023; Touvron et al., 2023]. These tools are capable of conducting human-like conversations, generating texts, images, or videos, and processing complex queries [Gozalo-Brizuela & Garrido-Merchán, 2023; Ray, 2023]. This technological shift is increasingly evident in higher education, particularly within university communication. University communication broadly encompasses the internal and external organizational communication of higher education institutions [Fähnrich et al., 2019]. It plays a central role in conveying scientific findings, promoting dialogues with the public, and supporting administrative processes. GenAI offers promising applications, from streamlining administrative tasks to producing professional communication content in varied formats. However, its adoption also poses challenges, particularly concerning the factual accuracy and authenticity of AI-generated outputs.
In 2023, an empirical study examined the application and perception of genAI tools in university communication for the first time [Henke, 2023, 2024]. Surveying communication departments of German higher education institutions, the study analyzed usage patterns, challenges, and opportunities. The findings revealed that AI-supported translation and language correction tools were the most widely used, while tools like ChatGPT saw limited adoption. Satisfaction with genAI tools was mixed, with broader implementation hindered by technical difficulties, ethical concerns, data protection issues, and limited awareness of their diverse applications. These findings align with other studies identifying similar barriers to genAI adoption [Athaluri et al., 2023; Bhattacharyya et al., 2023; McGowan et al., 2023; Rawte et al., 2023; Zhang et al., 2023]. They also connect to related research on generative AI in the broader scientific context [De Silva et al., 2023; Lopezosa et al., 2023; Ray, 2023]. This presents a particular challenge, as not only the communication of science but also the production of scientific knowledge itself is increasingly supported and challenged by genAI [Elbadawi et al., 2024; Messeri & Crockett, 2024; Prillaman, 2024; Royal Society, 2024; Stone, 2023; Tate et al., 2023]. Science communicators must not only adapt to genAI in their work but also navigate how to effectively communicate about a rapidly evolving, genAI-influenced scientific landscape.
Drawing from the second wave of the survey, conducted in May 2024, this study aims to gain new insights into the current applications and perception of generative AI tools in higher education communication and to trace changes from the previous year. In light of the results of last year’s survey and technological advancements, the following research questions arise: (RQ1) How have the acceptance and use of generative AI tools developed since the first survey? (RQ2) How has this affected universities’ communication strategies? (RQ3) Are ethical and privacy challenges still relevant, and to what extent? (RQ4) Which new challenges and assessments of the role of generative AI have emerged?
In the remainder of this study, I first present central developments in the field of higher education communication, followed by a detailed description of the methodological approach and the characteristics of the survey. I then present the results and discuss them in the context of the findings of the first wave as well as current trends and future perspectives.
2 Background and assumptions
Germany’s higher education landscape consists mainly of public universities, which operate under a state-level regulatory framework that varies across the 16 Länder, while private universities generally have more flexibility but must meet regulatory standards to be recognized by the state [Kehm, 2018]. This partly extends to the press offices or communication departments, which enjoy a relatively high degree of institutional autonomy. Higher education communication, which represents a specific form of science communication, has some unique characteristics and challenges. Unlike general science communication, which primarily aims to convey scientific findings to a broad audience, higher education communication also addresses the academic community, including students, faculty, and researchers, as well as external stakeholders such as politics, business, and society [Elken et al., 2018; Fürst et al., 2022; Peters, 2022]. Fähnrich et al. [2019, p. 8] define higher education communication as “all forms of communication in, from, and about higher education institutions, including their production, content, use, and impact, carried out by actors within and outside the higher education organization”. This study focuses on the practical work of central communication teams at universities. These departments perform four general communication functions: public relations, marketing, public affairs, and science communication, covering a wide range of specific communication activities [Entradas, Marcinkowski et al., 2023; Entradas, Bauer et al., 2023].
With regard to genAI and science communication, Schäfer [2023] emphasizes genAI’s importance and its potential impacts on science communication, pointing out the need for further research. He identifies four relevant research directions: (1) analyses of public communication about genAI, (2) investigation of user interactions with ChatGPT and similar tools, (3) the impacts of generative AI on the fundamentals of science communication, and (4) conceptual work on human-machine communication. Schäfer stresses that the science communication community must quickly adapt to these new questions, as genAI could transform many life-relevant aspects of science communication, which has implications for trust in science [Alvarez et al., 2024; Biyela et al., 2024; Dunn et al., 2023]. This study follows the third line of research by investigating the adoption of genAI in the field of university communication. Furthermore, Carsten Könneker [2024] highlights in an opinion article that AI-based tools are transforming science communication through productivity increases, greater educational equity, and new dissemination pathways such as participatory practices. At the same time, they bring challenges such as misinformation and potential for misuse, which underscores the indispensable role of independent media, human control and quality journalism [Dijkstra et al., 2024; Wihbey, 2024].
The initial study of genAI adoption in university communication in the year 2023 [Henke, 2024] drew on three complementary theoretical perspectives that together illuminate both individual and organizational dimensions of technology adoption. The Technology Acceptance Model (TAM) [Davis, 1986] posits that perceived usefulness and ease of use drive individual adoption patterns, suggesting potential feedback loops as users gain experience. The Unified Theory of Acceptance and Use of Technology (UTAUT) [Venkatesh et al., 2003] extends this by emphasizing institutional factors like performance expectancy, social influence, and facilitating conditions — particularly relevant for understanding adoption patterns across different university types. Socio-Technical Systems Theory (STS) [Bijker et al., 1987; Leonardi, 2011; Orlikowski, 1992] adds critical insight into how technologies become embedded within organizational contexts through mutual adaptation between technical capabilities and existing social structures. Of particular relevance is the concept of ‘interpretive flexibility’ introduced by Bijker et al. [1987], which suggests that technological artifacts can be understood differently by various social groups, leading to different patterns of use and integration. Orlikowski [1992] later applied this concept specifically to information technology in organizations, showing how users shape technology use to fit existing structures while incrementally adjusting those structures. This theoretical synthesis suggests a multi-level process where individual acceptance, institutional support, and organizational adaptation interact to shape technology adoption patterns, potentially explaining why adoption might proceed at different rates across institutional contexts.
Building on these insights, the follow-up study posits several research assumptions linked to its research questions: First, we expect increased adoption rates as initial barriers to perceived ease of use diminish and social systems adapt (RQ1). Second, following UTAUT’s performance expectancy construct and STS’s focus on organizational routines, genAI tools will likely integrate more deeply into communication strategies as practices and expectations co-evolve (RQ2). Third, while ethical and privacy concerns may persist, their manifestation likely evolves as organizations develop socio-technical arrangements to manage them (RQ3). Finally, TAM’s emphasis on perceived usefulness and STS’s focus on emergent practices suggest new challenges and opportunities may arise as users and organizations push the boundaries of genAI use (RQ4).
3 Methods
3.1 Sampling and data collection
The units of analysis are the communication and press offices of German universities, with data collected from their respective heads as key informants, since they oversee and observe the adoption of genAI in their departments. The survey of these communication directors was conducted in May 2024, approximately one and a half years after the introduction of the AI tools discussed above, with developments in this area continuing to be highly dynamic. While the study captures data from 2023 and 2024, its cross-sectional and anonymous design precludes longitudinal tracking of individual institutions’ development. Changes between years thus reflect aggregate shifts in the higher education landscape rather than institutional trajectories.
The methodology and the questionnaire remained largely consistent with the 2023 survey to enable direct comparison of question items. However, based on qualitative responses from the previous wave and field observations, several items were added to the questionnaire. These additions included questions about the factual accuracy of AI-generated content, which emerged as a substantial concern in 2023’s open responses, and items (e.g. the use of universities’ own AI chatbots) that reflect developments observed in the field.
The present study was conducted as a partially standardized online survey among German universities, including universities of applied sciences (UAS), art colleges, and cooperative state universities. All state or state-recognized German higher education institutions, including private, artistic, and theological universities, with at least 200 students were included in the sample (n=318). Contact data (names, email addresses) were obtained from the website hochschulkompass.de (as of May 2024), which lists all universities in Germany with essential characteristics and contact information. The contacts were always the heads of press offices and communication departments, as they are best positioned to evaluate communication strategies and practices. They were selected as the sole respondent for each university.
GenAI applications encompass various forms of text, image, code, audio, or video creation [Gozalo-Brizuela & Garrido-Merchán, 2023]. The selection of specific generative AI tools as examples was straightforward. For each application, such as text or image generation, the most common tools as of May 2024 were identified through a Google search. The examples were intended to illustrate specific tools that might be familiar to the respondents. Table 1 shows applications and example tools.
The questionnaire (see appendix) was programmed in LimeSurvey as a closed online survey with a fixed group of participants. Before the survey began, two practitioners from different universities provided feedback to improve the questionnaire. Additionally, individual items from the previous year’s survey were adjusted or supplemented. After the survey started, adjustments were made to the participant group, as occasionally invalid email addresses were encountered or the respective person was no longer employed at the university. The survey explicitly asked about the use, expectations, and needs regarding generative AI tools. Only the question about the use of specific AI tools was mandatory; all other questions could be skipped, and no filters were applied. The questionnaire contained several additional questions about the relevance, satisfaction, budget, specific functions, and challenges of AI-supported tools. It also inquired about the role of tools like ChatGPT in internal discussions at universities and how respondents assess the future development of university communication through such tools. The survey began on April 29, 2024, and ended on June 5, 2024. Universities were invited via email and received two reminders during the survey period. Data analysis was performed using the programming languages R and Python, as well as the RStudio software.
3.2 Response rate and representativeness
The survey of 318 higher education institutions yielded 82 responses, representing a 25 % response rate. This rate allows for subgroup comparisons and is considered satisfactory given the frequency of surveys in higher education. To assess representativeness, the sample was compared to the population across three characteristics: institution type, governing body, and size. The distribution of institution types in the sample closely matched the population, with slight underrepresentation of universities (31 % vs. 34 %) and universities of applied sciences (47 % vs. 51 %), and overrepresentation of artistic institutions (21 % vs. 14 %). Cooperative state universities were accurately represented at 1 %.
Regarding legal status, public institutions were slightly overrepresented (75 % vs. 71 %), private institutions underrepresented (17 % vs. 22 %), and church-affiliated institutions closely matched (8 % vs. 7 %). The size distribution showed some deviations, with smaller institutions (up to 2,000 students) and medium-sized institutions (5,000–10,000 students) overrepresented, while larger institutions (over 10,000 students) were slightly underrepresented. Despite these minor deviations, the overall representativeness of the survey is satisfactory. The slight biases in size and type are unlikely to substantially impact the study’s general conclusions.
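To illustrate how such a representativeness check can be operationalized, the sketch below runs a chi-square goodness-of-fit test on the institution-type distribution. This is a minimal illustration, not the study’s actual analysis script: the sample counts are hypothetical values approximately consistent with the reported shares, and the very low expected count for cooperative state universities would, in practice, call for merging categories or an exact test.

```python
from scipy.stats import chisquare

# Hypothetical sample counts by institution type (universities, UAS,
# artistic, cooperative state), roughly matching the reported shares (n=82).
observed = [25, 39, 17, 1]

# Population shares by institution type, as reported above.
population_shares = [0.34, 0.51, 0.14, 0.01]
expected = [share * sum(observed) for share in population_shares]

# Goodness-of-fit test: does the sample deviate from the population?
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```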
4 Results
4.1 Development of adoption and use cases
The survey results on the use of genAI-supported tools in the communication and public relations departments of German higher education institutions show notable differences in the prevalence and regularity of use of various services. Regular use is defined here as use at least once a month, i.e. the sum of the daily, weekly, and monthly response categories. Particularly striking is the regular use of translation and language correction tools like DeepL, which shows the highest proportion among all AI tools surveyed at 80 %, with 41 % of respondents using these tools at least once daily. This indicates a high demand for efficient and precise language processing.
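As a minimal illustration of how this measure is derived, the following Python/pandas sketch aggregates frequency responses into regular-use and daily-use shares. Column and category labels are invented for illustration and do not reproduce the questionnaire’s actual wording.

```python
import pandas as pd

# Invented frequency responses for one tool (labels are illustrative).
df = pd.DataFrame({"deepl_use": ["daily", "daily", "weekly", "monthly",
                                 "rarely", "never", "daily", "weekly"]})

REGULAR = {"daily", "weekly", "monthly"}  # use at least once a month

# Regular use: share of respondents in any of the three categories.
regular_share = df["deepl_use"].isin(REGULAR).mean()
daily_share = (df["deepl_use"] == "daily").mean()
print(f"regular use: {regular_share:.0%}, daily use: {daily_share:.0%}")
```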
In the area of text generation without web search, such as through ChatGPT, regular use is at 59 %. 23 % of respondents use this service daily and another 24 % weekly. This also suggests a high relevance of these tools for content creation. Less frequently used are tools for text generation with web search, such as Microsoft Copilot, with regular use at 33 %. Document analysis tools like ChatPDF and presentation slide generators like Slides.ai are only regularly used by 13 % and 4 % of respondents respectively, indicating a lower need in these areas.
Tools for automatic transcription (22 %) and for creating designs and mockups (22 %) show moderate use, while image generation is used regularly by 27 % of respondents and audio generation by just 1 %. Notably, video generation tools like Synthesia are not used regularly at all. These results reflect the different requirements and priorities in communication work at higher education institutions, where translation and text generation tools show particularly high usage rates. Statistical tests for differences across institutional characteristics indicate significant disparities between private and public institutions (75 % vs. 60 % regular use, Chi-square=24.9, p=0.001) and by subject focus (Chi-square=23.9, p=0.02), with institutions having balanced subject profiles showing the highest adoption rates.
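The group comparisons reported here rest on chi-square tests of independence between institutional characteristics and usage. A minimal sketch of such a test follows; the 2x2 contingency table is purely illustrative and does not reproduce the reported statistics, which may have been computed on more detailed usage categories.

```python
from scipy.stats import chi2_contingency

# Purely illustrative contingency table: governing body by regular genAI use.
#           regular  not regular
table = [[12,  4],   # private institutions
         [37, 25]]   # public institutions

# Test whether regular use is independent of the governing body.
stat, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {stat:.1f}, dof = {dof}, p = {p:.3f}")
```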
The data reveals substantial increases in generative AI adoption across most tools between 2023 and 2024, with text generation without web search showing the most dramatic rise from 22 % to 59 % regular use, followed by text generation with web search (5 % to 33 %). Notable growth occurred in image generation (3 % to 27 %), transcription (1 % to 22 %), and design generation (1 % to 22 %), while translation tools maintained high usage with a modest increase from 73 % to 80 %. These shifts suggest a rapid maturation in AI tool adoption, with users expanding beyond basic translation to embrace more sophisticated content generation capabilities (Figure 2).
The analysis of the open-ended responses (n=64) shows that generative AI-supported tools are mainly used in four application areas: text generation, translation, image editing, and specialized functions (Figure 3). ChatGPT and DeepL are the most popular for text and translation, respectively. Image editing tools like Adobe Express, Midjourney and Dall-E are also widely used. Usage patterns vary across institution types. Universities with broad profiles tend to use a wider range of tools, including specialized ones like Perplexity.ai. Universities of applied sciences focus on core tools like ChatGPT and DeepL. Artistic colleges and private institutions emphasize creative applications, particularly in image editing.
Compared to 2023, several new use cases have emerged or gained prominence in 2024. Notably, there is increased utilization of AI tools for strategic communication planning and content curation, with respondents mentioning the use of tools like Perplexity.ai for research-backed content development and ChatGPT for brainstorming communication strategies. The use of genAI for multilingual communication has expanded beyond simple translation to include cultural adaptation of content. Some institutions report using AI tools for crisis communication preparation and social media response templates, applications that were not mentioned in the previous year. Additionally, there is growing integration of AI tools in workflow automation, particularly in coordinating communication across multiple channels and platforms. GenAI usage has thus become more sophisticated and broader in scope.
The analysis of the monthly budgets of communication departments at German higher education institutions for the use of generative AI services shows a strong concentration on lower budget categories. 54 of the surveyed university communication departments provided responses to this question. The largest group of institutions (40 %) reports having a monthly budget of “up to 50 euros” for AI services. This suggests that many departments are using either free versions or low-cost subscriptions. Another notable segment (37 %) invests “between 50 and 150 euros” monthly. This group may be combining premium subscription models or several specialized services while still remaining within a moderate cost framework. Only 10 % of institutions report spending “between 150 and 500 euros” monthly on AI services. None of the surveyed departments reported having budgets in the categories of “500 to 1,000 euros” or “more than 1,000 euros” per month. Furthermore, 13 % of respondents could not provide an exact figure and chose the option “Don’t know”.
The budget allocation data shows a slight decrease compared to 2023 in departments spending under 50 euros monthly (44 % to 40 %), while those spending 50–150 euros increased substantially from 23 % to 37 %, and higher-budget ranges (150–500 euros) doubled from 5 % to 10 %. Notably, uncertainty about AI tool budgets halved, with “Don’t know” responses dropping from 26 % to 13 %, suggesting improved budget tracking and planning for AI implementation.
4.2 Impact on strategies, debates and satisfaction
The survey reveals relevant impacts of AI tools on higher education communication strategies. 36 % of respondents report improved efficiency, while 33 % note increased adaptability to various communication channels. However, only 8 % report changes in team roles and responsibilities, and just 4 % observe a stronger focus on data-driven decision-making. Notably, 38 % of respondents identify a greater need for technical expertise and training, highlighting the importance of skill development in AI tool usage. Conversely, 32 % report no meaningful changes in their communication practices, indicating varied adoption rates across institutions.
These findings suggest that while AI tools are enhancing efficiency and adaptability in many departments, their impact on organizational structure and decision-making processes remains limited. The results underscore the growing importance of AI literacy in higher education communication, while also revealing that a relevant portion of institutions have yet to experience major changes from AI implementation.
The comparative data on organizational impacts shows substantial increases since 2023 in perceived benefits, with efficiency improvements rising from 23 % to 36 % and adaptability to different communication channels nearly tripling from 12 % to 33 %. A notable increase in the need for technical expertise and training (24 % to 38 %) coupled with a decrease in “no significant changes” (45 % to 32 %) suggests broader and deeper AI integration, though structural impacts like changed roles (6 % to 8 %) and data-driven decision-making (4 %) remained relatively stable.
The use of genAI has also been a topic of internal university debate. Respondents reported varied levels of engagement with generative AI tools in German higher education institutions: 53 % report regular committee discussions on AI, 36 % have established dedicated working groups, and 37 % offer AI-related training for staff and students. However, only 14 % have implemented AI chatbots or formal usage guidelines, and 11 % have defined strategic AI initiatives.
Notably, 28 % of institutions report that generative AI is not yet a central topic of internal discussion. This diverse landscape suggests that while many institutions are actively exploring and integrating AI tools, a notable portion are still in the early stages of adoption or have yet to prioritize these technologies in their institutional strategies.
The data shows notable increases in formal AI governance structures between 2023 and 2024, with guidelines tripling (5 % to 14 %), strategic initiatives rising sharply (2 % to 11 %), and training offerings more than doubling (16 % to 37 %). While committee discussions remained stable around 52–53 %, working groups increased (27 % to 36 %), and 14 % reported implementing their own AI chatbots in 2024, suggesting more concrete implementation steps despite AI remaining a non-central topic for about 30 % of institutions across both years.
In addition, satisfaction with genAI varies across applications. The evaluation of generative AI tools in public relations at German higher education institutions shows overall moderate satisfaction (Table 3). The average satisfaction with experiences using generative AI tools is 3.2 (on a scale from 1 “very dissatisfied” to 5 “very satisfied”), with 40 % of users rating the tools above three. Satisfaction varies depending on the specific application area. For text generation, the mean value is 3.1, with 34 % of respondents indicating values above three. Document evaluation shows a lower satisfaction value of 2.9, with only 31 % of users rating the tools positively in this area. The situation is similar for image generation with a mean of 3.1 and 41 % positive ratings. Lower satisfaction values are found in the generation of audio and video, with mean values of 2.4 each and only 11 % positive ratings. This suggests challenges or lower expectations in these areas.
The transcription of audio content shows slightly higher satisfaction with a mean of 3.3 and 43 % positive ratings. Creating presentation slides also achieves a mean of 3.2, with 45 % of users reporting positive experiences. For design generation, satisfaction is at 3.1 with 38 % positive ratings. The highest satisfaction value is found for translation and language corrections with a mean of 3.4 and 58 % positive ratings. User expectations seem to be more strongly met in this area. Overall, satisfaction with generative AI tools varies considerably and strongly depends on the specific application area. While some areas such as translation and text transcription achieve higher satisfaction values, there is clear potential for improvement in audio and video generation.
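The satisfaction figures reported above combine two simple statistics per application area: the mean of the 1–5 ratings and the share of ratings above the scale midpoint of three. A minimal sketch with invented ratings shows the computation:

```python
import pandas as pd

# Invented 1-5 satisfaction ratings for one application area.
ratings = pd.Series([4, 3, 5, 2, 4, 3, 1, 4, 3, 4])

mean_satisfaction = ratings.mean()      # reported as the mean value
positive_share = (ratings > 3).mean()   # share of ratings above three
print(f"mean = {mean_satisfaction:.1f}, positive = {positive_share:.0%}")
```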
Moreover, satisfaction with AI tools has slightly improved since 2023, with an average value of 3.2 in 2024 compared to 3.0 in the previous year. Because the previous survey only included an overall assessment of satisfaction, detailed comparisons for specific applications are not possible.
4.3 Relevant challenges and difficulties
The analysis of the survey results on challenges in using generative AI tools (Table 6) shows that certain difficulties are perceived as particularly relevant, while others are less relevant (on a scale from 1 “not at all relevant” to 5 “very relevant”). Factual accuracy and reliability represent the greatest challenge, with a high mean value of 4.0. In total, 72 % of respondents rate this challenge as relevant (values greater than three). Similarly relevant are data protection concerns, which also have high priority with a mean value of 3.9 and 68 % positive ratings. These areas require special attention and measures to improve reliability and protect sensitive data.
Ethical concerns follow with a mean value of 3.4, with 51 % of respondents considering these important. These values indicate that ethical aspects, such as fair use and potential biases in generated content, are important considerations. Difficulties in optimal use of the tools are also seen as a relevant challenge with a mean value of 3.2 and 47 % positive ratings. A factor here could be that users have difficulties in fully exploiting the potential of the tools, possibly due to operational complexity or insufficient support.
Other challenges, such as lack of personalization or adaptability (mean 2.9) and lack of further training opportunities (mean 2.8), show medium relevance values with 35 % and 36 % ratings in the relevant or very relevant range respectively. These areas could be addressed through targeted training and improved customization options. Technical problems (mean 2.3) and acceptance within the institution (mean 2.6) are seen as less relevant, with only 12 % and 24 % positive ratings respectively. Thus, it can be stated that most users do not experience serious technical difficulties and the acceptance of the tools within the institutions is relatively high.
From 2023 to 2024, technical problems decreased notably (from 24 % to 12 %), while concerns about factual accuracy and reliability remained the dominant challenge (72 %) alongside growing data protection concerns (up from 52 % to 68 %). The data shows increased difficulties in optimal tool usage (36 % to 47 %) and lack of personalization (20 % to 35 %), suggesting that as technical barriers diminish, implementation and customization challenges become more prominent. Notably, the need for training opportunities has grown (20 % to 36 %), while ethical concerns increased moderately (42 % to 51 %), indicating a shift from technical to practical and ethical considerations in AI adoption (Figure 7).
4.4 Evolution of priorities and expectations
Looking at the reported needs and goals, we can see clear preferences for certain aspects of using generative AI tools. Time saving in content creation is the most important goal, with a mean value of 4.5. A total of 88 % of respondents rate this aspect as important (values greater than three). This underscores the importance of efficiency in daily communication work. Similarly high is the increase in communication efficiency, rated with a mean value of 4.3; 82 % of participants see this as an important benefit of AI tools, indicating a desire for optimized and accelerated communication processes.
The simplification of work processes is also a central need, with a mean value of 4.2 and 80 % positive ratings. This shows that users expect AI tools to help simplify workflows and reduce administrative effort. Improving communication quality is considered moderately important, with a mean value of 3.2 and 44 % positive ratings. This suggests that while the quality of communication is important, it is not prioritized as highly as efficiency and time aspects. Less important are expanding the reach of communication (mean 2.8) and personalizing communication (mean 2.4), which are considered important or very important by only 27 % and 18 % of respondents respectively. These areas seem to play a lesser role in the current use of AI tools.
The comparative data on needs and goals for generative AI tools reveals a dramatic increase in efficiency-focused priorities, with efficiency in communication jumping from 49 % to 82 % and time-saving rising from 73 % to 88 % between 2023 and 2024. Quality improvement and communication reach showed notable increases (from 15 % to 44 % and 9 % to 26 % respectively), while personalization saw a substantial rise from just 2 % to 18 %, suggesting growing sophistication in AI tool usage. With 80 % of respondents in 2024 citing process simplification as important, the data indicates a clear shift toward viewing AI tools as integral to streamlining communication workflows rather than just experimental technology (Figure 8).
This is also reflected in respondents’ expectations regarding genAI. In an open question, survey participants were asked what important changes they expect in university communication through generative AI tools in the coming years. There were 34 responses, which were thematically analyzed and summarized. Efficiency gains and work facilitation are the most anticipated benefits (n=14). However, concerns about quality and skepticism persist, particularly regarding the use of platitudes and filler words in AI-generated content (n=6). Respondents also anticipate improvements in multilingual communication, internationalization (n=6), and personalized, target group-specific messaging (n=5). Some foresee a shift in communicator roles towards content curation rather than creation (n=4).
While efficiency is seen as the primary opportunity, notable risks are identified. These include potential quality loss, misinformation, and increased need for fact-checking (n=10). Data protection, privacy concerns (n=7), and the potential loss of institutional individuality and creativity (n=5) are also noted as risks. Interestingly, larger, technically-focused public universities appear slightly more open to AI adoption than smaller, specialized institutions. However, across all institution types, a balanced view emerges, recognizing both the potential benefits and challenges of integrating AI into university communication strategies.
A comparative analysis of survey responses between 2023 and 2024 reveals a shift from general concerns about AI adoption to specific implementation challenges in university communications. While efficiency gains remained the primary benefit (n=14 in 2024 vs. n=12 in 2023), new opportunities emerged around multilingual communication and personalized audience targeting. Notably, concerns about job displacement (n=4) and loss of personal interaction (n=5), both present in 2023, disappeared in 2024. However, data privacy concerns intensified (n=7, up from n=4), while quality concerns remained consistent (n=10 in both years). The nature of these concerns evolved from general skepticism to specific issues around factual accuracy and content authenticity.
New considerations in 2024 included resource constraints (n=2) and integration of AI topics into institutional communications (n=2). This evolution suggests a maturation in institutional understanding of AI technologies, characterized by more practical implementation considerations compared to the broader concerns of 2023. The shift aligns with the general trajectory of technology adoption in higher education, where initial apprehension gives way to more practical implementation considerations.
5 Discussion
The findings reveal a substantial maturation in the adoption and integration of generative AI tools in German university communications between 2023 and 2024. Analyzing these changes through our theoretical framework provides insights into how AI integration has evolved. Addressing our first research question about AI acceptance and use development (RQ1), the most striking finding is the nearly threefold increase in regular text generation tool usage, with ChatGPT leading this trend. The dramatic increase (22 % to 59 %) exemplifies TAM’s technology acceptance cycle: as perceived ease of use improved through exposure and training (evidenced by the doubling of training programs), perceived usefulness increased, creating a positive feedback loop that accelerated adoption. However, UTAUT’s emphasis on facilitating conditions helps explain the persistent public-private adoption gap — private institutions’ greater autonomy in implementing support structures creates more favorable conditions for AI integration, while public institutions’ complex stakeholder obligations and regulatory requirements create friction in the acceptance cycle.
Regarding impact on communication strategies (RQ2), the qualitative responses reveal a crucial evolution from basic content generation to strategic applications. New use cases in research-backed content development, crisis communication preparation, and cultural adaptation of multilingual content indicate growing sophistication. This shift, combined with substantially increased adaptability regarding genAI, supports UTAUT’s performance expectancy predictions. However, only a few institutions report changes in team roles and responsibilities despite many identifying an increased need for technical expertise and training. STS theory suggests that technological adoption occurs through a process of mutual adaptation between technical and social systems [Leonardi, 2011]. Our findings of limited structural changes despite increased technical expertise needs illustrate this mutual adaptation process: rather than drastically reorganizing team structures, institutions are gradually evolving existing roles to accommodate AI capabilities. This aligns with the concept of ‘interpretive flexibility’ where organizations shape technology use to fit existing structures while incrementally adjusting those structures [Bijker et al., 1987; Orlikowski, 1992]. The persistence of traditional team roles alongside growing technical demands reflects what Bijker et al. [1987] term ‘socio-technical configurations’ — relatively stable arrangements that balance innovation with institutional continuity.
Examining the evolution of ethical and data protection challenges (RQ3), we observe a notable shift from technical to practical concerns. While technical barriers have diminished substantially, data protection concerns have intensified, becoming the dominant challenge alongside factual reliability. This aligns with STS theory’s emphasis on embedded social values in technological adoption [Bijker et al., 1987; Pinch & Bijker, 1984], as institutions grapple with implementation rather than technical hurdles.
Finally, addressing new challenges and assessments (RQ4), institutional responses show clear maturation through increased training offerings, formal guidelines, and strategic initiatives. Qualitative responses reveal a marked shift from general concerns about job displacement to specific implementation challenges around resource constraints and genAI integration. The dramatic increase in efficiency-focused priorities, alongside new emphasis on quality improvement and personalization, suggests a transition from viewing genAI as experimental technology to seeing it as an integral tool for workflow optimization.
The interplay between individual acceptance factors (TAM/UTAUT) and organizational adaptation processes (STS) helps explain the observed pattern of rapid tool adoption alongside gradual structural change. While individual users quickly embrace tools that demonstrate clear utility (following TAM’s usefulness principle), organizational structures evolve more slowly through what STS theory describes as a process of negotiation between technical capabilities and existing social arrangements. This theoretical synthesis helps explain why we see high individual-level adoption metrics alongside relatively conservative organizational transformation.
Several limitations should be considered when interpreting these findings: while the study achieved a satisfactory response rate of 25 %, the slight underrepresentation of larger universities (over 10,000 students) and slight overrepresentation of artistic institutions may affect the generalizability of results across the German higher education landscape. The reliance on self-reported data from communication department heads, while providing valuable insights into organizational decision-making, may be subject to social desirability bias and potentially overestimate the sophistication of AI tool implementation. Finally, both the cross-sectional nature of the data and its focus on German institutions — with their specific regulatory and organizational characteristics — limit our ability to draw conclusions about the temporal development of AI adoption and its manifestation in other national contexts.
6 Conclusions
The integration of generative AI tools in university communication requires a comprehensive approach that considers technological capabilities, operational needs, and the socio-technical environment, as evidenced by the pivotal shifts between 2023 and 2024. The study’s findings confirm varied adoption patterns across institution types, with private institutions showing more frequent and diverse use of genAI tools compared to public ones. Satisfaction with AI tools is moderate, with persistent challenges in factual accuracy and data protection. These factors substantially influence acceptance and usage. The primary drivers for genAI adoption are increased efficiency and time savings, with respondents reporting notable improvements in these areas. Many institutions have established internal debates, working groups, and training programs for genAI tools, with training offerings more than doubling and formal guidelines tripling between 2023 and 2024, highlighting the importance of organizational support for successful integration. However, data protection and quality concerns remain central issues, necessitating careful management of genAI tool integration.
The study highlights a higher education communication landscape undergoing pivotal transformation, with genAI tools becoming increasingly integral yet still seeking a clearly defined role. Communication departments face the challenge of leveraging genAI’s efficiency without compromising quality or individuality. This may involve deploying genAI for routine tasks, such as specialized AI chatbots, while reallocating the time saved to creative and strategic activities. Following STS theory’s emphasis on mutual adaptation, we can expect the next phase of genAI integration to produce more sophisticated socio-technical arrangements where AI capabilities and organizational practices co-evolve. UTAUT’s performance expectancy construct suggests that as facilitating conditions mature, institutions will develop novel hybrid approaches that transcend current efficiency-focused applications toward more strategic combinations of AI and human expertise. Communication departments would benefit from establishing clear guidelines that anticipate these evolving dynamics. Further research should examine how varying organizational structures shape the development of such hybrid approaches and their implementation across different institutional contexts.
A Data availability
The dataset for the 2024 wave is available here: https://doi.org/10.5281/zenodo.12166389.
The dataset for the 2023 wave is available here: https://doi.org/10.5281/ZENODO.10246987.
B Survey questionnaire
Translated from German original.
C Additional tables
References
Alvarez, A., Caliskan, A., Crockett, M. J., Ho, S. S., Messeri, L., & West, J. (2024). Science communication with generative AI. Nature Human Behaviour, 8, 625–627. https://doi.org/10.1038/s41562-024-01846-3
Athaluri, S. A., Manthena, S. V., Kesapragada, V. S. R. K. M., Yarlagadda, V., Dave, T., & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus. https://doi.org/10.7759/cureus.37432
Bhattacharyya, M., Miller, V. M., Bhattacharyya, D., & Miller, L. E. (2023). High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus. https://doi.org/10.7759/cureus.39238
Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (1987). The social construction of technological systems: new directions in the sociology and history of technology. MIT Press.
Biyela, S., Dihal, K., Gero, K. I., Ippolito, D., Menczer, F., Schäfer, M. S., & Yokoyama, H. M. (2024). Generative AI and science communication in the physical sciences. Nature Reviews Physics, 6, 162–165. https://doi.org/10.1038/s42254-024-00691-7
Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: theory and results [Ph.D. thesis]. Massachusetts Institute of Technology. http://hdl.handle.net/1721.1/15192
De Silva, D., Mills, N., El-Ayoubi, M., Manic, M., & Alahakoon, D. (2023). ChatGPT and generative AI guidelines for addressing academic integrity and augmenting pre-existing chatbots. 2023 IEEE International Conference on Industrial Technology (ICIT), 1–6. https://doi.org/10.1109/icit58465.2023.10143123
Dijkstra, A. M., de Jong, A., & Boscolo, M. (2024). Quality of science journalism in the age of Artificial Intelligence explored with a mixed methodology (A. Gesser-Edelsburg, Ed.). PLOS ONE, 19, e0303367. https://doi.org/10.1371/journal.pone.0303367
Dunn, A. G., Shih, I., Ayre, J., & Spallek, H. (2023). What generative AI means for trust in health communications. Journal of Communication in Healthcare, 16, 385–388. https://doi.org/10.1080/17538068.2023.2277489
Elbadawi, M., Li, H., Basit, A. W., & Gaisford, S. (2024). The role of artificial intelligence in generating original scientific research. International Journal of Pharmaceutics, 652, 123741. https://doi.org/10.1016/j.ijpharm.2023.123741
Elken, M., Stensaker, B., & Dedze, I. (2018). The painters behind the profile: the rise and functioning of communication departments in universities. Higher Education, 76, 1109–1122. https://doi.org/10.1007/s10734-018-0258-x
Entradas, M., Bauer, M. W., Marcinkowski, F., & Pellegrini, G. (2023). The communication function of universities: is there a place for science communication? Minerva, 62, 25–47. https://doi.org/10.1007/s11024-023-09499-8
Entradas, M., Marcinkowski, F., Bauer, M. W., & Pellegrini, G. (2023). University central offices are moving away from doing towards facilitating science communication: a European cross-comparison (R. Wolniak, Ed.). PLOS ONE, 18, e0290504. https://doi.org/10.1371/journal.pone.0290504
Fähnrich, B., Metag, J., Post, S., & Schäfer, M. S. (2019). Hochschulkommunikation aus kommunikationswissenschaftlicher Perspektive. In B. Fähnrich, J. Metag, S. Post & M. S. Schäfer (Eds.), Forschungsfeld Hochschulkommunikation (pp. 1–21). Springer VS. https://doi.org/10.1007/978-3-658-22409-7_1
Fürst, S., Vogler, D., Sörensen, I., & Schäfer, M. S. (2022). Communication of higher education institutions: historical developments and changes over the past decade. Studies in Communication Sciences, 22. https://doi.org/10.24434/j.scoms.2022.03.4033
Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). A survey of generative AI applications (version 2). https://doi.org/10.48550/arXiv.2306.02781
Henke, J. (2023). Hochschulkommunikation im Zeitalter der KI: Erste Einblicke in die Nutzung und Perspektiven generativer KI-Tools. Germany, Institut für Hochschulforschung (HoF). https://www.hof.uni-halle.de/web/dateien/pdf/ab_122.pdf
Henke, J. (2024). Navigating the AI era: university communication strategies and perspectives on generative AI tools. JCOM, 23, A05. https://doi.org/10.22323/2.23030205
Kehm, B. M. (2018). Higher education systems and institutions, Germany. In P. Teixeira & J. C. Shin (Eds.), Encyclopedia of international higher education systems and institutions (pp. 1–10). Springer. https://doi.org/10.1007/978-94-017-9553-1_369-1
Könneker, C. (2024). The challenge of science communication in the age of AI. Stanford Social Innovation Review. https://doi.org/10.48558/5JNC-WA59
Leonardi, P. M. (2011). When flexible routines meet flexible technologies: affordance, constraint and the imbrication of human and material agencies. MIS Quarterly, 35, 147–167. https://doi.org/10.2307/23043493
Lopezosa, C., Codina, L., Pont-Sorribes, C., & Vállez, M. (2023). Use of generative artificial intelligence in the training of journalists: challenges, uses and training proposal. El Profesional de la información, e320408. https://doi.org/10.3145/epi.2023.jul.08
McGowan, A., Gui, Y., Dobbs, M., Shuster, S., Cotter, M., Selloni, A., Goodman, M., Srivastava, A., Cecchi, G. A., & Corcoran, C. M. (2023). ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search. Psychiatry Research, 326, 115334. https://doi.org/10.1016/j.psychres.2023.115334
Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627, 49–58. https://doi.org/10.1038/s41586-024-07146-0
OpenAI. (2023). GPT-4 technical report. https://doi.org/10.48550/arXiv.2303.08774
Orlikowski, W. J. (1992). The duality of technology: rethinking the concept of technology in organizations. Organization Science, 3, 398–427. https://doi.org/10.1287/orsc.3.3.398
Peters, H. P. (2022). The role of organizations in the public communication of science — early research, recent studies and open questions. Studies in Communication Sciences, 22. https://doi.org/10.24434/j.scoms.2022.03.3994
Pinch, T. J., & Bijker, W. E. (1984). The social construction of facts and artefacts: or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science, 14, 399–441. https://doi.org/10.1177/030631284014003004
Prillaman, M. (2024). Is ChatGPT making scientists hyper-productive? The highs and lows of using AI. Nature, 627, 16–17. https://doi.org/10.1038/d41586-024-00592-w
Rawte, V., Sheth, A., & Das, A. (2023). A survey of hallucination in large foundation models. https://doi.org/10.48550/arXiv.2309.05922
Ray, P. P. (2023). ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121–154. https://doi.org/10.1016/j.iotcps.2023.04.003
Royal Society. (2024). Science in the age of AI: how artificial intelligence is changing the nature and method of scientific research. https://royalsociety.org/-/media/policy/projects/science-in-the-age-of-ai/science-in-the-age-of-ai-report.pdf
Schäfer, M. S. (2023). The notorious GPT: science communication in the age of artificial intelligence. JCOM, 22, Y02. https://doi.org/10.22323/2.22020402
Stone, J. A. (2023). Artificial Intelligence-generated research in the literature: is it real or is it fraud? Medical Acupuncture, 35, 103–104. https://doi.org/10.1089/acu.2023.29231.editorial
Tate, T. P., Doroudi, S., Ritchie, D., Xu, Y., & Warschauer, M. (2023). Educational research and AI-generated writing: confronting the coming tsunami. https://doi.org/10.35542/osf.io/4mec3
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., & Lample, G. (2023). LLaMA: open and efficient foundation language models. https://doi.org/10.48550/arXiv.2302.13971
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Quarterly, 27, 425–478. https://doi.org/10.2307/30036540
Wihbey, J. (2024). AI and epistemic risk for democracy: a coming crisis of public knowledge? SSRN Electronic Journal, 4805026. https://doi.org/10.2139/ssrn.4805026
Zhang, S., Heck, P. R., Meyer, M. N., Chabris, C. F., Goldstein, D. G., & Hofman, J. M. (2023). An illusion of predictability in scientific results: even experts confuse inferential uncertainty and outcome variability. Proceedings of the National Academy of Sciences, 120. https://doi.org/10.1073/pnas.2302491120
About the author
Justus Henke is a senior researcher at the Institute for Higher Education Research Halle-Wittenberg (HoF). His research focuses on science communication, science management, the third mission of universities, citizen science, artificial intelligence and university funding.
E-mail: justus.henke@hof.uni-halle.de Bluesky: @justushenke.bsky.social