1 Context and relevance

“Type ‘AI images’ into your search engine and you will notice a pattern.” [Better Images of AI, 2024]1 The pattern suggested here is a predominance of sci-fi inspired and anthropomorphized images of artificial intelligence (AI), such as humanoid robots, cyborgs, or Terminator-like depictions. This “clichéd”2 [Romele, 2022, p. 5] portrayal of AI and the frequent use of robotic or sci-fi imagery have been widely observed and critically discussed by various scholars and researchers [e.g., Meinecke & Voss, 2018; Mustaklem, 2024; Romele, 2022; Schmitt, 2021], as well as by initiatives like the project group “AI Myths”3 [AI Myths, 2025].

In recent years, a lively debate has emerged regarding the lack of variety in AI visualizations and the predominant use of sci-fi imagery. Critics — including researchers such as Daniel Leufer, Arian Prabowo, Merve Hickok, and Beth Singler — argue that inappropriate visual representations of AI can create public misconceptions. Moreover, they suggest that such depictions may distort or impair the public’s understanding of how AI systems function and what their capabilities and limitations are [e.g., Kurenkov, 2019; Mustaklem, 2024]. For example, the NGO Better Images of AI [2024] and the authors of AI Myths [2025] warn that visually relating machine intelligence to human intelligence or portraying AI as robots fosters unrealistic expectations of AI. Additionally, they highlight how such visual representations obscure human accountability in AI development, potentially fueling apocalyptic public imaginaries and fears [Westerlund, 2020].

These concerns are particularly relevant given the crucial role of visuals in the social construction of reality [Lobinger & Geise, 2015; Lucht et al., 2013]. From a constructivist perspective, it is assumed that the visual representation of AI shapes society’s knowledge and perceptions of the technology [Hepp et al., 2017; Kalwa, 2022]. By directly addressing the human sense of sight, visualizations create a sense of reality. Moreover, images of AI render an otherwise invisible technology visible. Through this visibility, they construct a reality that may, at times, be unrealistic [Grittmann & Ammann, 2011; Müller, 2003].

As part of our study, we examine whether the concerns regarding the visualizations of AI are mirrored empirically. We investigate how selected German quality print media visualize and frame articles on AI. Our paper is organized as follows: We first examine the importance of visual media representations of AI as a compelling subject for investigation. Subsequently, we present our theoretical background, give an overview of the current state of research, and derive our research questions. We then delineate our methodological approach and present the results of our analyses. Our paper concludes with a summary and discussion of the findings.

2 Why an analysis of images of AI in news media coverage?

Images are an integral part of journalistic reporting [Alpuim & Ehrenberg, 2023; Geise & Maubach, 2024; Zhai et al., 2024] and generate a high level of attention among recipients, even more than pure text [Geise & Maubach, 2024]. They can be processed quickly and are easy to fixate on and remember [Geise et al., 2015; Geise & Maubach, 2024; Geise & Rössler, 2012; Kong, 2019; Müller, 2003]. Moreover, images are “powerful framing tools” [Rodriguez & Dimitrova, 2011, p. 50; see also Geise et al., 2015] — when textual and visual framing are in conflict, visual frames often prevail [Rodriguez & Dimitrova, 2011].

This is particularly relevant given the public’s fragmented understanding of AI, which is, at best, “patchy” [Nader et al., 2022, p. 713; see also Neudert et al., 2020], despite AI’s increasing integration into daily life. Consequently, mediated images of AI are expected to play a crucial role in shaping public perceptions, influencing not only how AI is understood but also the expectations, fears, and hopes associated with it [Cave et al., 2018; Kong, 2019].

According to Gamson et al. [1992], images produced and distributed by mass media are particularly influential in shaping how recipients construct meaning about social and political issues. This view that people’s knowledge and ideas about AI are influenced by what they receive in media coverage is also expressed in news coverage itself [e.g., Naughton, 2019; see also Kalwa, 2022]. Ouchchy et al. [2020] and Nussberger et al. [2022] go even further, suggesting that the news portrayal of AI could even influence AI research and development as well as legislation and regulation.

However, various actors argue that AI visualizations in media coverage are often inappropriate in certain ways:

… if you were writing a news article about apples, you wouldn’t put a photo of a pear at the top. But if you’re reading a story about large language models, you have a photo of a robot at the top, even though there are no robots anywhere near large language models. [Mustaklem, 2024]

Even technology that is per definition not robotics (e. g. artificial intelligence or simple software) is routinely referred to as a robot and illustrated with pictures of humanoid robots that have nothing to do with the technology at the center of the article. [Meinecke & Voss, 2018, p. 211]

Moreover, Schmitt [2021] argues that many images of AI are not only unrelated to the news articles and merely decorative but can also be harmful to public perception. Instead of fostering public debate on critical issues to inform policymaking for emerging technologies, these images reinforce narratives of robotic dominance or escapist utopian visions.

3 Multimodal framing

Assuming that media representations of issues may shape how they are perceived and evaluated, our study is theoretically grounded in framing theory [Tewksbury & Scheufele, 2009]. A frame can be described as an interpretation scheme, a perspective from which we view a problem or event [Potthoff, 2012]. Accordingly, framing in news production describes the process of selecting and emphasizing certain aspects of an issue while others recede into the background, thereby suggesting certain classifications, evaluations, or decisions without evaluating explicitly [Reese, 2007]. The resulting interpretive frameworks provided by the media lie at the core of the framing approach; they can simplify and profoundly influence how information is processed [Geise & Maubach, 2024].

Framing can take place on different modal levels, for example on the textual or visual level. In a multimodal news piece, visual framing — often presented through press photographs or short videos — interacts with textual elements such as headlines, subheadings, captions, and the article body to form a cohesive multimodal media unit [Geise & Xu, 2024]. However, despite an increasing number of framing studies, framing has primarily been analyzed with a focus on textual media messages [Geise et al., 2015; Geise & Xu, 2024; Wessler et al., 2016]. The “question of how issues are framed through images that stand alone or accompany text has remained relatively under-researched” [Rodriguez & Dimitrova, 2011, p. 49; see also Jungblut & Zakareviciute, 2019; Brause et al., 2023]. Research on multimodal framing — understood in our context as the emphasis on particular aspects of perceived reality in a communicative setting using both visual and textual elements [based on Geise et al., 2015] — remains scarce4 [Jungblut & Zakareviciute, 2019; Powell et al., 2015]. However, images in journalistic reporting mostly appear in multimodal contexts, accompanying the spoken or written word rather than standing alone [Arifin & Lennerfors, 2022]. Thus, it seems appropriate to combine both modalities in integrative analyses [Jungblut & Zakareviciute, 2019; Wessler et al., 2016].

Through our analysis, we want to find out whether certain visualizations dominate German print media coverage on AI and whether they are factually related to the AI technologies they are meant to illustrate. Therefore, we look for so-called image-to-text gaps, capturing divergences between the perspectives communicated through written content and accompanying images in multimodal news articles [Geise & Xu, 2024].

4 State of research and research questions

Despite the importance of the subject, there are, to our knowledge, very few (quantitative) analyses available on AI visualizations in media coverage [see also Zhai et al., 2024; Brause et al., 2023].

Textual communication about AI (in general) has already been analyzed [e.g., Brennen et al., 2018; Kieslich et al., 2022; Obozintsev, 2018; Ouchchy et al., 2020; Roe & Perkins, 2023; Sun et al., 2020; Vergeer, 2020]. Images of AI in news coverage were — if at all — mostly recorded as additions. To our knowledge, the existing research on this topic is limited to a recently published automated image analysis of AI representations in news articles from the website AI Topics [Zhai et al., 2024], as well as two conference proceedings: one examining AI representations in news photographs in the U.S. and China [Kong, 2019], and another analyzing the visual reporting on AI in eight German national quality media following the launch of ChatGPT [Grittmann & Brink, 2024]. In their automated news image analysis, Zhai et al. [2024] found robots to be the most commonly used image type, increasing over time (2015–2019). Further, they identified three dominant visual frames in news coverage, namely a psychological distance frame (which, for example, shows AI applications in our daily lives and represents AI through the physical traits of products), a so-called dialectical relationships frame (which portrays the relationship between AI and humans, for example AI as friend or rival), and a sensationalism frame (determined, for example, by the colors used in the images, especially red and blue, or by celebrities shown).5 According to Kong’s presentation [2019], the analyzed news images paid more attention to humans than to machines. Humans were present in 72 percent (New York Times) and 85 percent (China Daily) of the images of AI in the analyzed newspapers. If AI applications were visualized in the images, humanoid robots were the typical form — which, as described in the introduction, can be seen as problematic. In sum, the images analyzed by Kong [2019] conveyed a rather positive attitude towards AI.
In their project, Grittmann and Brink [2024] plan a quantitative image type analysis and an iconographic analysis. However, the study has not yet been completed.

Additionally, Meinecke and Voss [2018], in their paper “Robotics in Science Fiction and Media Discourse”, dedicate a subchapter to robots in media coverage. They find that robots are frequently used to illustrate AI in general; however, their findings do not appear to be based on a systematic content analysis but rather on selected examples.

Beyond that, to our knowledge, there are only a few studies that analyze (mostly fictional) visual AI narratives in literature and film using qualitative research approaches [e.g., Cave et al., 2018; Hermann, 2023; Xanke & Bärenz, 2012].

Our study seeks to address this gap. We argue, following Pentzold et al. [2018], that visualizations of AI articles should be considered an object of analysis in their own right, with their inherent representational logic. In journalism, depicting invisible technologies and providing tangible representations of such phenomena is particularly challenging [Pentzold et al., 2018]. This challenge also fuels the discussion on potentially problematic AI visualizations. In light of this, we aim to investigate the following research questions:

RQ1.

How often are articles on artificial intelligence illustrated in selected German print media?

RQ2.

Which visualization types are predominantly used in German print media coverage?

RQ3.

What can be seen in the images (pictorial objects) attached to German print media articles on AI?

RQ4.

To what extent do the pictorial objects match the respective AI that is the subject of the news article?

RQ5.

Which multimodal frames can be identified in German news media coverage about AI?

Furthermore, with our research, we want to take into consideration the rapid technological development of AI reinforced through the achievement of significant milestones in the recent past, such as the development of “a machine that could usefully work on the problem of self-improvement.” [Solomonoff, 1985, p. 150] Therefore, we pose the following additional research question:

RQ6.

What changes can be observed in these aspects over time?

5 Method

To answer our research questions, our study uses a mixed-methods design containing a qualitative as well as a quantitative visual and multimodal content analysis [according to Grittmann & Lobinger, 2011] of illustrated German national print media articles on AI.

While content analysis is one of the most frequently used methods in communication studies, visual content analysis remains an underexplored area of research [Rössler, 2010], despite the significant increase in visual elements in media coverage since the 19th century [Wilke, 2011; Geise & Rössler, 2012]. Consequently, several researchers lament the marginal status of images as a central research object [Grittmann & Lobinger, 2011; Schnettler & Bauernschmidt, 2018]. Our study thus contributes to expanding the state of research — on multimodal content analysis as well as on visualizations of AI.

Our analysis focuses on two time periods: January 1 to December 31, 2019, and November 1, 2022, to October 31, 2023. We selected 2019 because it was designated as the ‘Science Year of Artificial Intelligence’ by the German Federal Ministry of Education and Research [Bundesministerium für Bildung und Forschung, 2021], reflecting an effort to position AI as a key technological and societal topic and marking the early stages of widespread public engagement with AI in Germany. The second period was chosen as it encompasses one year following the introduction of ChatGPT in Germany in November 2022, which triggered extensive media coverage, changed the media’s narratives on AI [Ryazanov et al., 2025] and increased public engagement with AI by making it accessible to a broad audience [e.g., Kero et al., 2023]. This temporal juxtaposition allows for an analysis of how media visualizations have evolved in response to these milestones. By analyzing these two time periods, we aim to identify changes in visualization strategies and explore how technological advancements may influence media framing.

As news media titles, we chose six national quality newspapers and news magazines in Germany, representing the political spectrum from left- to right-leaning media [Scheufele & Engelmann, 2013], namely: Süddeutsche Zeitung (SZ), Frankfurter Allgemeine Zeitung (FAZ), Die Welt (DW), Die Tageszeitung (taz), Der Spiegel, and Die Zeit. We focus our analysis on national quality newspapers and news magazines for two key reasons. Firstly, quality newspapers/magazines are considered to be “leading media”, widely read by political and business leaders as well as journalists. Secondly, the quality news media examined in this study dedicated significant attention to the topic of AI during the analysis period, whereas initial investigations into other types of media, such as tabloid newspapers, revealed a scarcity of illustrated AI-related articles in the analyzed time periods. As a result, national daily newspapers and magazines emerged as the most suitable choice for our analysis.

Within the database wiso, all articles during the analysis periods that contained the keywords “artificial intelligence” or “AI”6 in their headline or subtitle were initially selected (to preferably analyze only those articles that deal with AI as a main topic).7 Since our focus was solely on illustrated articles, the PDF versions of the print articles were subsequently manually scanned for images. If no PDF versions were available in the database,8 we retrieved the identified articles from the original print editions stored at the Badische Landesbibliothek or downloaded the missing PDF files from the media outlets’ web archives. Our search resulted in n = 589 illustrated articles with n = 818 images in total (some articles contained more than one image).

Following Geise and Rössler [2012], we distinguished between several dimensions of image analysis in both study periods: On the representation level, we manually coded formal features like medium, date, section, and visualization type of the image. On the object level, we coded the visual objects at different levels of detail. Per image, various image objects (such as human, robot, or computer) with multiple specifying subcategories could be coded, for example if an image shows a human and a robot at the same time. On the tendency and meaning level, we coded the main AI subject covered in the article, using inductive coding of the article’s headlines, subtitles, and text, as well as multimodal frames.

To enable such a quantitative coding of frames, several multimodal frames were preliminarily identified within the articles published before ChatGPT using a qualitative iconographic-iconological approach [Panofsky, 1979; see also Grittmann & Ammann, 2011; Geise & Rössler, 2012]. This methodological three-step is used for the systematic interpretation of images and highlights not their forms or motifs, but the central pictorial content of an image. In the first step, the so-called pre-iconography (“primary subject”), we focused on existing semiotic image objects, such as persons, non-human objects, or actions (what is depicted?) to provide an objective description of the existing image objects, drawing on everyday experiential knowledge [Müller, 2003]. Secondly, we noted the iconography (“secondary subject”), which reconstructs the thematic embedding of the image by deriving further information from the article title and subtitle, image headline, and caption. Thirdly, the iconographic analysis was expanded by deriving, in an interpretative act, the actual meaning and central image statement, which we summarized (the so-called iconology) [Panofsky, 1975].

For the quantitative coding, the qualitatively identified frames were translated into binary frame variables (e.g.: Are potential uses, chances or opportunities associated with AI visually shown or textually mentioned? — Yes/No) and then deductively coded by referring to the image and its caption as well as the article headline, teaser, and subtitle (multimodal approach) for images published before and after ChatGPT.

Since it was both theoretically plausible and empirically evident during the recording of these frames that a single AI article could affirm multiple frame variables (e.g., addressing both opportunities and risks), we applied hierarchical cluster analysis in a third and final step, following the approach of Matthes and Kohring [2004]. This allowed us to identify frame variable combinations that could indicate overarching composed frames in both study periods. The squared Euclidean distance was employed as a commonly used proximity measure to assess the similarity or dissimilarity between the variables to be clustered [Jain et al., 1999]. As the clustering algorithm, the Ward method was selected as the agglomerative approach. In agglomerative methods, the data points are first considered individually and then gradually combined into clusters (bottom-up method) [Universität Zürich, 2023; Ward, 1963]. We organized the illustrated articles into clusters that minimized intra-cluster variance while maximizing inter-cluster differences. To determine the optimal number of clusters, we considered both conceptual criteria (e.g., what is meaningful in the context of our study? Here we oriented ourselves to the qualitatively determined number of frames) and statistical measures. Statistically, the optimal number of clusters was determined using the “allocation overview” (agglomeration schedule) provided by SPSS. This overview outlines the step-by-step merging of clusters, with the “coefficients” column indicating the level of heterogeneity combined at each step. As the clustering process progresses, the coefficient values increase, reflecting the rising heterogeneity. The number of clusters was determined by identifying the point where the increase in heterogeneity between successive steps was disproportionately large for the first time.
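The clustering procedure described above (Ward linkage, agglomeration coefficients, elbow criterion) can be sketched with scipy instead of SPSS. The article data below are randomly generated placeholders, not the study’s data, and scipy’s Ward implementation works on Euclidean rather than squared Euclidean distances, so coefficient values differ from SPSS output while the merge order is equivalent:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: illustrated articles; columns: binary frame variables
# (chances, risks, competition, cultural debate, development, human role model).
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(30, 6)).astype(float)

# Agglomerative (bottom-up) Ward clustering.
Z = linkage(X, method="ward")

# The third column of Z plays the role of SPSS's "coefficients": the
# heterogeneity merged at each step, monotonically increasing.
coefficients = Z[:, 2]

# Elbow criterion: cut before the first/largest disproportionate jump.
jumps = np.diff(coefficients)
n_clusters = len(X) - (int(np.argmax(jumps)) + 1)

labels = fcluster(Z, t=n_clusters, criterion="maxclust")
```

With real coding data, `n_clusters` would be cross-checked against the conceptually expected number of frames, as the paper describes.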

By applying this mixture of qualitative, quantitative and cluster methods, we relate to several previous studies in which frames are derived interpretatively from a selection of the study material in a first step and then quantified through content analysis [e.g., Eilders & Lüter, 2000; Meyer, 1995].

The quantitative content analysis was conducted for both image samples (before and after ChatGPT) by a team of three coders. They underwent multiple training sessions and were provided with a detailed coding manual. The complete codebook for our analysis is attached as an appendix. The intercoder reliability values ranged from 0.74 to 1 (Krippendorff’s alpha) for formal features and pictorial objects, and from 0.80 to 0.95 (Holsti9) for AI subject and frame variables, except for the frame variable chances (Holsti coefficient of 0.64). However, we did not analyze the frame variables individually, but always as a “bundle” together with all other frame variables within the framework of the cluster analysis. Since the overall coefficient (average value) for all frame variables is 0.86 (Holsti), we decided to proceed with these scores.
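The Holsti coefficient used here is simply the share of agreeing coding decisions between two coders (with three coders, pairwise values are typically averaged). A minimal sketch with invented coder decisions, not the study’s data:

```python
def holsti(coder_a, coder_b):
    """Holsti's reliability: 2 * agreements / (decisions of A + decisions of B)."""
    assert len(coder_a) == len(coder_b)
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return 2 * agreements / (len(coder_a) + len(coder_b))

# Two coders' binary decisions on one frame variable for eight articles
# (illustrative values only):
coder_1 = [1, 0, 1, 1, 0, 1, 0, 1]
coder_2 = [1, 0, 1, 0, 0, 1, 0, 1]
reliability = holsti(coder_1, coder_2)  # 7 of 8 decisions agree -> 0.875
```

Unlike Krippendorff’s alpha, Holsti does not correct for chance agreement, which is why the paper reports alpha for the less skewed formal and object categories.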

6 Results

6.1 Amount of visualized AI news coverage

Regarding our first research question, our data show that within our analyzed news media sample in 2019, n = 125 visualized articles (containing at least one article image) on the main topic of AI were published, while in the second year of analysis we counted n = 464 articles. Thus, the quantity of visualized AI articles increased considerably in 2022/23 (χ2(1) = 195.11, p < 0.05). Consequently, the number of images within these news articles on AI more than quadrupled (from 150 images in 2019 to 668 images in 2022/23; χ2(1) = 328.02, p < 0.05). On average, each visualized article on AI contained 1.2 images in 2019 and 1.44 images in 2022/23 (RQ6).
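The reported χ² values are numerically consistent with one-dimensional goodness-of-fit tests of the two periods’ counts against equal expected frequencies (our reading of the test variant, which the text does not spell out). A sketch recomputing them with scipy:

```python
from scipy.stats import chisquare

# Goodness-of-fit against equal expected counts in both periods.
articles = chisquare([125, 464])  # visualized AI articles, 2019 vs. 2022/23
images = chisquare([150, 668])    # images within those articles

print(round(articles.statistic, 2))  # 195.11, matching the reported value
print(round(images.statistic, 2))    # 328.02, matching the reported value
```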

6.2 Visualization types in AI news coverage

Regarding visualization types (RQ2), in 2019, the vast majority of images of AI — approximately two thirds — were photographs, followed by illustrations (visualizations drawn by hand or digitally) (see Figure 1). Together, these two visualization types make up 86 percent of all images of AI in German news media coverage. In 2022/23, the dominance of these two visualization types decreased (74%) and overall, the types of visualization became more diverse (RQ6). In particular, photorealistic images, data visualizations and collages have significantly increased (χ2(8) = 35.26, p < 0.001).

PIC

Figure 1: Visualization types of images of AI in German print media coverage (in %).

6.3 Pictorial objects in AI news coverage

Regarding the question of what can be seen in the images (RQ3), we coded n = 219 pictorial objects within the 150 images from 2019 and n = 1009 pictorial objects within the 668 images from 2022/23.

Our data show that AI was most often illustrated by pictures of humans, not visualizing the AI itself but usually the protagonists of the article (44% in 2019; 45% in 2022/23; see also Kong, 2019), followed by robots (16% in 2019; 7% in 2022/23) or computers (9% in 2019; 13% in 2022/23) (see Figure 2). In more detail, the human subjects pictured were most often persons with different characteristics (e.g., gender or profession). Looking at those, we found slightly more male persons10 (50% in 2019; 62% in 2022/23), but as the difference between male and female actors appears rather small, one cannot assume a male dominance or male bias [as presumed by Jeong Gu, 2020; or Roesler et al., 2023, for example] in the visualizations of articles on AI. Regarding the professions, we identified scientists (19% in 2019; 21% in 2022/23) and managers as well as people associated with culture, e.g., musicians and artists (18–20%), as the most common visualized professions.

These visualizations picturing the human protagonists of the article are followed by visualizations that tried to capture AI directly (37% vs. 23%).

Surprisingly, robot images played a significantly less important role in the later period (declining from 16% to 7%; χ2(10) = 44.63, p < 0.001), and the relationship between AI visualized as robots and AI pictured as computer objects changed in favor of the computer depictions (RQ6). Out of 1009 pictorial objects in 2022/23, only n = 67 robots were identified (7%), nearly exclusively humanoid cyborgs (97%). This dominance of human-like androids among images that contain robots is observable in both periods, but the variety of these depictions decreased over time (80% humanoid cyborgs in 2019). The decline in robot pictures may be linked to new contacts with AI technologies following the publication of ChatGPT and other tools based on Large Language Models (LLMs). In this light, journalists’ imagination of what AI is and what it looks like might have shifted from picturing AI as human-like, autonomous robots to seeing it as a “simple” computer program that can be used in everyday life. At least, this is suggested by the pictorial objects in 2022/23, where ChatGPT is the most visualized computer element alongside computer chips (both 19%).

PIC

Figure 2: Pictorial objects of the visualizations (in %).

6.4 Image-to-text gaps within AI news coverage

To assess whether the pictorial objects correspond to the AI described in the articles, we examined which types of AI were referenced in the text for the three most frequently used pictorial objects (humans, computer components, and robots). Since human figures, as previously noted, typically depict the protagonists of the articles rather than the AI itself, we cannot identify an image-to-text gap in these cases. Computer components, the second most frequently identified objects in 2022/23 (15%), are used most often to visualize articles thematically focused on Natural Language Processing and Image Processing in both time periods (see Tables 1 and 2; RQ6). Since this seems appropriate in the broadest sense, we cannot speak of a striking image-to-text gap here either. Interestingly, the proportion of illustrated articles that mentioned at least one type of generative AI almost quintupled from 2019 to 2022/23 (11% in 2019 compared to 54% in 2022/23; χ2(44) = 266.35, p < 0.001), in particular by addressing LLMs (69%) (RQ6, see Figure 3). ChatGPT accounts for the largest share of LLMs, capturing nearly a quarter of the n = 833 addressed AI specifications within the visualized articles in 2022/23 (66% of LLM-centered articles). Other LLMs, such as Google Bard (7% of LLM-centered articles) or Meta’s LLaMa (2% of LLM-centered articles), were seldom mentioned in articles about AI.

PIC

Figure 3: Types of AI mentioned in German news media coverage on AI (in %).

In terms of robot images, which are particularly criticized as being inappropriately chosen to visualize certain AI types, we can state that when robot images occur, they most often illustrate articles that revolve around reinforcement learning (in 2022/23 and 2019), medical AI, and image processing (both in 2019) (see Tables 1 and 2). Reinforcement learning is a subfield of machine learning in which an intelligent agent makes decisions within an environment to maximize a cumulative reward [Eßer, 2023]. This method is typically applied in robotics, which is why visualizing articles on this type of AI through robot images seems largely suitable. In the case of image processing, visualizations through robot images seem appropriate if the technology is used to help robots navigate, identify and locate objects, recognize faces, or the like. In the case of medical AI, images of robots only seem appropriate if robot assistants in the operating room are addressed, which was the case in only two articles from 2019. Apart from that, robot images are often used as symbols for AI in general when no specific AI is addressed in the news articles (however, significantly less often in 2022/23 compared to 2019; χ2(10) = 22.94, p = 0.011; RQ6). Articles on generative AI (GenAI) models (e.g., LLMs like ChatGPT or image-generating AI like Dall-E) are relatively seldom visualized with robot images (8% of GenAI articles in 2022/23). Accordingly, our analysis does not support the often-presumed divergence between textual and visual AI coverage, particularly regarding robot images.

PIC
Table 1: Pictorial objects assigned to types of AI in German news media coverage on AI 2019 (in %).

PIC
Table 2: Pictorial objects assigned to types of AI in German news media coverage on AI 2022/23 (in %).

6.5 Multimodal frames in AI news coverage

To address our fifth research question, we conducted a hierarchical cluster analysis of the six coded frame variables (chances, risks, competition, cultural debate, development, human role model) to search for natural groupings in the data. This analysis yielded a total of five groups representing multimodal frame variable combinations in AI news coverage for 2019 (see Figure 4) and seven of such for 2022/23 (see Figure 5).

PIC

Figure 4: Frame variable combinations of AI visualizations in German news media coverage on AI in 2019 (occurrence of combinations in %).

PIC

Figure 5: Frame variable combinations of AI visualizations in German news media coverage on AI in 2022/23 (occurrence of combinations in %).

In both 2019 and 2022/23, the frame variable combination to which most images of AI belong is the “chances frame”, in which the potential of AI (applications) across various aspects of life is highlighted, predominantly portraying social benefits or advantages in a positive or uncritical light.11 Over time, however, we see a small decrease in this frame variable combination (from 27% to 22%12), whereas the so-called “risk frame” (where the social risks of the widespread, hasty, or unthoughtful use of AI applications are addressed) remains relatively stable (22% and 21%13). In 2022/23, however, the frame variable combinations have become more differentiated (RQ6). On the one hand, a new “mixed evaluations frame”, which weighs chances and risks equally (10%), has emerged. Further, the formerly combined frame variables “cultural debate”14 and “role model human being”15 have become two separate combinations in 2022/23. Small decreases can be observed in the “competition frame” (from 17% to 15%16), which addresses national or international competition in AI development, research, and implementation, and in the “development frame” (from 11% to 7%17), which focuses on the technical/scientific aspects of producing new AI applications or further developing existing models.

7 Summary

This study examined how German quality print media visualize articles on artificial intelligence, challenging prevalent assumptions about the dominance of images of humanoid robots and cyborgs in media coverage. By analyzing 818 images across two distinct periods (2019 and 2022/23) in an interpretative-quantifying approach, we demonstrated that in both periods of analysis, humans were the most often visualized pictorial objects (as many visualizations focused on protagonists of the articles). In the later analysis period, robots even played a less important role in visualizing AI than in 2019, while computers became more prominent (RQ3 and RQ6).

Returning to the question posed in our paper’s title, we can thus provide a clear answer: Yes, German print media articles on AI feature a greater variety of visual elements beyond just humanoid robots and cyborgs. Furthermore, our analysis did not reveal striking image-to-text gaps. Most robot images were used in robotic topic contexts, whereas, for example, articles on GenAI models were seldom visualized with robot images (RQ4), neither in 2019 nor in 2022/23 (RQ6).

Further, our analysis showed a marked increase in AI coverage and AI article visualizations in recent years. The number of images within news articles on AI has more than quadrupled (RQ1 and RQ6). At the same time, the types of visualizations in AI news coverage became more diverse: collages, pictograms, and data visualizations appeared and replaced some of the previously predominant photographs (RQ2 and RQ6).

Regarding the identified multimodal frames, five frame variable combinations were found in AI news coverage in 2019 and seven in 2022/23. The “chances frame”, which highlights AI applications as opportunities or advantages for society and social actors, was the most frequently occurring frame in both periods of analysis. However, over time, German news media coverage has increasingly balanced the opportunities and risks of AI applications. Together, the “chances”, “risks”, and “mixed evaluations” frames accounted for more than 50 percent of the analyzed reporting (RQ5 and RQ6).

8 Discussion

Building on the trends identified by Righetti and Carradore [2019] and others [e.g., Fast & Horvitz, 2017], our findings indicate that AI has gained significant public and societal relevance over time, particularly following the introduction of ChatGPT in Germany. Overall, the diversification of visualization types and frames at the later date of analysis indicates an increasingly nuanced18 and more differentiated journalistic reporting in German quality print media on AI. The finding that robots have become less central to the visual representation of AI suggests, on the one hand, a more diverse portrayal and, on the other, a potentially more realistic depiction of existing AI technologies — one that increasingly balances hopes and fears [see also Chuan et al., 2019; Ryazanov et al., 2025].

By shedding light on how AI is visually illustrated in German news media coverage, our study underscores the importance of multimodal framing in potentially shaping public understanding of emerging technologies. Fortunately, we found that German print media generally avoid the image-to-text gaps often criticized in AI visualizations, offering representations that align more closely with the content of the articles. This presumably contributes to a more accurate and balanced public discourse around AI.

However, several limitations of our study should be noted. Methodologically, the coding of individual frame variables yielded low reliability values in some cases (while the aggregated average was satisfactory), which should be kept in mind when interpreting our results.

Additionally, we can only make descriptive statements about the AI visualizations within a small part of German print media coverage, so our findings cannot simply be transferred to other media titles, genres, or countries; this could be a starting point for follow-up studies.

Future research could examine the use of AI-generated images in article illustrations and investigate how such emerging tools might further shape the portrayal and perception of AI in society. As this specific type of AI visualization was not analyzed separately in our study, it presents an avenue for further exploration. Notably, AI-generated images do not necessarily accompany articles dealing with AI; news actors can also choose them to illustrate a variety of other topics, which could be another interesting starting point for future studies.

Based on our research, we recommend that journalists continue to pay attention to the connection between AI visualization and content (for example, not using robots to illustrate LLMs). Additionally, they should reflect on the implicit messages that might be conveyed by a certain type of AI visualization, e.g., a threatening-looking robot or a “neutral” screenshot of the ChatGPT user interface. Both recommendations presuppose, of course, that journalists actually want to visualize AI more often and more accurately. However, Hung [2018] suggests that the concrete content of an image may be secondary to other relevance criteria, such as composition, emotion, or symbolism. Additionally, societal and organizational constraints may limit the available or acceptable options and visual frames for journalists [Thomson et al., 2024].

To support practitioners in this matter, Kurenkov [2019] outlines a list of best practices for news media coverage of AI. This initiative aligns with the work of Better Images of AI [2024], which has developed a comprehensive guide to using images of AI [Dihal & Duarte, 2023] and offers a list of dos and don’ts for improving AI visualization.

Examining the work of these initiatives, we argue that analyzing the visual representation of AI in news media coverage is crucial for society. Our study makes an important contribution to the existing body of research, as no comparable study with a visual focus on AI in news media coverage has been conducted to date.

From a societal perspective, our study illuminates the role of media in constructing narratives about AI, which may influence public expectations, fears, and opportunities associated with this technology. By examining how media representations evolve, we provide insights that are relevant not only to journalism but also to policymakers, educators, and communicators aiming to foster informed and critical engagement with AI. In a broader sense, the study invites reflection on how visual representations of emerging technologies can either reinforce or challenge societal misconceptions, advocating for deliberate, context-sensitive choices in visual media coverage.

Acknowledgments

We thank Paul Klär for support with coding.

During the preparation of this work, the authors used DeepL Translator to support the translation of the original German-language codebook (provided in the appendix) into English. Additionally, DeepL Translator and ChatGPT-4o were used to assist with the translation of individual words and phrases from German to English, as well as for grammar, spelling corrections, and language refinement. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the final publication.

Funding details. The author(s) received no financial support for the research, authorship, and/or publication of this article.

Declaration of interest statement. No financial interest or benefit has arisen from the direct applications of our project.

References

AI Myths. (2025). AI = shiny humanoid robots. https://www.aimyths.org/ai-equals-shiny-humanoid-robots#how-can-we-stop-the-terrible-and-inappropriate-robots

Alpuim, M., & Ehrenberg, K. (2023). Warum Bilder so wirkmächtig sind. Bonn Institute. https://www.bonn-institute.org/news/psychologie-im-journalismus-5

Arifin, A. A., & Lennerfors, T. T. (2022). Ethical aspects of voice assistants: a critical discourse analysis of Indonesian media texts. Journal of Information, Communication and Ethics in Society, 20, 18–36. https://doi.org/10.1108/jices-12-2020-0118

Better Images of AI. (2024). Have you noticed that news stories and marketing material about Artificial Intelligence are typically illustrated with clichéd and misleading images? https://betterimagesofai.org/

Brause, S. R., Zeng, J., Schäfer, M. S., & Katzenbach, C. (2023). Media representations of artificial intelligence: surveying the field. In S. Lindgren (Ed.), Handbook of critical studies of artificial intelligence (pp. 277–288). Edward Elgar Publishing.

Brennen, J. S., Howard, P. N., & Kleis-Nielsen, R. (2018). An industry-led debate: how U.K. media cover artificial intelligence [Fact sheet]. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-v219-d676

Bundesministerium für Bildung und Forschung. (2021). Wissenschaftsjahr 2019. Bundesministerium für Bildung und Forschung. https://www.wissenschaftsjahr.de/2019/indexb6b3.html?id=657

Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., & Taylor, L. (2018). Portrayals and perceptions of AI and why they matter. Apollo-University of Cambridge Repository. https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf

Chuan, C.-H., Tsai, W.-H. S., & Cho, S. Y. (2019). Framing artificial intelligence in American newspapers. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 339–344. https://doi.org/10.1145/3306618.3314285

Dihal, K., & Duarte, T. (2023). Better images of AI: a guide for users and creators. The Leverhulme Centre for the Future of Intelligence; We; AI. https://blog.betterimagesofai.org/wp-content/uploads/2023/02/Better-Images-of-AI-Guide-Feb-23.pdf

Eilders, C., & Lüter, A. (2000). Research note. Germany at war: competing framing strategies in German public discourse. European Journal of Communication, 15, 415–428. https://doi.org/10.1177/0267323100015003009

Eßer, J. (2023). Introduction to reinforcement learning — a robotics perspective. LAMARR Institute for Machine Learning and Artificial Intelligence. https://lamarr-institute.org/blog/reinforcement-learning-and-robotics/

Fast, E., & Horvitz, E. (2017). Long-term trends in the public perception of artificial intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 31, 963–969. https://doi.org/10.1609/aaai.v31i1.10635

Gamson, W. A., Croteau, D., Hoynes, W., & Sasson, T. (1992). Media images and the social construction of reality. Annual Review of Sociology, 18, 373–393. https://doi.org/10.1146/annurev.so.18.080192.002105

Geise, S., Lobinger, K., & Brantner, C. (2015). Fractured Paradigm? Theorien, Konzepte und Methoden der Visuellen Framingforschung: Ergebnisse einer systematischen Literaturschau. In S. Geise & K. Lobinger (Eds.), Visual Framing. Perspektiven und Herausforderungen der Visuellen Kommunikationsforschung (pp. 41–75). Halem.

Geise, S., & Rössler, P. (2012). Visuelle Inhaltsanalyse: ein Vorschlag zur theoretischen Dimensionierung der Erfassung von Bildinhalten. Medien & Kommunikationswissenschaft, 60, 341–451. https://doi.org/10.5771/1615-634x-2012-3-341

Geise, S., & Maubach, K. (2024). Catch me if you can: how episodic and thematic multimodal news frames shape policy support by stimulating visual attention and responsibility attributions. Frontiers in Communication, 9, 1305048. https://doi.org/10.3389/fcomm.2024.1305048

Geise, S., & Xu, Y. (2024). Effects of visual framing in multimodal media environments: a systematic review of studies between 1979 and 2023. Journalism & Mass Communication Quarterly. https://doi.org/10.1177/10776990241257586

Grittmann, E., & Ammann, I. (2011). Quantitative Bildtypenanalyse. In T. Petersen & C. Schwender (Eds.), Die Entschlüsselung der Bilder: Methoden zur Erforschung visueller Kommunikation. Ein Handbuch (pp. 163–178). Halem.

Grittmann, E., & Brink, L. (2024). The new humans, the new power: artificial intelligence in visual news coverage [Paper presentation]. Generative images — generative imageries: challenges of visual communication (research) in the age of AI.

Grittmann, E., & Lobinger, K. (2011). Quantitative Bildinhaltsanalyse. In T. Petersen & C. Schwender (Eds.), Die Entschlüsselung der Bilder: Methoden zur Erforschung visueller Kommunikation. Ein Handbuch (pp. 147–162). Halem.

Guenther, L., Brüggemann, M., & Elkobros, S. (2022). From global doom to sustainable solutions: international news magazines’ multimodal framing of our future with climate change. Journalism Studies, 23, 131–148. https://doi.org/10.1080/1461670x.2021.2007162

Hepp, A., Loosen, W., Hasebrink, U., & Reichertz, J. (2017). Konstruktivismus in der Kommunikationswissenschaft. Über die Notwendigkeit einer (erneuten) Debatte. Medien & Kommunikationswissenschaft, 65, 181–206. https://doi.org/10.5771/1615-634x-2017-2-181

Hermann, I. (2023). Artificial intelligence in fiction: between narratives and metaphors. AI & Society, 38, 319–329. https://doi.org/10.1007/s00146-021-01299-6

Hung, T.-Y. (2018). A study on the relevance criteria for journalistic images. Journal of Library & Information Science, 44, 4–24. https://doi.org/10.6245/JLIS.201810_44(2).0001

Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: a review. ACM Computing Surveys, 31, 264–323. https://doi.org/10.1145/331499.331504

Jeong Gu, Y. (2020). The disembodiment of digital subjects and the disappearance of women in the representations of cyborg, artificial intelligence and posthuman. Asian Women, 36, 23–44. https://doi.org/10.14431/aw.2020.12.36.4.23

Jungblut, M., & Zakareviciute, I. (2019). Do pictures tell a different story? A multimodal frame analysis of the 2014 Israel-Gaza conflict. Journalism Practice, 13, 206–228. https://doi.org/10.1080/17512786.2017.1412804

Kalwa, N. (2022). Humanoide Roboter und Dystopien: Bilder von KI [Interview by Metz, S.]. Wissenschaftskommunikation.de. https://www.wissenschaftskommunikation.de/humanoide-roboter-und-dystopien-bilder-von-ki-57167/

Kero, S., Akyürek, S. Y., & Flaßhoff, F. G. (2023). Bekanntheit und Akzeptanz von ChatGPT in Deutschland [Fact sheet]. Meinungsmonitor Künstliche Intelligenz. https://www.cais-research.de/wp-content/uploads/Factsheet-10-ChatGPT.pdf

Kieslich, K., Došenović, P., & Marcinkowski, F. (2022). Everything, but hardly any science fiction. A topic analysis of German media coverage of AI [Fact sheet]. Meinungsmonitor Künstliche Intelligenz. https://www.cais-research.de/wp-content/uploads/Factsheet-7-Medienberichterstattung.pdf

Kong, Y. (2019). Artificial intelligence in news photographs: a cross-cultural visual content analysis. Proceedings of the 37th ACM International Conference on the Design of Communication, 48, 1–3. https://doi.org/10.1145/3328020.3353905

Kurenkov, A. (2019). AI coverage best practices, according to AI researchers. Skynet Today. Putting AI News in Perspective. https://www.skynettoday.com/editorials/ai-coverage-best-practices

Lobinger, K., & Geise, S. (Eds.). (2015). Visualisierung-Mediatisierung. Bildliche Kommunikation und bildliches Handeln in mediatisierten Gesellschaften. Herbert von Halem Verlag.

Lucht, P., Schmidt, L.-M., & Tuma, R. (Eds.). (2013). Visuelles Wissen und Bilder des Sozialen: Aktuelle Entwicklungen in der Soziologie des Visuellen. Springer Fachmedien. https://doi.org/10.1007/978-3-531-19204-8

Matthes, J., & Kohring, M. (2004). Die empirische Erfassung von Medien-Frames. Medien & Kommunikationswissenschaft, 52, 56–75. https://doi.org/10.5771/1615-634x-2004-1-56

Meinecke, L., & Voss, L. (2018). ‘I Robot, You Unemployed’: science-fiction and robotics in the media. Schafft Wissen: Gemeinsames und geteiltes Wissen in Wissenschaft und Technik: Proceedings der 2. Tagung des Nachwuchsnetzwerks “INSIST”, 147–162. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-58220-7

Meyer, D. S. (1995). Framing national security: elite public discourse on nuclear weapons during the cold war. Political Communication, 12, 173–192. https://doi.org/10.1080/10584609.1995.9963064

Müller, M. (2003). Grundlagen der visuellen Kommunikation. UVK.

Mustaklem, M. (2024). What’s wrong with the robots? An Oxford researcher explains how we can better illustrate AI news stories [Interview by Adami, M.]. Reuters Institute for the Study of Journalism. Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/news/whats-wrong-robots-oxford-researcher-explains-how-we-can-better-illustrate-ai-news-stories

Nader, K., Toprac, P., Scott, S., & Baker, S. (2022). Public understanding of artificial intelligence through entertainment media. AI & Society, 39, 713–726. https://doi.org/10.1007/s00146-022-01427-w

Naughton, J. (2019). Don’t believe the hype: the media are unwittingly selling us an AI fantasy. The Observer. Artificial intelligence (AI). https://www.theguardian.com/commentisfree/2019/jan/13/dont-believe-the-hype-media-are-selling-us-an-ai-fantasy

Neudert, L.-M., Knuutila, A., & Howard, P. N. (2020). Global attitudes towards AI, machine learning & automated decision making. Implications for Involving Artificial Intelligence in Public Service and Good Governance [Working paper]. Oxford Commission on AI & Good Governance. https://oxcaigg.oii.ox.ac.uk/wp-content/uploads/sites/11/2020/10/GlobalAttitudesTowardsAIMachineLearning2020.pdf

Nussberger, A.-M., Luo, L., Celis, L. E., & Crockett, M. J. (2022). Public attitudes value interpretability but prioritize accuracy in artificial intelligence. Nature Communications, 13, 5821. https://doi.org/10.1038/s41467-022-33417-3

Obozintsev, L. (2018). From Skynet to Siri: an exploration of the nature and effects of media coverage of artificial intelligence [Master’s thesis]. University of Delaware. http://udspace.udel.edu/handle/19716/24048

Ouchchy, L., Coin, A., & Dubljević, V. (2020). AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & Society, 35, 927–936. https://doi.org/10.1007/s00146-020-00965-5

Panofsky, E. (1975). Sinn und Deutung in der bildenden Kunst. DuMont.

Panofsky, E. (1979). Ikonographie und Ikonologie: Theorien, Entwicklung, Probleme. DuMont.

Pentzold, C., Brantner, C., & Fölsche, L. (2018). Imagining big data: illustrations of “big data” in U.S. news articles, 2010–2016. New Media & Society, 21, 139–167. https://doi.org/10.1177/1461444818791326

Potthoff, M. (2012). Medien-Frames und ihre Entstehung. VS Verlag für Sozialwissenschaften. https://doi.org/10.1007/978-3-531-19648-0

Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2015). A clearer picture: the contribution of visuals and text to framing effects. Visual framing effects. Journal of Communication, 65, 997–1017. https://doi.org/10.1111/jcom.12184

Reese, S. D. (2007). The framing project: a bridging model for media research revisited. Journal of Communication, 57, 148–154. https://doi.org/10.1111/j.1460-2466.2006.00334.x

Righetti, N., & Carradore, M. (2019). From robots to social robots. Trends, representation and Facebook engagement of robot-related news stories published by Italian online news media. Italian Sociological Review, 9, 431. https://doi.org/10.13136/isr.v9i3.298

Rodriguez, L., & Dimitrova, D. V. (2011). The levels of visual framing. Journal of Visual Literacy, 30, 48–65. https://doi.org/10.1080/23796529.2011.11674684

Roe, J., & Perkins, M. (2023). ‘What they’re not telling you about ChatGPT’: exploring the discourse of AI in U.K. news media headlines. Humanities and Social Sciences Communications, 10, 753. https://doi.org/10.1057/s41599-023-02282-w

Roesler, E., Heuring, M., & Onnasch, L. (2023). (Hu)man-Like Robots: the impact of anthropomorphism and language on perceived robot gender. International Journal of Social Robotics, 15, 1829–1840. https://doi.org/10.1007/s12369-023-00975-5

Romele, A. (2022). Images of artificial intelligence: a blind spot in AI ethics. Philosophy & Technology, 35, 1–19. https://doi.org/10.1007/s13347-022-00498-3

Rössler, P. (2010). Inhaltsanalyse (2nd ed.). UVK.

Ryazanov, I., Öhman, C., & Björklund, J. (2025). How ChatGPT changed the media’s narratives on AI: a semi-automated narrative analysis through frame semantics. Minds and Machines, 35, 2. https://doi.org/10.1007/s11023-024-09705-w

Scheufele, B., & Engelmann, I. (2013). Die publizistische Vermittlung von Wertehorizonten der Parteien. Normatives Modell und empirische Befunde zum Value-Framing und News Bias der Qualitäts- und Boulevardpresse bei vier Bundestagswahlen. Medien & Kommunikationswissenschaft, 61, 532–550. https://doi.org/10.5771/1615-634x-2013-4-532

Schmitt, P. (2021). Blueprints of intelligence. Exploring how researchers have illustrated artificial intelligence over the decades could help us build it better. Noema Magazine. https://www.noemamag.com/blueprints-of-intelligence/

Schnettler, B., & Bauernschmidt, S. (2018). Bilder in Bewegung: Visualisierungen in der Wissenschaftskommunikation. In M. R. Müller & H. G. Soeffner (Eds.), Das Bild als soziologisches Problem. Herausforderungen einer Theorie visueller Sozialkommunikation (pp. 197–208). Beltz/Juventa.

Solomonoff, R. J. (1985). The time scale of artificial intelligence: reflections on social effects (R. K. Lindsay, Ed.). Human Systems Management, 5, 149–153. https://doi.org/10.3233/hsm-1985-5207

Sun, S., Zhai, Y., Shen, B., & Chen, Y. (2020). Newspaper coverage of artificial intelligence: a perspective of emerging technologies. Telematics and Informatics, 53, 101433. https://doi.org/10.1016/j.tele.2020.101433

Tewksbury, D., & Scheufele, D. A. (2009). News framing theory and research. In J. Bryant & M. B. Oliver (Eds.), Media effects: advances in theory and research (3rd ed., pp. 17–33). Routledge.

Thomson, T. J., Zhang, S. I., Ren, Q., & Chen, Y. A. (2024). Contrasting frames: visual coverage at urban and regional news outlets in Australia and China. Journalism Studies, 25, 1272–1292. https://doi.org/10.1080/1461670x.2024.2372436

Universität Zürich. (2023). Clusteranalyse. Methodenberatung Universität Zürich. https://www.methodenberatung.uzh.ch/de/datenanalyse_spss/interdependenz/gruppierung/cluster.html

Vergeer, M. (2020). Artificial intelligence in the Dutch press: an analysis of topics and trends. Communication Studies, 71, 373–392. https://doi.org/10.1080/10510974.2020.1733038

Vogelgesang, J., & Scharkow, M. (2012). Reliabilitätstests in Inhaltsanalysen: Eine Analyse der Dokumentationspraxis in Publizistik und Medien & Kommunikationswissenschaft. Publizistik, 57, 333–345. https://doi.org/10.1007/s11616-012-0154-9

Ward, J. H. (1963). Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58, 236–244. https://doi.org/10.1080/01621459.1963.10500845

Wessler, H., Wozniak, A., Hofer, L., & Lück, J. (2016). Global multimodal news frames on climate change: a comparison of five democracies around the world. The International Journal of Press/Politics, 21, 423–445. https://doi.org/10.1177/1940161216661848

Westerlund, M. (2020). The ethical dimensions of public opinion on smart robots. Technology Innovation Management Review, 10, 25–36. https://doi.org/10.22215/timreview/1326

Wilke, J. (2011). Die Visualisierung der Wahlkampfberichterstattung in Tageszeitungen 1949 bis 2009. In J. Wilke (Ed.), Von der frühen Zeitung zur Medialisierung. Gesammelte Studien II (pp. 183–211). Edition Lumiére.

Xanke, L., & Bärenz, E. (2012). Künstliche Intelligenz in Literatur und Film — Fiktion oder Realität? Journal of New Frontiers in Spatial Concepts, 4, 36–43. https://doi.org/10.5445/KSP/1000027215

Zhai, Y., Guo, N., Zhang, J., Zhang, H., Sun, S., & Ding, Y. (2024). The thousand faces of images in AI news: psychological distance, dialectical relationships and sensationalism. Information, Communication & Society, 1–23. https://doi.org/10.1080/1369118x.2024.2406811

Notes

1. Better Images of AI (https://betterimagesofai.org/) is a non-profit non-governmental organization based in London. The project commissions artists to create an alternative repository of images to portray AI, available for anyone to use for free.

2. We use the term “clichéd” [Romele, 2022, p. 5] exclusively in the sense of sci-fi inspired pictures, showing robots, humanoid cyborgs or similar. Of course, other aspects of visual representation can also be clichéd, such as the way something is depicted (e.g., in a stereotyped manner). However, this type of analysis is not included in our study.

3. AI Myths (https://www.aimyths.org/) is a non-profit project founded by Daniel Leufer, a researcher of the Working Group on Philosophy of Technology at KU Leuven (Belgium). The project aims to debunk myths and reframe narratives about AI.

4. Notable exceptions are, e.g., the studies of Jungblut and Zakareviciute [2019] and Wessler et al. [2016] or Guenther et al. [2022].

5. Unfortunately, the paper does not specify all the news outlets included in the analysis, nor the countries from which they originate. The process of automated image coding is not explained in detail (e.g., which image features are automatically captured). The data quality is therefore difficult to assess.

6. German search string: “‘Künstliche* Intelligenz‘ OR ‘KI‘”.

7. To ensure that only truly relevant articles and images were analyzed, the text corpus automatically generated by the search string was afterwards manually screened for misclassified articles that did not deal with AI as a main topic. As no articles had to be excluded, we can assume a high precision of the applied search string. Nevertheless, we did not quantitatively assess either the precision or the recall. Therefore, we cannot guarantee with certainty that we have analyzed the entire media coverage of AI in our media sample, but we can be quite confident that the analyzed images actually illustrate AI articles rather than articles that merely mention AI superficially.

8. Concerned articles from SZ, FAZ, and Der Spiegel.

9. For the frame variables, we calculated the Holsti coefficient instead of Krippendorff’s alpha, as these are dichotomous variables with a skewed distribution and a lack of variance in the coded values [Vogelgesang & Scharkow, 2012].
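The Holsti coefficient mentioned here reduces, for two coders rating the same units, to CR = 2M / (N1 + N2), i.e., the share of agreeing coding decisions. A minimal sketch (with illustrative values, not study data):

```python
# Hedged sketch of Holsti's reliability coefficient CR = 2M / (N1 + N2),
# where M = number of agreeing decisions and N1, N2 = decisions per coder.
def holsti(coder1, coder2):
    """Percent agreement between two coders on the same units."""
    assert len(coder1) == len(coder2), "coders must rate the same units"
    agreements = sum(a == b for a, b in zip(coder1, coder2))
    return 2 * agreements / (len(coder1) + len(coder2))

# Dichotomous frame-variable codings for five images (hypothetical)
print(holsti([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # → 0.8
```

Unlike Krippendorff’s alpha, this coefficient does not correct for chance agreement, which is why it is better suited to skewed, low-variance dichotomous variables [Vogelgesang & Scharkow, 2012].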

10. Gender was derived from directly visible clues commonly associated with male or female actors (e.g., beards were categorized as male, while make-up was usually associated with female actors, unless there were other contradictory clues). We are aware that this approach is based on stereotypes and may therefore promote their stabilization, which should be critically reflected.

11. Note: the frame variable combination “chances” represents articles in which the frame variable “chances” is dominant compared to all other frame variables. In cases where multiple frame variables appear with approximately equal prominence, we have indicated this by naming the corresponding frame variable combination accordingly (e.g. “chances and risks”).

12. In 2019, the frame element “chances” was coded in 47% of AI images; in 2022/23 it was just under 40% (χ2(1) = 2.854, p = 0.091).

13. In 2019, the frame element “risks” was coded in 26% of AI images, in 2022/23 it was 39% (χ2(1) = 9.021, p = 0.003). The fact that the share of the “risk frame” nevertheless remains stable is because the remaining risk codes are distributed across the new “mixed evaluations frame.”

14. This frame takes up social discourses in the art and culture sector and describes an artistic engagement with the topic of AI, e.g., exhibitions on the topic, theater performances about AI, etc. In 2019, the frame element “cultural debate” was coded in 11% of AI images; in 2022/23 it was 8% (χ2(1) = 1.044, p = 0.307).

15. This frame foregrounds the imitation or simulation of human characteristics by AI systems, also weighing up chances and risks as associated frame variables. In 2019, the frame element “role model human being” was coded in 9% of AI images; in 2022/23 it was 6% (χ2(1) = 1.450, p = 0.228).

16. In 2019, the frame element “competition” was coded in 19% of AI images, in 2022/23 it was 16% (χ2(1) = 1.275, p = 0.259).

17. In 2019, the frame element “development” was coded in 12% of AI images, in 2022/23 it was 8% (χ2(1) = 3.022, p = 0.082).
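The p-values reported in notes 13 and 17 can be recovered from the χ²(1) statistics via the chi-square survival function, as a quick sanity check (a sketch, not part of the original analysis):

```python
# Recovering reported p-values from chi-square statistics with df = 1.
from scipy.stats import chi2

p_risks = chi2.sf(9.021, df=1)        # "risks" frame element, note 13
p_development = chi2.sf(3.022, df=1)  # "development" frame element, note 17
print(round(p_risks, 3), round(p_development, 3))  # → 0.003 0.082
```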

18. Restricted to the simple identification of pictorial objects without being able to make statements about the way or style of visualization (how something is visualized).

About the authors

Melanie Leidecker-Sandmann (Dr. phil.) is a research assistant at the Department of Science Communication at the Karlsruhe Institute of Technology, Germany. Her research focuses on science communication, political communication as well as on media content and journalism research.

E-mail: leidecker-sandmann@kit.edu

Bluesky: @leidecker-sandmann

Tabea Lüders is a research assistant (M.A.) at the Department of Science Communication at the Karlsruhe Institute of Technology, Germany. Her research focuses on the journalistic and media communication of scientific and technological topics.

E-mail: tabea.lueders@kit.edu

Carolin Moser (M.A.) is a research associate at the Institute for Technology Assessment and Systems Analysis (ITAS) at the Karlsruhe Institute of Technology, Germany. As a doctoral student her research focuses on infrastructures in real-world laboratories and transdisciplinary research settings.

E-mail: carolin.moser@kit.edu

Vincent R. Boger is a student and student assistant at the Department of Science Communication at the Karlsruhe Institute of Technology, Germany. He is also an active science communicator.

E-mail: vincent.boger@student.kit.edu

Markus Lehmkuhl is Professor of Science Communication in Digital Media at the Karlsruhe Institute of Technology, Germany. His research focuses on the emergence and structure of public opinion formation on scientific topics in general and risk topics in particular, with an emphasis on the role of journalism.

E-mail: markus.lehmkuhl@kit.edu