1 Context

Since the advent of ChatGPT, there has been a notable increase in the discourse surrounding the applications of artificial intelligence (AI), the potential risks they pose to society, and the benefits they offer. However, this phenomenon did not emerge spontaneously. As early as 2015, we began to observe that AI was entering its third phase of mediatisation, following the phases of the 1960s and the 1990s [Crépel & Cardon, 2022]. In France, the extensive mediatisation of AI unfolded in a specific context of political and geopolitical developments: the French government is a prominent proponent of the concept of “digital sovereignty”, which entails the development of a distinctive technological ecosystem within Europe, with the aim of reducing dependence on monopolistic U.S. companies. Moreover, Emmanuel Macron, the country’s president since 2017, has explicitly set himself the mission of modernising the country by transforming it into a “start-up nation” along the lines of Silicon Valley [Defilippi, 2022].

The mainstream media play an important role in disseminating information about AI developments, and more generally about science and technology, in the public sphere. First, they set the agenda for public debate and influence public perceptions by framing technologies in ways that highlight certain aspects while neglecting others. Second, they provide an arena in which different stakeholders negotiate the future pathways of these technologies and their role in society [Brause et al., 2023]. Finally, they can serve to legitimise decisions on scientific advances and their outcomes for the general public [Gerhards & Schäfer, 2009]. Several studies have examined press coverage of AI, with some highlighting the influence of industry and the state on how AI is framed. Brennen et al. [2018] analysed U.K. mainstream outlets and found that coverage of AI is dominated by industry announcements about AI applications. Vergeer [2020] reports similar findings for the Dutch media, where technology companies, scientific institutions and the automotive industry were the main sources of journalistic coverage of AI. On the other hand, the press also features critical discourses related to AI controversies: Gourlet et al. [2024] mapped the issues around AI covered in the French press, while Dandurand and colleagues [2022] found that the Canadian press covers AI controversies in a generic way, with little critical perspective. Meanwhile, media coverage of AI has become more critical over time [Nguyen & Hekman, 2022].

For their part, social media allow millions of people to express themselves on political, social, cultural, economic, scientific and other topical issues. They have also become one of the dominant sources of information on all matters, including scientific issues [Arcom, 2024]. The vast majority of topics discussed on social media reflect the agenda of the mainstream media [Smyrnaios & Rieder, 2013]. However, discussions on social media are not merely derivative: they are also shaped by the representations and political views of users, who generate and disseminate informational content in line with their political opinions [Ratinaud et al., 2019]. In this way, social media become observatories for detecting framings of technologies such as AI that may not be visible through the study of discourse in the mainstream media. Existing research on the mediatisation of AI on social media mainly examines the evolution of interest and changes in attitudes towards AI [Brause et al., 2023]. Part of this research focuses on stakeholders who influence the discourse on AI on social media. Studies in this area have shown that, in cases such as Chinese social media, industry actors actively shape most of the AI-related discourse [Zeng et al., 2022].

In this research, we aim to simultaneously study the framing of AI in the French press and social media, as well as the dependencies of this discourse, reflected in the stakeholders who steer public attention around AI issues in the media [Richter et al., 2023; Zeng et al., 2022]. By studying both the framing of AI and the stakeholders who dominate the media discourse on AI, we believe that we contribute to the understanding of how sociotechnical imaginaries form and consolidate in the French public sphere. Of course, one should not overlook the fact that sociotechnical imaginaries are not the sole products of the media framing of technology but result from power relations and strategies of state actors, technology companies, influential executives, events, research groups or activists [Jasanoff & Kim, 2009; Mager & Katzenbach, 2021]. Nevertheless, media framing plays a key role in consolidating these imaginaries by stabilising the central ideas and emphasising certain aspects and priorities of a given issue [Vicente & Dias-Trindade, 2021; Scott Hansen, 2022]. In other words, given the role of the media in shaping public perceptions of scientific advances [Bauer, 2005; Chuan et al., 2019], they are likely to contribute to transforming visions of the future trajectories of a technology such as AI, expressed by groups of stakeholders, into collectively held visions that are put into debate within the public sphere. For this reason, we consider it important to simultaneously study the evolution of the media framing of AI and the stakeholders who dominated the French press and social media on the topic.

2 Objective

Our study focuses on the agenda-setting and framing of AI in the French general press over the period 2012–2022, as well as on X and Facebook. This allows us to identify the periods when AI was accorded particular prominence in the news agenda and to understand how it was framed by journalists and social media users. The objective of this research is to test three hypotheses. The first hypothesis is that AI was introduced to the media agenda as a result of deliberations held by major technology companies and governmental entities. This would indicate that the agenda-setting of AI occurred in a top-down manner: it did not emerge from events within civil society, but rather from announcements made by the aforementioned actors. The second hypothesis is that the framing of AI relied primarily on government and key industry actors, to the exclusion of those with “alternative” visions of AI. Since “the frame in a news text is really the imprint of power [as] it registers the identity of actors or interests that competed to dominate the text” [Entman, 1993, p. 55], it is very important to identify which stakeholders compete to promote their own definitions of AI issues in both legacy and social media. The final hypothesis is that, although agenda-setting was significantly influenced by Big Tech and the government, and although these actors dominated the framing of AI issues, part of the framing in both the press and social media challenges the narratives of these dominant actors.

3 Methods

3.1 Data collection

To examine the discourse surrounding artificial intelligence (AI) in both traditional and social media, we used the key terms “intelligence artificielle” (“artificial intelligence” in English) and “deep learning”. The choice of these particular terms is justified by their extensive use in journalistic discourse and among internet users to describe applications of artificial intelligence. The term “IA” (AI in English) was excluded from the query because it would most likely lead to the retrieval of irrelevant articles from the newspaper corpus and posts from the Facebook corpus containing the letters “IA”. On the other hand, for X, the #AI string was included in the query because the presence of the hash distinguishes the hashtag from the simple term AI. This choice necessarily excludes a certain part of the discourse on AI, which may be more specialised and may relate, for example, to machine learning. However, the objective was to study the discourse surrounding this technology as it is widely used by non-experts. The press corpus consisted of 13 795 articles retrieved from the Europresse database. The articles were published between 01/01/2012 and 30/06/2022 in nine national newspapers: Les Échos (4 078 articles), La Tribune (2 827 articles), Le Figaro (2 405 articles), Le Monde (2 113 articles), Libération (647 articles), La Croix (535 articles), Correspondance économique (495 articles), Aujourd’hui en France (417 articles), and L’Humanité (268 articles). We chose these media outlets because they are newspapers of record (presse de référence), which hold a dominant position in the dissemination of news and have a national and international influence on opinion leaders and intellectuals [Merrill, 2000].
While this selection allows us to study the mainstream agenda around AI, we acknowledge that by not including other media in the corpus, we may be missing more specialised discourse on this technology or even more critical perspectives on its applications. For X, we collected a corpus of 3 599 335 posts using the Twitter API for academic research for the period between 01/01/2012 and 30/06/2022. Finally, in the case of Facebook, we used the CrowdTangle platform to collect 63 213 posts that were published between 24/11/2012 and 24/11/2022. We were unable to collect posts from the beginning of 2012, as the platform allows data collection up to ten years prior to the collection date, and the collection took place in November 2022.

3.2 Lexicometric analysis

To explore the corpus, we used Reinert’s Descending Hierarchical Classification (DHC) method, implemented in the open-source software Iramuteq [Ratinaud, 2014]. The method generates lexical classes consisting of text segments extracted from the corpus that contain words co-occurring with a statistically significant frequency. The lexical classes thus identified highlight the presence of different “lexical worlds”, or themes, in the discourse on AI. This clustering method, based on lexical co-occurrence, enables the identification and grouping of semantically similar text segments. The identification of semantic correlations between segments on the basis of co-occurrence constitutes the operational part of frame mapping [Ledouble & Marty, 2019]. Lexicometric analysis can therefore identify frames inductively, where frames are mapped by discovering patterned associations of words [D’Angelo, 2017]. For these reasons, in this research we suggest that the lexical classes resulting from the Reinert analysis represent different frames employed to talk about AI.
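Reinert’s DHC itself relies on a χ2-driven divisive partitioning of the segment-word matrix, as implemented in Iramuteq; purely as an illustration of the underlying intuition (grouping text segments whose vocabularies overlap into “lexical worlds”), the following deliberately simplified sketch uses a Jaccard overlap threshold and greedy single-link grouping, both of which are our own illustrative choices and not part of the actual method:

```python
def tokenize(segment):
    """Very crude tokenisation; Iramuteq additionally lemmatises and
    filters function words, which this sketch skips."""
    return {w.lower().strip(".,;:!?") for w in segment.split()}

def jaccard(a, b):
    """Vocabulary overlap between two token sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_segments(segments, threshold=0.2):
    """Greedy single-link grouping of segments whose vocabularies overlap.
    Illustrative stand-in only: Reinert's algorithm is divisive and
    chi-squared-based, not a greedy agglomeration."""
    vocabularies = [tokenize(s) for s in segments]
    clusters = []
    for i, vocab in enumerate(vocabularies):
        for cluster in clusters:
            if any(jaccard(vocab, vocabularies[j]) >= threshold for j in cluster):
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

On four toy segments (two about expert warnings, two about start-up funding), this groups segments 0 and 1 into one class and segments 2 and 3 into another, mimicking the separation of two lexical worlds.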

An important advantage of this method is that it enables us, on the one hand, to have an overview of the different frames present in the corpus and, on the other hand, to examine individual segments of each frame, thus facilitating a qualitative analysis of these frames. This type of analysis also enables us to identify and analyse frames that are statistically overrepresented in the corpus for each year, which gives us the opportunity to locate the frames introduced into the agenda at different times. Finally, we used Labbé’s intertextual distance [Labbé & Labbé, 2003] to investigate potential differences in the coverage of AI among newspapers. This enables us to examine the lexical proximity between different media.
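Assuming the standard formulation of Labbé’s intertextual distance, in which the word frequencies of the longer text are rescaled to the length of the shorter one and the result is normalised to [0, 1] (0 for lexically identical texts, 1 for texts sharing no vocabulary), a minimal sketch could be:

```python
from collections import Counter

def labbe_distance(tokens_a, tokens_b):
    """Labbé's relative intertextual distance between two token lists.
    Sketch under the standard formulation; the published method also
    specifies lemmatisation and corpus-preparation steps omitted here."""
    fa, fb = Counter(tokens_a), Counter(tokens_b)
    na, nb = len(tokens_a), len(tokens_b)
    if na > nb:  # by convention, rescale the longer text to the shorter one
        fa, fb, na, nb = fb, fa, nb, na
    scale = na / nb
    vocabulary = set(fa) | set(fb)
    return sum(abs(fa[w] - fb[w] * scale) for w in vocabulary) / (2 * na)
```

Computed pairwise over the newspapers’ vocabularies, such distances yield the proximity matrix behind clusterings like the one in Figure 6.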

The three corpora (press, X, Facebook) were divided into three periods: 2012–2015, 2016–2019, and 2020–2022. This division was based on the results of a preliminary lexicometric analysis of the press corpus, which revealed significant shifts in frames across these three time periods. The division ultimately retained for the analysis of discourse across all media has two advantages: first, it allows for the observation and characterisation of changes in the lexical classes and frames during these key moments; second, it allows for a balanced distribution over the entire period under analysis.

3.3 Network analysis

Some social media platforms, of which X is a notable example, offer researchers the opportunity to study not only the discourse of users, but also their interactions. For this reason, we used a protocol on X that integrates lexicometric analysis with network analysis [Smyrnaios & Ratinaud, 2017]. The combination of these methods allows for the study of two different aspects: first, the formation of user communities through interactions, specifically through retweets; and second, the vocabulary used by each of these communities. This approach makes it possible to study the specific topics discussed by each user community; in this case, the focus was on discussions related to AI. The characterisation of each community was based on a qualitative analysis of the ten users with the highest number of retweets, which provides insights into the composition of the entire community [Ratinaud et al., 2019].
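The protocol of Smyrnaios and Ratinaud [2017] detects communities in the weighted retweet graph and lays them out with a force-directed algorithm; as an illustration of the data structure involved only, the sketch below builds the weighted graph from (retweeter, original author) pairs and uses connected components as a crude stand-in for proper community detection (which the actual protocol performs with dedicated graph algorithms):

```python
from collections import defaultdict

def build_retweet_graph(retweets):
    """Count retweet interactions; `retweets` is an iterable of
    (retweeter, original_author) pairs."""
    weight = defaultdict(int)
    for src, dst in retweets:
        weight[(src, dst)] += 1
    return weight

def communities(weight):
    """Crude stand-in for community detection: connected components of
    the undirected retweet graph, largest first (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in weight:
        parent[find(a)] = find(b)
    groups = defaultdict(set)
    for user in {u for edge in weight for u in edge}:
        groups[find(user)].add(user)
    return sorted(groups.values(), key=len, reverse=True)
```

In the real analysis, the relative size of each detected community (e.g. the percentages reported in the Results) is simply its share of the users in the graph.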

4 Results

1. The grip of the state and the industry on public discourse about AI. Our data indicates that 2015 represents a pivotal moment in the journalistic coverage of AI, marked by the emergence of frames on the associated risks in the media agenda (Figure 1).


Figure 1: Lexical classes generated by the DHC analysis on the 2012–2015 press corpus. Highlighted are the lexical classes that become statistically overrepresented in the year 2015 and are related to experts’ warnings.

This shift is associated with the publication of an open letter [Open Letter, 2015] signed by over a hundred renowned personalities, including Stephen Hawking, Bill Gates, and Elon Musk, which addressed the potential risks associated with the development and use of autonomous weapons. The open letter attracted considerable media attention, leading to the publication of a substantial number of articles on the issues it raised. The extensive media coverage of the arguments presented in this open letter demonstrates — also in the French case — the important role of experts in shaping the perception of risks surrounding AI in the public sphere [Neri & Cozman, 2020]. Both on Facebook and on X, user activity in posts related to AI started to increase in 2015, with a sharp rise in 2016. From this point onwards, the fluctuation of posts remains at a high level (Figure 2).


Figure 2: Number of publications per year after min-max normalisation: press articles (Min: 82, Max: 2 925), Facebook (Min: 5, Max: 13 287) and X (Min: 13 960, Max: 875 317).
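The min-max normalisation mentioned in the caption simply rescales each series to [0, 1], so that press, Facebook and X volumes of very different magnitudes can be plotted on a common axis; a minimal sketch:

```python
def min_max(values):
    """Rescale a series linearly so its minimum maps to 0.0 and its
    maximum to 1.0; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```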

From 2016 onwards, the political context around AI is dense. In that year, the “Fourth Industrial Revolution” becomes the main focus of the World Economic Forum [Schiølin, 2019], and the United States announces its national strategy for AI. It is followed by China, which publishes its own strategy in 2017, and by Germany in 2018 [Bareis & Katzenbach, 2022]. In 2018, French President Emmanuel Macron announces a strategic plan for the development of AI in France. A few days later, a report by mathematician and politician Cédric Villani is published, detailing the challenges of AI for the economy and society. In light of these events, 2018 marks a pivotal moment in the public discourse on AI in France. In that year, the number of press articles on AI increases significantly, reaching its highest level since the beginning of the third wave of AI mediatisation. At the same time, a variety of frames related to AI enrich the AI agenda. These frames include national politics, economics, geopolitics, the EU’s stance towards prominent U.S. Big Tech companies (GAFAM) and their Chinese counterparts (BATX), and the need for AI regulation. Frames related to the international economic conjuncture on AI and the activity of GAFAM and BATX make up 18% of the total corpus for the period 2016–2019. At the same time, 22.55% of the corpus consists of frames related to the announcements of governmental actors on AI, conferences on AI held in several cities, and financial support measures allocated to start-ups investing in AI development at the national level.

In the case of X, user activity peaks in 2018. Following the announcement of France’s AI strategy by President Emmanuel Macron, the topics discussed on X suddenly shift to various conferences, forums, applications of AI, but also ethical dimensions of AI development (Figure 3). These frames represent 49.73% of the total corpus. For example, we read in a tweet: “Analysis. From risk modelling to personalized marketing to meet customer needs, the 9 main use cases of data science in the banking sector #AI #machinelearning #bigdata #digitaltransformation #fintech”,1 and “What is the future of customer experience? Top 5 emerging technologies that will revolutionize customer experience in 2018 #AI #emergingtech #bigdata #machinelearning #martech #business #digitaltransformation #fintech #IoT #marketing”. As for the events organised around AI, we read tweets such as “Join me for a discussion at the roundtable on artificial intelligence organized by Kedge at the Entrepreneurs Fair 2018, Parc Chanot, on April 11th at 10 AM. #AI #savethedate #conference”.


Figure 3: (A) The lexical classes generated by the DHC analysis on the X corpus from 2016 to 2019. (B) A chronological overview of the overrepresentation of lexical classes in the corpus for the period from 2016 to 2019. In panels A and B, the lexical classes overrepresented in the corpus before 2018 are framed with small horizontal lines, and the lexical classes that become overrepresented during 2018 and 2019 are framed with dashed lines.

In contrast, prior to 2018, the discourse is more diverse, encompassing a range of aspects related to AI. These include frames related to the potential challenges that AI may pose for employment, as well as broader issues such as the relationship between AI and science fiction, the relationship between humans and machines, and the uncertainty that people feel about AI. Yet, several frames revolve around different applications launched by Big Tech companies and the financial resources allocated to AI development in the United States and China (see the right part of the DHC analysis of Figure 3).

As far as Facebook is concerned, no significant shift in frames has been observed over the course of 2018. During the period between 2016 and 2019, 33.69% of frames concern image recognition applications, autonomous vehicles and voice assistants, AI applications in medicine, and conferences on AI. Another 11.36% of frames concern training courses on AI and recruitment for jobs that require AI skills. Finally, we identify frames that focus on the ethical dimensions of AI and the threat of automation to human labour; these frames represent 10.74% of the total corpus. The impact of the announcement of the strategy for the development of AI is reflected in two frames that are overrepresented in the corpus after 2018. The first refers to the aforementioned report by Cédric Villani (3.91% of the corpus), while the second contains a vocabulary related to conferences and events on AI (8.34% of the corpus). Nevertheless, we do not observe a drastic change in the agenda in 2018, as was the case with the press and with X.

Overall, the lexicometric analysis of this first period of unprecedented AI mediatisation shows that, as with press coverage in other countries, AI was put on the agenda with impetus from both government announcements on AI development and the activities of digital companies, including their launches of various products and services. Although this paper does not aim to explore in detail which sociotechnical imaginaries are publicly performed through the frames present in the press and social media, it is worth noting that several frames set on the agenda after 2018 include a political discourse that reflects the vision promoted by the government of a supposed “start-up nation”. This discourse prioritises innovation and technological development as key to addressing economic and societal challenges, while also emphasising digital sovereignty in response to AI advances in the U.S. and China, a central theme in France’s national AI strategy [Defilippi, 2024].

2. Actors dominating the public discourse on AI in 2012–2019. The study of AI framing makes it possible to examine the topics that are set on the agenda by the press and those that are discussed by social media users. However, this type of analysis alone does not provide an in-depth insight into the actors who engage in this discourse and contribute to shaping the public debate on this technology. To gain further insight into this dimension, we follow three different approaches. In the case of X, we use a protocol that allows us to visualise the groups of users who frequently interact when discussing AI, as described in the Methods section. In the case of the press, we use a computational method based on Named Entity extraction to identify and visualise the institutions and individuals mentioned with high frequency in the press articles [Tsimpoukis et al., 2024]. In the case of Facebook, the study of actors was carried out by examining the overrepresentation of public Pages and Groups in each frame resulting from the lexical analysis. However, the mapping of actors in this case is only indicative: an exhaustive study that would allow us to draw firm conclusions would require categorising a large number of public Pages and Groups, overrepresented in each frame, into different types of actors, a task that was not undertaken in this study.
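For the press corpus, once named entities have been extracted per article (the extraction itself follows Tsimpoukis et al. [2024] and is not reproduced here), a co-occurrence network like the one in Figure 4 can be derived by counting how often pairs of entities appear in the same article and keeping pairs above a threshold (over 30 joint appearances in our case). A sketch, assuming the entities are already available as one set per article:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(articles_entities, min_count=30):
    """Weighted edges of an entity co-occurrence network.
    `articles_entities` is a list with one set of named entities per
    article; the default threshold mirrors the one used for Figure 4."""
    pairs = Counter()
    for entities in articles_entities:
        for a, b in combinations(sorted(entities), 2):
            pairs[(a, b)] += 1
    return {edge: n for edge, n in pairs.items() if n >= min_count}
```

The resulting weighted edge list can then be handed to any graph-visualisation tool, where densely interconnected entities form the clusters discussed below.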

With regard to the press, it can be observed that four main clusters of actors dominated the journalistic discourse during the period 2016–2019 (Figure 4): a cluster composed of Big Tech actors (Apple, Microsoft, Google, Amazon, Tesla, Huawei, Elon Musk, Mark Zuckerberg); a cluster that includes government actors of French politics and actors from the European Union (Emmanuel Macron, Bruno Le Maire, then Economy and Finance Minister, Cédric Villani, the European Commission, the European Union); and, finally, two clusters made up of car manufacturers (Toyota, Ford, BMW, Tesla, Volkswagen) and the defence, aerospace and security industries (Airbus, Thales, Siemens, Atos, Gemalto, Dassault).


Figure 4: Visualization of the co-occurrences of named entities of persons and organizations that appeared in articles during the 2016–2019 period. The different clusters consist of named entities that frequently appeared together over 30 times in the articles published in the press.

The frequency with which these actors are cited in journalistic articles demonstrates that industry and government sources monopolise the discourse on AI. There is one exception: a small cluster below the cluster of government and EU actors, consisting of frequent references to the left-positioned newspaper L’Humanité, the trade union CGT, the communist party PCF, and the French National Ethics Advisory Committee (CCNE).


Figure 5: Network analysis of users who posted tweets about AI between 2016 and 2019. The different colors represent clusters of users who retweeted each other frequently. The more a user retweets another user’s posts, the closer they are to each other in the graph.

Over the same period, X is dominated by a variety of stakeholders from different backgrounds (Figure 5). In particular, 37.47% of users posting about AI frequently interact with others commenting on technology and business news, 7.89% frequently interact with content creators or influencers, 7.49% frequently interact with think tank members, intellectuals and conference speakers, 5.66% interact with mainstream media posts, and 4.49% interact with government user accounts. The clusters representing less than 4% of the sample are made up of users who interact with actors in health, marketing, start-ups, academia, communications specialists for various events, labour, law, consultants, and representatives of companies such as Microsoft or the multinational telecommunications company Orange. This multitude of stakeholders appeared for the first time in the 2016–2019 period. In the previous analysis period (2012–2015), the predominant user communities consisted of individuals engaged in discourse on the future of AI (10.32%), the music industry (9.83%), the mainstream media (7.27%), and the specialised press (7.13%), as well as executives from companies and institutions related to technology (12.95%), marketing (2.11%), and consulting (2.11%).

Regarding Facebook, in frames related to image recognition applications, autonomous vehicles and voice assistants, we find statistically over-represented public Pages and Groups — and therefore frequent contributors of content related to these frames — from news portals, digital and innovation industries and public institutions. Indicatively, we find sites and groups such as Sputnik France, Agenda Strasbourg, TVT Innovation, Nantes Digital Week, Semaine Numérique, Huawei Algerie, Kulture Geek, but also public institutions such as Palais de découverte, Quai des Savoirs, University of Lyon, Paris Science Festival. In the frames related to Cédric Villani’s report, we find public Pages and Groups of institutions and stakeholders such as the World Economic Forum, France Stratégie, Thomas Gassilloud, Olivier Véran, Commission for Ethics in Science and Technology. Interestingly, in the frames related to the ethical dimensions of AI and the threat of automation to human labour, we find an over-representation of left-leaning Pages and Groups such as Unconditional Basic Income, Generations United for a Desirable Future, Political Economy, La France insoumise, People’s Rally for Progress or Left Union for a Desirable Future.

Overall, the study of the actors involved in the discourse on AI on social media shows that until 2019, Big Tech companies, the business market, the communications industry and, to a lesser extent, government figures and institutions dominate the framing of AI in these arenas. As we will discuss in the next section of this article, this picture begins to shift after 2020 with the emergence of political polarisation around AI-related issues.

3. Contesting dominant AI narratives: discourse and actors. Since the final years of the 2010s, discourses contesting the prevailing narratives on AI have gained prominence in the media. To facilitate the analysis that follows, it is first necessary to clarify the context in which these contesting discourses are situated. From the outset of our analysis, we identify frames that refer speculatively to the ethical implications of AI. Such frames even encompass the warnings issued by AI experts, which are extensively covered by the media; however, these refer to the general dimensions of the risks posed by the development of AI. The critique of speculative narratives has also been directed at ethics guidelines, which can be accused of overlooking existing dangers, such as the concentration of AI technology development in the hands of Big Tech [Cugurullo, 2024]. In contrast, several frames of our analysis include a discourse on existing issues around AI, often driven by activists or groups of individuals affected by AI applications. This contrasts with the top-down discourse on ethical issues that is a recurring theme throughout the analysis. There have been some attempts in the existing literature to distinguish between these two types of narratives. For example, Gourlet et al. [2024] propose distinguishing between abstract critiques and local controversies, while Bory et al. [2024] suggest differentiating between strong and weak AI narratives. In this section, we will focus on the discourse that arises in response to existing applications of AI, whether it constitutes criticism, touches on conspiratorial rhetoric, or reveals narratives about AI that differ from those of Big Tech and the state.

In the press, the frames of contesting discourse towards concrete AI applications become overrepresented in the corpus with the experimentation with facial recognition systems at the Nice carnival in 2019, as part of the “Safe City” project. We read, for example, in an article published in 2019 in Libération, entitled “Nice: Smile, You’re Unmasked”:

“What worries the detractors isn’t necessarily the experimentation, it’s «what will happen tomorrow»: «The software will have to be connected to databases» says Patrick Allemand, a member of the opposition in Nice. «As citizens, we have no control». He expresses concern about data protection and individual freedoms. «We are in a system that would have made Orwell himself pale,» jokes PS municipal councilor Paul Cuturello”.

As early as 2018, we find a contesting discourse concerning the installation of such surveillance systems in cities. That year, Le Monde published an article entitled “In France, smart cities: cities under surveillance”, written by journalist Grégoire Allix:

“From Nice to Valenciennes, from Marseille to La Défense or Nîmes, more and more local authorities are being tempted by digital platforms organized around surveillance and public space control tools. This is a deep-seated movement, aligned with powerful industrial interests and supported by public subsidies, thriving in a certain legal gray area and raising concerns among civil liberties defense organizations”.

From 2020 onwards, the contesting discourse leads to a polarisation of the national newspapers along the left-right political spectrum. More precisely, we notice that in the period 2020–2022 the daily newspapers L’Humanité, Libération, Le Monde, and La Croix (left and centre-left) share a common vocabulary, while Les Échos, Le Figaro, and La Tribune (right and centre-right) frequently use a different vocabulary and cover distinct themes regarding AI (Figure 6).


Figure 6: Labbé intertextual distances for the three press corpora. In the case of the 2020–2022 corpus, we notice a clustering around a common vocabulary for the left- and centre-left-positioned daily newspapers L’Humanité, Libération, Le Monde, and La Croix at the top, and the right-positioned Les Échos, Le Figaro, and La Tribune at the bottom. La Correspondance Économique and Aujourd’hui en France are positioned separately due to their frequent use of a different lexicon.

In the first case, the vocabulary is related to facial recognition, freedoms, and the regulation of AI, while in the second case the newspapers more frequently use a vocabulary related to economic aspects, innovation, and national policy. During this period, several important events took place: various regulations related to AI and facial recognition came into effect, while in 2021 the Global Security Law was enacted. Among other measures, this law allowed law enforcement agencies to use drones to capture videos and images, thereby facilitating subsequent analysis using facial recognition software. Furthermore, during the same period, a number of biometric surveillance systems were deployed in various countries as part of the global response to the COVID-19 pandemic. These developments prompted a more critical assessment of AI systems from newspapers with a left or centre-left political orientation. A similar divergence in the coverage of facial recognition between newspapers of different political orientations has also been observed in the U.S. press [Shaikh & Moran, 2024].

On X, the core group of actors dominating the public discourse on AI during 2020–2022 remains the same as in the previous periods (Figure 7): think tanks, digital specialists and analysts (40.14%), influencers and youtubers publishing AI-related content (10.44%), start-up and business news (3.84%) and the media (2.8%). We also observe the emergence of new user clusters, such as education on digital technologies (3.96%) and government and defence (1.74%). Nevertheless, it is noteworthy that this period also saw the emergence of a sizeable cluster of far-right users and users disseminating misinformation and conspiracy theory content (4.36%), along with a smaller cluster of technology critics and left-leaning users (1.71%).


Figure 7: Network analysis of users who posted tweets about AI between 2020 and 2022. The different colors represent clusters of users who retweeted each other frequently. The more a user retweets another user’s posts, the closer they are to each other in the graph.

Both groups interact with the media cluster, indicating that they reference a range of articles published by these outlets to comment on AI-related events. However, the misinformation, conspiracy, and far-right group is isolated at the top of the graph, suggesting that the users of this group are predominantly self-referential, while the tech critics and left-wing group interact with other communities, such as the dispersed community of digital specialists and analysts.
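The clusters in Figure 7 are derived from the actual retweet graph and its force-directed layout; as a rough illustration of the underlying principle only, the sketch below groups users with a simple label-propagation heuristic over invented retweet counts (the user names and weights are hypothetical, and this is not the clustering method used in the study):

```python
import random

def retweet_communities(edges, seed=0, max_iter=20):
    """Toy label-propagation clustering of a weighted retweet graph.

    Each user repeatedly adopts the label carrying the greatest total
    retweet weight among their neighbours, so users who retweet each
    other heavily converge to the same community label.
    """
    rng = random.Random(seed)
    nbrs = {}
    for u, v, w in edges:            # undirected weighted adjacency map
        nbrs.setdefault(u, {})
        nbrs.setdefault(v, {})
        nbrs[u][v] = nbrs[u].get(v, 0) + w
        nbrs[v][u] = nbrs[v].get(u, 0) + w
    labels = {u: u for u in nbrs}    # every user starts in its own community
    for _ in range(max_iter):
        nodes = list(nbrs)
        rng.shuffle(nodes)
        changed = False
        for u in nodes:
            scores = {}              # total weight behind each neighbouring label
            for v, w in nbrs[u].items():
                scores[labels[v]] = scores.get(labels[v], 0) + w
            best = max(scores, key=scores.get)
            if labels[u] != best:
                labels[u], changed = best, True
        if not changed:              # stable labelling reached
            break
    return labels

# Hypothetical counts (retweeter, retweeted, weight): two dense pairs
# joined by a single weak bridge separate into two communities.
edges = [("thinktank_a", "analyst_b", 12), ("analyst_b", "thinktank_a", 9),
         ("youtuber_c", "youtuber_d", 15), ("youtuber_d", "youtuber_c", 7),
         ("thinktank_a", "youtuber_c", 1)]
labels = retweet_communities(edges)
```

In the same spirit, the misinformation and far-right cluster's isolation in Figure 7 corresponds to a group whose retweet weight is overwhelmingly internal, with few edges bridging to other communities.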

Actors from the misinformation, conspiracy and far-right group discuss different AI-related topics than tech critics and left-wing actors do. In Figure 8 we can see which frames are over- and underrepresented for each of these two user groups. In other words, we can see which vocabulary these groups were most likely to use and which vocabulary they were less likely to use. The frame in which the misinformation, conspiracy and far-right cluster is most overrepresented concerns the Qotmii application. This app purports to use AI to predict electoral outcomes, claiming to circumvent the shortcomings of conventional polling methodologies. In this frame, we frequently encounter discourse emphasising Éric Zemmour, the president of the far-right party Reconquête, as a frontrunner in the 2022 presidential election according to the app’s predictions.


Figure 8: At the top of the figure, the lexical classes generated by the DHC analysis on a sub-corpus consisting of the discourse of both far-right and left-wing actors observed in the X network analysis for the period 2020–2022. At the bottom of the figure, the lexical classes in which these user clusters are over-represented or under-represented. The vertical axis represents the χ2 test result between the lexical classes and the two focus-group variables.
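The signed over- and under-representation scores of Figure 8 can be obtained from a 2×2 contingency table crossing class membership with group membership, in the manner of IRaMuTeQ's specificity analysis. A hedged sketch, with counts invented purely for illustration:

```python
def chi2_overrepresentation(k_group, n_group, k_total, n_total):
    """Signed chi-square: how over- or under-represented a lexical
    class is in one group's discourse relative to the whole corpus.

    k_group / n_group : group's segments in the class / group's total segments
    k_total / n_total : all segments in the class / total corpus segments
    Positive result = over-represented, negative = under-represented.
    """
    # 2x2 contingency table crossing (in group?) x (in class?)
    a = k_group                        # group, in class
    b = n_group - k_group              # group, not in class
    c = k_total - k_group              # rest of corpus, in class
    d = (n_total - n_group) - c        # rest of corpus, not in class
    chi2 = n_total * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))
    # sign: positive if the group uses the class more than expected
    expected = (a + b) * (a + c) / n_total
    return chi2 if a >= expected else -chi2

# Hypothetical counts: a group posts 80 of its 200 segments in a class
# that covers 150 of the corpus's 1000 segments -> strongly over-represented.
print(round(chi2_overrepresentation(80, 200, 150, 1000), 2))   # 122.55
print(chi2_overrepresentation(10, 200, 150, 1000) < 0)         # True
```

Plotted per class and per group, these signed values produce bars above or below zero, as in the lower panel of Figure 8.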

The second frame in which this cluster is over-represented concerns the relationship between humans and machines. Here, we encounter a discourse that could qualify as misinformation [Wardle & Derakhshan, 2017]. We read, for example: “Artificial intelligence today allows for the creation of entirely artificial human genome sequences that are indistinguishable from DNA derived from real donors, linking this to the current frenzy of power imposing mRNA vaccines that modify DNA”. We further read: “Everything is possible with the advancements in technology. A few years ago, the term «big data» was on everyone’s lips; that phase is now over. We are now in the era of blockchain, machine learning, and artificial intelligence to control everyone.”

The third frame in which this group is overrepresented is directly related to the COVID-19 pandemic period. We find here a discourse regarding doctors and their supposed imminent replacement by robots. We read, for example: “The government wants to turn them into puppets to better replace them with database management software, pompously called artificial intelligence. The only intelligence is that of the one pulling the strings and deciding who to treat and with what.” In another post, we read: “Seen this way, all doctors conduct research when they face a difficult case; it’s part of their job. The problem, as we saw with COVID, is turning them into mere executors, ultimately replacing them with software pompously called artificial intelligence.”

Finally, the same group is overrepresented in a frame which contains a hard conspiracy discourse. We read, for example: “One of the points of the Great Reset is the submission of humanity by artificial intelligence. This is the dream of Laurent Alexandre, Jacques Attali, Bill Gates, and George Soros, among others. Here is what Stephen Hawking thought about their project.” As for the vaccination period, we read: “The vaccine is just large-scale experimentation for transhumanism, but we are merely the guinea pigs and not eligible, as transhumanism and artificial intelligence are reserved only for the global hyper-class.” Overall, the study of the discourse of this community shows cross-topic connections with conspiracy theories related to the pandemic and vaccination.

By contrast, the community of tech critics and left-leaning users is overrepresented in four frames. The first frame contains tweets that discuss ethical guidelines for AI, mentioning the work of initiatives such as Tech4Good and Good In Tech in publishing reports on the ethical dimensions of the use of AI. The second frame is related to registration for conferences on AI, while the third contains a variety of quotes from philosophers and thinkers. The most notable frame in which this community is overrepresented includes a vocabulary related to the dismissal of Timnit Gebru from Google following the publication of a paper commenting on the biases of large language models. We read, for example: “A renowned researcher from Google publishes an article on the ethical dangers of artificial intelligence, then the company censors it and fires her, raising questions about the treatment of women of color by the company.”, or in another post: “The dismissal of Timnit Gebru, an ethicist in artificial intelligence, by Google led to the creation of a union within the company to promote social, economic, and environmental justice. Our article on Timnit Gebru.”

Facebook, for its part, is the platform on which we consistently identify a discourse that diverges from the prevailing narratives about AI. Since 2016, a frame comprising a discourse around apocalyptic scenarios has been overrepresented, representing 5.23% of the total corpus. We read, for example: “Nuclear war, zombie attack, mutants or extraterrestrials, artificial intelligence taking control, virus spreading across humanity, planet explosion — the idea of the apocalypse both fascinates and terrifies. While we, mere mortals, remain static in the face of humanity’s end, others have already planned everything. The world as we know it could be brought to disappear”. The public Groups and Pages that are overrepresented in this class predominantly concern mysticism and literature. Additionally, a frame was identified that contains a discourse with religious dimensions, representing 2.48% of the total corpus. This frame contains excerpts such as: “Anthony Levandowski, a former engineer at Google and Uber, founded a religion based on artificial intelligence called «The Way of the Future», so that people can worship a robot godhead that is a billion times more intelligent than humans. He wants to create a new church centered around artificial intelligence, with followers kneeling at the feet of a super machine. Technology experts have stated that humans are likely to accept the robot as a superior being”. The public Groups and Pages that are overrepresented in this class publish news content from French-speaking African countries, but we also find religion-related groups such as Intereligious debate, Oasis Meuse Liège and Christ Light of the World.

The discourse surrounding AI on Facebook during the 2020–2022 period is noteworthy for two particularities. Firstly, there has been an increase in the prevalence of conspiracy discourse. Secondly, a significant proportion of the conversation has focused on cryptocurrencies, automatic trading and the various ways in which AI systems can be used for profit. More specifically, the frames related to conspiracy theories account for 9.38% of the overall discourse. In this case, too, we find cross-topic connections with conspiracy theories regarding vaccination. We read, for example: “COVID-19 stands for Certificate of Vaccination Identity, with 19 being 1 for A and 9 for I, meaning Artificial Intelligence. It’s not the name of the virus but the name of an international plan for enslavement and population reduction, which has been in development for decades and was launched in January 2020 during the last Davos meeting.” In this class we find overrepresented public Groups and Pages such as 3D vs 5D Consciousness, Real scandal for fake vaccines or The Gallic Knights of New France.

The frames pertaining to cryptocurrencies and automated trading constitute 18.11% of the total discourse. These frames are of interest as they relate to the technological promise of blockchain, which is expected to eliminate all third-party entities that have, until now, facilitated exchanges [Becker, 2018]. Overrepresented in these classes are public Pages and Groups such as MLM (Multi Level Marketing), Financial Independence, Cryptocurrency, FOREX (Foreign Exchange Market) and Make money online now. As expected, this discourse focuses on income generation. We read, for example: “A development and funding ecosystem for artificial intelligence and blockchain, the project consists of a technology company and a venture capital fund. Today, Digiu brings together 180 countries and over 50,000 partners worldwide. As a technology company, the Digiu project has successfully begun to master three of the five existing artificial intelligence technologies in just one year.”

Overall, both in the press and on social media, we observe a strengthening of the presence of frames that diverge from the dominant narratives surrounding AI. In these frames, actors prioritize distinct aspects of the development of AI.

5 Conclusions

The aim of this study was to explore both how artificial intelligence has been framed in the press and on social media over the past decade, during which this technology has been highly mediatised in the public sphere, and the actors who have dominated its mediatisation. To this end, we formulated three hypotheses. The first hypothesis was that AI was placed on the agenda when Big Tech and government actors began to publicly engage in discourse on this technology. Both the study of the temporal variation in the number of articles and posts published in the French press and on social media, and the evolution of the actors involved in discussions about AI on platform X from 2012 to 2022, provide compelling evidence that AI was placed on the agenda in response to the activities of Big Tech and government announcements regarding AI development. The study of the actors involved in the discourse on AI, both in the press and on social media, confirms the second hypothesis, namely that the frames set on the agenda were primarily oriented towards government and digital industry actors. This is consistent with the absence of frames that refer to narratives other than the dominant ones, such as the environmental dimensions of AI development or the micro-work required to develop AI systems. These results provide evidence that, as has been observed in other countries, industry and state narratives play an important role in AI coverage in the French media.

Nonetheless, a political polarisation around technologies such as facial recognition has emerged in recent years, creating fractures in this uniformity of narratives: actors who criticise specific applications of AI and bring to the forefront a discourse that diverges from the dominant narratives — even if these narratives lean towards misinformation or even conspiracy theories — demonstrate that society’s perceptions and representations of AI are not homogeneous.

As social power institutions such as the media have the ability to prioritise and legitimise certain imaginaries over others [Scott Hansen, 2022], we believe that this work highlights not only the framing of AI, but also the actors and institutions that are prioritised in the press and on social media — thus emphasising the visions of AI development that may crystallise as desirable in the public sphere. Beyond the dominance of state and industry narratives and visions, the polarisation between newspapers with different political orientations, the emergence of communities of actors on X with distinct political traits, and the divergent discourse on AI observed on Facebook suggest that alternative visions of AI technologies may emerge in the public sphere.

References

Arcom. (2024). Les Français et l’information. https://www.arcom.fr/nos-ressources/etudes-et-donnees/mediatheque/les-francais-et-linformation

Bareis, J., & Katzenbach, C. (2022). Talking AI into being: the narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47, 855–881. https://doi.org/10.1177/01622439211030007

Bauer, M. W. (2005). Distinguishing red and green biotechnology: cultivation effects of the elite press. International Journal of Public Opinion Research, 17, 63–89. https://doi.org/10.1093/ijpor/edh057

Becker, K. (2018). La technologie blockchain et la promesse crypto-divine d’en finir avec les tiers. In Religiosité technologique, II. https://doi.org/10.15122/isbn.978-2-406-09563-7.p.0033

Bory, P., Natale, S., & Katzenbach, C. (2024). Strong and weak AI narratives: an analytical framework. AI & Society. https://doi.org/10.1007/s00146-024-02087-8

Brause, S. R., Zeng, J., Schäfer, M. S., & Katzenbach, C. (2023). Media representations of artificial intelligence: surveying the field. In Handbook of Critical Studies of Artificial Intelligence (pp. 277–288). Edward Elgar Publishing. https://doi.org/10.4337/9781803928562.00030

Brennen, A. J. S., Howard, P. N., & Nielsen, R. K. (2018). An industry-led debate: how U.K. media cover artificial intelligence. Reuters Institute for the Study of Journalism Fact Sheet, 1–10. https://doi.org/10.60625/risj-v219-d676

Chuan, C.-H., Tsai, W.-H. S., & Cho, S. Y. (2019). Framing artificial intelligence in American newspapers. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 339–344. https://doi.org/10.1145/3306618.3314285

Crépel, M., & Cardon, D. (2022). Robots vs algorithmes: prophétie et critique dans la représentation médiatique des controverses de l’IA. Réseaux, 232–233, 129–167. https://doi.org/10.3917/res.232.0129

Cugurullo, F. (2024). The obscure politics of artificial intelligence: a Marxian socio-technical critique of the AI alignment problem thesis. AI and Ethics. https://doi.org/10.1007/s43681-024-00476-9

Dandurand, G., Blottière, M., Jorandon, G., Gertler, N., Wester, M., Chartier-Edwards, N., Roberge, J., & McKlve, F. (2022). Training the news: coverage of Canada’s AI hype cycle (2012–2021). In Shaping 21st-Century AI. https://espace.inrs.ca/id/eprint/13149/1/report_ShapingAI_verJ.pdf

D’Angelo, P. (2017). Framing: media frames. In P. Rössler, C. A. Hoffner & L. Zoonen (Eds.), The International Encyclopedia of Media Effects (pp. 1–10). Wiley. https://doi.org/10.1002/9781118783764.wbieme0048

Defilippi, F. (2022). Il n’y a pas d’alternative: les imaginaires de l’innovation dans les discours d’Emmanuel Macron. Interfaces numériques, 11. https://doi.org/10.25965/interfaces-numeriques.4755

Defilippi, F. (2024). La construction des futurs nécessaires. Une étude des imaginaires sociotechniques français. https://theses.fr/2024PA100042

Entman, R. M. (1993). Framing: toward clarification of a fractured paradigm. Journal of Communication, 43, 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x

Gerhards, J., & Schäfer, M. S. (2009). Two normative models of science in the public sphere: human genome sequencing in German and U.S. mass media. Public Understanding of Science, 18, 437–451. https://doi.org/10.1177/0963662507082891

Gourlet, P., Ricci, D., & Crépel, M. (2024). Reclaiming artificial intelligence accounts: a plea for a participatory turn in artificial intelligence inquiries. Big Data & Society, 11. https://doi.org/10.1177/20539517241248093

Jasanoff, S., & Kim, S.-H. (2009). Containing the atom: sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva, 47, 119–146. https://doi.org/10.1007/s11024-009-9124-4

Labbé, C., & Labbé, D. (2003). La distance intertextuelle. Corpus. https://doi.org/10.4000/corpus.31

Ledouble, H., & Marty, E. (2019). The 2016 presidential primaries in the United States: a quantitative and qualitative approach to media coverage. Studia Neophilologica, 91, 199–218. https://doi.org/10.1080/00393274.2019.1616219

Mager, A., & Katzenbach, C. (2021). Future imaginaries in the making and governing of digital technology: multiple, contested, commodified. New Media & Society, 23, 223–236. https://doi.org/10.1177/1461444820929321

Merrill, J. C. (2000). Les quotidiens de référence dans le monde. Les cahiers du journalisme, 7, 10–15.

Neri, H., & Cozman, F. (2020). The role of experts in the public perception of risk of artificial intelligence. AI & Society, 35, 663–673. https://doi.org/10.1007/s00146-019-00924-9

Nguyen, D., & Hekman, E. (2022). The news framing of artificial intelligence: a critical exploration of how media discourses make sense of automation. AI & Society, 39, 437–451. https://doi.org/10.1007/s00146-022-01511-1

Open Letter. (2015). Autonomous weapons open letter: AI & robotics researchers. Future of Life Institute. Retrieved January 14, 2024, from https://futureoflife.org/open-letter/open-letter-autonomous-weapons-ai-robotics/

Ratinaud, P. (2014). IRaMuTeQ: Interface de R pour les analyses multi-dimensionnelles de textes et de questionnaires (version 0.7 alpha 2). http://www.iramuteq.org

Ratinaud, P., Smyrnaios, N., Figeac, J., Cabanac, G., Fraisier, O., Hubert, G., Pitarch, Y., Salord, T., & Thonet, T. (2019). The structuring of discourses on Twitter during the 2017 French presidential election. Between political agenda and social representations. Réseaux, 214–215, 171–208. https://doi.org/10.3917/res.214.0171

Richter, V., Katzenbach, C., & Schäfer, M. (2023). Imaginaries of Artificial Intelligence. In Handbook of Critical Studies of Artificial Intelligence. https://doi.org/10.26092/elib/2190

Schiølin, K. (2019). Revolutionary dreams: future essentialism and the sociotechnical imaginary of the fourth industrial revolution in Denmark. Social Studies of Science, 50, 542–566. https://doi.org/10.1177/0306312719867768

Scott Hansen, S. (2022). Public AI imaginaries: how the debate on artificial intelligence was covered in Danish newspapers and magazines 1956–2021. Nordicom Review, 43, 56–78. https://doi.org/10.2478/nor-2022-0004

Shaikh, S. J., & Moran, R. E. (2024). Recognize the bias? News media partisanship shapes the coverage of facial recognition technology in the United States. New Media & Society, 26, 2829–2850. https://doi.org/10.1177/14614448221090916

Smyrnaios, N., & Ratinaud, P. (2017). The Charlie Hebdo attacks on Twitter: a comparative analysis of a political controversy in English and French. Social Media + Society, 3. https://doi.org/10.1177/2056305117693647

Smyrnaios, N., & Rieder, B. (2013). Social infomediation of news on Twitter — a French case study. NECSUS — European Journal of Media Studies, 2, 359–381. https://doi.org/10.25969/MEDIAREP/15095

Tsimpoukis, P., Ratinaud, P., & Smyrnaios, N. (2024). Evolution des fréquences et des cooccurrences des entités nommées dans le discours de la presse sur l’intelligence artificielle (2012–2022). JADT 2024: 17es Journées Internationales d’Analyse Statistique Des Données Textuelles, 893–902. https://hal.science/hal-04629054v1

Vergeer, M. (2020). Artificial Intelligence in the Dutch press: an analysis of topics and trends. Communication Studies, 71, 373–392. https://doi.org/10.1080/10510974.2020.1733038

Vicente, P. N., & Dias-Trindade, S. (2021). Reframing sociotechnical imaginaries: the case of the Fourth Industrial Revolution. Public Understanding of Science, 30, 708–723. https://doi.org/10.1177/09636625211013513

Wardle, C., & Derakhshan, H. (2017). Information disorder: toward an interdisciplinary framework for research and policy making. Council of Europe. https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html

Zeng, J., Chan, C.-H., & Schäfer, M. S. (2022). Contested Chinese dreams of AI? Public discourse about artificial intelligence on WeChat and people’s daily online. Information, Communication & Society, 25, 319–340. https://doi.org/10.1080/1369118x.2020.1776372

Notes

1. All the quotes cited are translated from French to English by the author.

About the author

Panos Tsimpoukis is a PhD candidate in Information and Communication Sciences at the University of Toulouse. His research explores public discourse and social representations of artificial intelligence.

E-mail: panagiotix@gmail.com Bluesky: @labodenuit