Reviewed Book
Bauer, M. W., & Schiele, B. (Eds.). (2024).
AI and Common Sense: Ambitions and Frictions (1st ed.).
Routledge. https://doi.org/10.4324/9781032626192
Contents
Is it too much of a pipe dream to see AI with common sense, or can this ambitious goal, against all odds, somehow be realized? Or maybe the real question is whether AI needs common sense at all. As Martin W. Bauer and Bernard Schiele, co-editors of AI and Common Sense: Ambitions and Frictions, emphasise in their introduction, the point of their book is “to examine the claim made by the proposition ‘AI with CS [common sense]’, where common sense is to be an adverbial attribute of machine intelligence: doing things common-sensically”. According to them, this allows for a more critical assessment of “the role of common sense for a new technology that craves public acceptance in society”.
The book is divided into 16 chapters, split into four parts, offering much-needed insights and timely discussions from various social sciences, psychology, and computer science perspectives. It oscillates between historical, technical, social, and philosophical perspectives, making it a valuable resource for researchers and students interested in the deeper contextual questions surrounding AI and its future.
The book’s opening two chapters lay a foundation for understanding common sense and how it contrasts with AI’s rigid, data-driven logic. They introduce common sense as a deeply human, context-dependent concept by reflecting on its theoretical roots, from Aristotle and the Scottish Enlightenment to the Italian philosopher Giambattista Vico. The chapters argue that despite AI’s remarkable evolution in specialised tasks, it remains incapable of replicating the intuitive reasoning that defines human common sense.
The book then shifts gears to examine the technical challenges of common sense in AI by “giving voice to four AI engineers”, showing that even AI experts view AI with common sense with a mix of optimism and caution. On one hand, common sense, or at least a form of it, is described as essential for AI’s long-term relevance (chapter 3). On the other, chapter 6 questions the necessity of common sense in AI’s development, suggesting we need to move away from human-centric simulations of common sense and pointing to dead ends in cognitive science and AI design. Chapters 4 and 5, then, further explore this debate through the lens of human-robot interactions, emphasising the need for more public debate on the role of common sense in AI.
For those in science communication, chapters 7–11 are perhaps the most valuable part of the book, which here turns its focus to society and public discourse around AI and common sense. These chapters trace the debates across cultures, from Germany to the Netherlands and Japan. By highlighting the importance of language (chapter 7) and history (chapter 8), the chapters collectively paint a rich picture of AI’s place in the collective imagination across cultures.
In the final four chapters before the conclusion, the book uses case studies to put things into perspective. The authors explore the potential accommodation of AI in inter-objectivity and common sense. From text generation (chapter 12) to text mining (chapter 13), job recruiting (chapter 14), and self-driving cars (chapter 15), these chapters highlight the underlying tension between the technical capabilities of today’s AI systems and the ever-shifting, evolving nature of common sense. In the concluding chapter, the co-editors return to the foundations laid earlier in the book, reflecting on the dynamic nature of common sense and explaining how it is shaped by fast, slow, and very slow changes in societal norms and values. Overall, the flow between chapters engages readers by revisiting key concepts and main ideas, helping to maintain the book’s coherence and showing how different topics are interconnected.
The book critiques the utilitarian and ahistorical assumptions about common sense that dominate contemporary AI discourse and imaginaries. This perspective invites a shift in how we approach AI, urging us to rethink how we conceive of, plan for, and operationalise common sense within the realm of AI: seeing it not as a fixed set of rules, but as a dynamic, context-sensitive understanding shaped by experience and social evolution; a common sense that is not rigid but adaptive, much like human reasoning itself.
However, the book doesn’t clearly show how this nuanced understanding of common sense and the shift in thinking (and conceptualization) it advocates could be manifested in AI experts’ professional practice today, nor does it offer alternative AI futures, exploring their potential, challenges, and limitations. In my view, this could have deepened the book’s impact, particularly if a perspective or potential solution had been proposed. For example, in chapter 13 (p. 207), the book introduces multimodal Large Language Models as cutting-edge AI systems, highlighting their lack of common sense, suggesting this problem could be addressed by engineering common sense through simulating embodied experience using Augmented Reality (AR). However, how this engineered ‘pseudo-common sense’ aligns with the dynamic, multi-layered concept of common sense discussed later in the conclusion remains unclear, a gap that also appears in other chapters.
A deeper exploration of the tension between technical and social accounts of common sense could also add valuable philosophical and practical insights to the book. In other words, the book’s argument could have been strengthened by providing more technical detail on advancements in the field of AI and connecting them with the book’s rich discussions of the social, historical, linguistic, and cognitive aspects of common sense. However, I understand that this presents challenges, ontologically and epistemologically, not least because, unlike the well-established Viconian notion of common sense in social research, common sense in AI systems remains elusive to define, making comparisons difficult.
That said, the book offers valuable ideas that can inspire new conversations among AI experts, particularly in the field of responsible AI, with which I’m more familiar. For example, in chapter 7, Ivana Marková introduces ‘language’ as a common-sense epistemology, and ‘action’ as its truth. These ideas offer fresh ways to understand the complex issues underlying human intersubjective meaning-making of common sense about AI. By focusing on the notion of dialogicality discussed in this chapter, researchers can approach ethical and responsible AI from a new perspective — one that shifts ethics and responsibility away from ‘individual rationality’, which is based on neutral, objectivist cognitive perspectives, and instead grounds them in ‘dialogical rationality’ shaped by socio-historical and cultural development. Similarly, in chapter 11, Mikihito Tanaka presents interesting ideas on the co-production of common sense and imaginaries, and how this dynamic influences the direction of social development in society. This could also inspire researchers in the field of responsible AI to explore how the notion of ‘responsibility’ in the responsible AI discourse is constructed, justified, and contested through the lens of common sense.
While the book offers compelling insights into the contextual nature of common sense, particularly in relation to machine learning and robotics, its assumption of familiarity with social science terminology and frameworks may make it hard for readers outside these fields to fully engage with it. The interdisciplinary nature of the book is undoubtedly a strength, but it is not matched by the accessibility of its language, which could limit its reach and impact among the broader audience of researchers and students in computer science, engineering, or data science who may not share the same foundational understanding.
So, if this topic is new to you, or, worse, if your background doesn’t include training in social science, I recommend starting with chapter 3, ‘Giving AI Some Common Sense’ by R. J. Brachman and H. J. Levesque (2024). The co-authors, also behind the book Machines Like Us: Toward AI with Common Sense (2022), offer an accessible account of what common sense is and how it might be built into a machine, from the perspective of two leading AI experts. In my view, the chapter can serve as an entry point to the book’s deep exploration of the sociological, historical, and philosophical dimensions of common sense in AI.
Overall, this book presents a rich collection of views and discussions on AI and common sense, offering valuable insights and answering questions about the past, present, and future of AI and common sense. With its blend of essays and case studies, most chapters provide interesting reading, making the book appealing to researchers, particularly in science and technology studies and science communication.
The co-editors open the final chapter by stating that “the issue of common sense remains unresolved and puzzling”. And in the very last sentence, they end the book by embracing this vagueness and opacity, hoping it “opens up new possibilities for absorbing new meaning, with AI as part of that conversation”. I read this ambiguity as yet another invitation to engage with the complex and evolving questions that AI presents to all of us. Filled with provocative ideas and questions, the book leaves readers with a renewed sense of curiosity, encouraging them not to become complacent, even when the issue seems as ‘obvious’ as ‘common sense’.
About the author
Ehsan Nabavi is a Senior Fellow at Center for Philosophy, Sociology, and History of Science & Technology — The Käte Hamburger Kolleg, RWTH Aachen University. He is also the founder and director of the Responsible Innovation Lab at the Australian National University (ANU) and a Senior Lecturer in Technology and Society at the Australian National Centre for Public Awareness of Science.
E-mail: Ehsan.Nabavi@anu.edu.au