All of the author's publications are listed below.
There are many different pathways into science communication practice and research. But rarely do these pathways require critical reflection on what it means to be a ‘responsible’ science communicator or researcher. The need for this kind of critical reflection is increasingly salient in a world marked by the wilful disregard of evidence in many high-profile contexts, including politics and, most recently, public health. Responsible science communicators and researchers are audience- and impact-focused, beginning their decision-making process by considering their audiences’ starting positions, needs and values. This article outlines some key considerations for developing social responsibility in science communication as a field, both in terms of practice and research.
Access to high-quality evaluation results is essential for science communicators to identify negative patterns of audience response and improve outcomes. However, there are many good reasons why robust evaluation is not routinely conducted and linked to science communication practice. This essay begins by identifying some of the common challenges that explain this gap between evaluation evidence and practice. Automating evaluation processes through new technologies is then explicated as one solution to these challenges, capable of yielding accurate real-time results that can directly feed into practice. Automating evaluation through smartphone and web apps tied to open-source analysis tools can deliver ongoing evaluation insights without the expense of regularly employing external consultants or hiring evaluation experts in-house. While such automation does not address all evaluation needs, it can save resources and equip science communicators with the information they need to continually enhance practice for the benefit of their audiences.
King et al. argue that ‘emphasis on impact is obfuscating the valuable role of evaluation’ in informal science learning and public engagement (p. 1). The article touches on a number of important issues pertaining to the role of evaluation, informal learning, science communication and public engagement practice. In this critical response essay, I highlight the article’s tendency to construct a straw-man version of ‘impact evaluation’ that is impossible to achieve, while exaggerating the value of the simple forms of feedback-based evaluation exemplified in the article. I also identify a problematic tendency, evident in the article, to view the role of ‘impact evaluation’ in advocacy terms rather than as a means of improving practice. I work through the evaluation example presented in the article to highlight alternative, impact-oriented evaluation strategies, which would have addressed the targeted outcomes more appropriately than the methods used by King et al. I conclude that impact evaluation can be much more widely deployed to deliver essential practical insights for informal learning and public engagement practitioners.
Even in the best-resourced science communication institutions, poor-quality evaluation methods are routinely employed. This leads to questionable data, specious conclusions and stunted growth in the quality and effectiveness of science communication practice. Good impact evaluation requires upstream planning, clear objectives from practitioners, relevant research skills and a commitment to improving practice based on evaluation evidence.