1 Introduction
Evaluating science communication activities and projects is crucial to building an evidence base for the quality and impact of science communication. In recent years, this has been recognised by science communication researchers [Jensen & Gerber, 2020; Volk & Schäfer, 2024; Weitkamp, 2015; Ziegler et al., 2021], funders and research organisations [King et al., 2015], and practitioners [Adhikari et al., 2019; Grand & Sardo, 2017]. Through evaluation, funders, research organisations and practitioners demonstrate societal impact, show accountability for research spending and justify the existence of communications teams within an organisation [Sörensen et al., 2024].
However, despite this widespread attention to evaluation, and despite a growing number of peer-reviewed publications and resources covering everything from theoretical frameworks, methods and practices to criteria for good evaluation and how to achieve it [Jensen & Gerber, 2020; Impact Unit, 2023; Science Foundation Ireland, 2015; Volk & Schäfer, 2024], rigorous evaluation designs are not applied to most science communication activities and projects [Jensen, 2014; Volk, 2024]. Several studies have identified factors that hinder evaluation by communications teams in scientific organisations: lack of budget, personnel or time; non-supportive organisational cultures or leadership; a lack of measurable objectives, or overly ambitious communication objectives; lack of evaluation skills; and the perception of evaluation as a threat to the communications team [Bühler et al., 2007; Aguiar et al., 2024; Jensen et al., 2022; Pellegrini, 2021; Sörensen et al., 2024].
In this practice insight, we assess whether heads of communications at three types of scientific organisation in Germany, Portugal, Switzerland and the United Kingdom share this view, and explore which research-practice collaborations might overcome the obstacles to evaluating science communication. To do so, the first author, a science communication practitioner with 20 years’ experience, interviewed 10 leaders of communications teams about the value of, and obstacles to, evaluation, and about the extent to which research does and/or could inform, support and guide their evaluation practices. The communications leaders were also asked what type of research-practice interfaces would facilitate “good” evaluation in their contexts. As a primer for these reflections, we used a recent essay by Volk and Schäfer [2024] that, drawing on prior scholarship, summarises the characteristics of “good” evaluation and presents recommendations for improved research and practice.
2 Getting to “good” evaluation. What does research say?
Evaluation of science communication is understood as a systematic assessment of the success or failure of an activity against predefined objectives, using quantitative and qualitative social science research methods. It can serve as a means of demonstrating success and accountability, identifying crises, enabling learning processes and making decisions, as well as of optimising or adapting communication processes and planning [Pellegrini, 2021]. Based on an analysis of the many discussions about evaluation in the fields of science communication and informal science education, Volk and Schäfer [2024] have summarised the characteristics of “good” evaluation. We briefly introduce them here and provide a more detailed description in Appendix A.
Volk and Schäfer identify four characteristics of “good” evaluation: it should 1) be holistic, evaluating inputs, outputs, outcomes, and impacts; 2) use mixed methods; 3) be carried out at multiple time points; and 4) be fitted to the target audience and format.
They also propose seven recommendations whereby researchers and practitioners may advance evaluation practice and research (details in Appendix A): 1) more, and more robust, evaluations; 2) more demand and support for evaluation from funders, scientific institutions and policy-makers; 3) shared evaluation standards; 4) more capacity building, encompassing hands-on evaluation templates, guides, training and networking opportunities; 5) responsibility towards participants, scientific organisations and funders; 6) refined and realistic impact measures; 7) open access to evaluation data.
Implementing evaluations in organisations, integrating them into teams’ workflows and finding the necessary time and resources for evaluation can, of course, be challenging. In their article, Volk and Schäfer caution that the characteristics and recommendations should not be seen as prescriptive, but rather as guiding principles to inform practice. Not all of them can be, or need to be, implemented by all organisations at all times. Instead, organisations and practitioners should adopt a critical approach, tailoring the characteristics and recommendations for good evaluation to their specific organisational, structural and operational contexts.
3 Bringing in the voices of practitioners
We carried out semi-structured interviews with 10 Heads of Communications/Research Engagement/Public Engagement at research organisations in four countries with different science communication cultures [Mejlgaard et al., 2012]: Germany, Portugal, Switzerland and the United Kingdom (see Appendix B for more information on the sampling, interview procedures [Schreier, 2012] and content analysis [Rädiker & Kuckartz, 2019]). All the interviewees were in managerial positions, with public communication of science or research as their main responsibility.
The organisations fall into three types: Intergovernmental Research Organisation (IRO), University and Research Institute. All carry out scientific research, but they differ in their research scope, geographical presence and governance structures (Table 1). IROs generally cover areas of Big Science, entailing the management of large, international infrastructures, facilities and research programmes. Universities are institutions of research and higher education, covering several academic fields. Research Institutes are primarily focused on research, mostly in a specific field; they usually confer degrees through associated universities rather than directly.
4 What do communications leaders say about getting to “good” evaluations?
4.1 Evaluation seen as important for measuring success and impact, and for improving future activities
Communications leaders described evaluation as a way to “understand the success of a project or initiative, to see whether or not it has worked out the way you wanted it to” (Research Institute, UK) and also to “really measure the impact of [our] science communication activities” (University, Switzerland) on “the target audience that we intend to reach” (University, Portugal). One interviewee emphasised the importance of evaluations being “aligned with [our] key goals. So [that we] not only measure because [we] can, but because it helps [us] to do it well” (IRO-EMBL).
Evaluation was often described as “assessing conceptions of [our] audiences, the changes in conceptions and then [our] impact on this change” (IRO-CERN), and explicitly as measuring outcomes rather than outputs:
They [Chartered Institute of Public Relations] are very clear that we’re no longer measuring outputs. We’re measuring the outcomes, so the change. What have audiences done with this awareness of the research? — University, UK.
Evaluation was also seen as a means of learning — for individuals and the team — with a view to improving future activities, and therefore “part of the process and not something that is its end” (Research Institute, Portugal).
I think it’s also a tool that we can use to help improve what we do next. So it’s about learning from that and making sure that that’s built into the next project and into what we do. — Research Institute, UK.
The understanding and importance of evaluation expressed by the communications leaders in this study is very much in line with previous studies [Grand & Sardo, 2017; Sörensen et al., 2024; Ziegler et al., 2021].
4.2 Agreement that evaluations should be holistic and multi-method
Interviewees considered the characteristics of “good” evaluation and the recommendations to advance evaluation practice and research proposed by Volk and Schäfer [2024] to be very relevant to their practice.
The characteristics — being holistic, using mixed methods, covering different time-points and adapting to audiences and formats — were described as “a very good way of approaching evaluation” (Research Institute, Switzerland), “exactly what you want to do” (IRO-EUROfusion) and as something that “would effectively allow good evaluation” (IRO-CERN). However, two communications leaders reflected on the need to decide which characteristics of “good” evaluation to apply and when, resonating with Volk and Schäfer’s [2024] caution against treating their characteristics and recommendations as prescriptive:
There is, for me, a difference between day-to-day activities and large-scale projects, between activities for particular audiences with — say — a public engagement goal, and organisational communications where the goals sometimes simply include to have said that something happened. I would struggle to go through a comprehensive pre- and post-evaluation every time. — IRO-EMBL
All communications leaders acknowledged the limitations of quantitative output measurements, such as the number of media clippings, social media reach and engagement, website traffic and visitor numbers. As described in previous studies [Aguiar et al., 2024; Sörensen et al., 2024], quantitative output monitoring is regularly carried out by their teams, but, as one put it, “I didn’t think that just listing every single article we published was necessarily that helpful of an indicator as to whether or not we were raising awareness and achieving goals and strategically supporting the goals of the programme” (IRO-EUROfusion). In-depth, holistic evaluations were considered to more fully assess the impact of their communications on the target audiences, but “also the bit that is extremely hard to measure” (IRO-EMBL) and even “impossible to actually measure who we are reaching and what change, if any, we are making” (University, Switzerland).
Overall, the characteristics were familiar to the interviewees. Some communications leaders recognised elements from the fields of marketing and public relations, and also from resources provided by professional organisations: “this is the process whereby the National Coordinating Centre for Public Engagement has structured their evaluation, so we use a logic model to think about what we’re trying to do and plan out our evaluation” (Research Institute, UK).
Most of the communications leaders also described having implemented some or all of the characteristics of good evaluation, such as using mixed methods: “We collect quantitative and qualitative metrics — it’s more time-consuming but richer” (University, UK). Implementation was often described as organic, “through being in science communication for a few years [now], and having learnt from experience” (University, Portugal), and informal: “in one way or another, [we] of course all do this anyway, although not necessarily based on the latest research results” (IRO-EuropeanXFEL).
4.3 Need for capacity and support from organisations and funders
Regarding Volk and Schäfer’s [2024] recommendations for improved evaluation practice and research, four communications leaders considered capacity building (Recommendation #4) to be the priority, namely “shared guides and toolkits, trying to train in the methods, and also have network opportunities [with social science teams]” (University, Switzerland). More demand for and support of evaluation by scientific organisations and funders (Recommendation #2) was a priority for three communications leaders. This recommendation was often seen as a way of making support for science communication visible at the highest level:
I think that if you approach the funding agencies you might have a chance, because you could obtain external funding to do this kind of work, since funding for this within research institutions or universities is very often not given. And it would improve the standing of science communication monitoring. — Research Institute, Switzerland.
The recommendation for shared standards (#3) was highlighted as a priority by two interviewees. A role for EIROforum (the network of eight European IROs) was proposed, to put “something like this [shared standards] in place, because it would give it the high-level visibility, support and awareness that is necessary for these things to actually occur” (IRO-EUROfusion).
Responsible evaluation (Recommendation #5) was considered by one of the communications leaders as “incredibly important. And there’s not been enough visibility of that.” (Research Institute, UK).
Mirroring the views expressed by practitioners in previous studies [Ziegler et al., 2021], the recommendation for open evaluation data (#7) by Volk and Schäfer [2024] raised concerns for several of the communications leaders. In a context where “research institutions and universities are basically in competition with each other” (Research Institute, Switzerland) having evaluation data completely open could increase pressure on already stretched communications teams to deliver to imposed targets and levels:
We speak at conferences to share knowledge, and as much as we can be transparent, we are. Making the evaluation data public in an increasingly competitive market — is it feasible? It’s sharing to a point. — University, UK.
4.4 Lack of time, resources and expertise described as main obstacles to evaluation
Although quantitative output monitoring (e.g. online reach) was acknowledged by the interviewees as being limited, in-depth evaluations of cognitive, affective and behavioural changes among target audiences occur only rarely, regardless of the type of organisation or country. The communications leaders we interviewed reported using in-depth evaluations for projects of greater strategic relevance, such as Open Days or assessments of public attitudes, or “when we have a combination of means, including time and then also some funds [for evaluation]” (Research Institute, Portugal), or when the communications leader directly drove the evaluation: “I brought the people together and led the process [of evaluation]” (Research Institute, UK).
Lack of time, limited resources (encompassing budget and personnel) and lack of dedicated expertise in the teams were described as the main obstacles to evaluation, in particular to in-depth evaluation. These resonate with findings of previous studies, many of them focused on a specific country or type of organisation [Bühler et al., 2007; Aguiar et al., 2024; Impact Unit, 2023; Jensen et al., 2022; King et al., 2015; Sörensen et al., 2024; Volk, 2024; Weitkamp, 2015; Ziegler et al., 2021]. Our findings suggest that the barriers cut across organisations: national and intergovernmental, universities and field-focused research institutes, in different countries, with large or small communications teams.
Four communications leaders connected their perceived lack of time for evaluation to the low priority given to evaluation, either by themselves, by the organisation’s leadership or by funders:
You may, of course, argue that a complete evaluation of all activities is so important that you should find the time to do that. If I look back, we did evaluate key activities, but for a full evaluation there were other things that seemed to be more important. — IRO-EuropeanXFEL.
On an organisational level, the perception was expressed that “there is no evaluation responsibility generally speaking in communications teams, or in some cases it would fall under ‘audit’, which is a different perspective” (IRO-CERN), as well as uncertainty regarding the extent to which organisation leadership teams read or consider evaluation reports in their decision-making.
This absence of demand for, and use of, evaluation results seems to lead to a lack of incentive to carry out ambitious evaluations, including with regard to the career progression of the practitioners themselves.
I think time becomes an issue because it’s not prioritised, because you ideally want someone and groups that you know are going to use and find that information interesting and adopt it. And what benefit is there for me going through the time and effort to do that? (…) People don’t get any thanks for doing evaluation. It’s not something that’s applauded and recognised as much as if you’re telling the story [about the activity]. — Research Institute, UK.
Across all the organisations, independently of the size of the team, communications leaders described how the team’s focus had to be on delivering activities. No organisation had a team member fully and exclusively dedicated to evaluation, so evaluation was mostly an additional task carried out by team members or, in specific cases, by temporary external collaborators, and thus “really a matter of priorities. Do you make the next evaluation of the last event, or do you write new media releases and science stories for our channels?” (University, Switzerland). Concerning budgets, in a context where communication activities compete for funding with research, communications leaders are also often faced with questions about whether “maybe it’s better to hire some researchers for the same money” (Research Institute, Switzerland).
4.5 Evaluation research and resources seen as useful, particularly when applicable to practice — but not well known
Seven of the communications leaders were explicitly aware of evaluation resources (i.e. toolkits, guides, templates) produced by European projects, by funding agencies and organisations such as the German Federal Association of Communicators, Wissenschaft im Dialog, the UK’s Chartered Institute of Public Relations (CIPR) and the National Coordinating Centre for Public Engagement (NCCPE), and four had used such resources, often in combination with courses. Although toolkits were described as sometimes “extremely theoretical and not tools at all, not useful for practitioners” (University, Portugal), all communications leaders considered evaluation resources to be useful for their practice, as “a way towards a more standardised approach, for comparison within [our] own organisation across time, but also between organisations that share common goals or perhaps common audiences” (IRO-EMBL).
Research about science communication evaluation was similarly described as helpful to “establish a framework by which we could evaluate our projects, one used by other science communicators too” (IRO-CERN). Research was also seen as useful for learning from negative results:
It’s good for me because I learn a lot from those papers. Once I learned a lot from a negative science communication paper, and I cite it all the time because they did something that I thought of doing. — University, Portugal.
Notably, however, none of the communications leaders felt fully up-to-date with or knowledgeable about research findings. Although they have “seen papers on the [evaluation] topic” (IRO-EUROfusion), and are “broadly aware of research on evaluation” (IRO-EMBL), they “don’t regularly read that [JCOM journal and so on] because it’s a matter of workload” (University, Switzerland) and “just didn’t have the time” (Research Institute, Switzerland). Several expressed the hope of being able to read the papers at some point:
I have Google alerts to be informed of all papers that come out in science communication research. And every time I get that Google alert, I think I’m going to read all the papers in it. But I never get the time to [read them]. I saved them all so I can read them later, hopefully, but it’s really hard. — University, Portugal.
Although several organisations had worked with either science communication researchers (four teams) or agencies (six teams) on evaluation projects, only communications leaders with a Public Engagement in Science background described working with researchers. They also appeared to have greater awareness of research findings than those with a corporate communications background and described efforts to keep team members abreast of research and resources. These included sharing published papers with team members, sending team members to courses, and organising discussion groups on evaluation.
At the same time, the communications leaders with experience of working with researchers mentioned the importance of research being relevant. Research was perceived as sometimes “covering ground that’s already been covered because of the nature of how people can get projects and do that work to publish” (Research Institute, UK) or as being out of step with the pace of practice, e.g. in the context of AI.
It’s useful to understand what the research shows or what is being researched. However, in practice, you often need to just get things done. You cannot wait for something to be fully researched first. — IRO-EMBL.
This perception that research is not always directly relevant to practice has been identified in previous studies [Han & Stenhouse, 2015; Scheufele, 2022].
Some of the communications leaders who had taken part in in-depth evaluations described struggling with the underlying assumptions, methodology and conclusions. This points to cultural differences between practice and social science-based research:
For someone coming from using animal models and cells, I find that there are many confounding variables that I am always questioning. I remember the first times we were building a theory of change, I thought ‘how can I make sure that this thing is the one triggering that [effect], because there’s a lot going on in those people’s lives’. — Research Institute, Portugal.
4.6 Researcher-practitioner partnerships and tailored research communication can help evaluation practice
All the communications leaders we interviewed were eager to work with researchers to increase and improve their evaluation practice — and they had several ideas on how this could be achieved:
Practitioner-researcher matchmaking Communications leaders would like practitioners to be supported in discovering and contacting the research community. This was seen as crucial to facilitating more and better networking opportunities — “a kind of a matchmaking organisation” (Research Institute, UK) whereby practitioners could find out “are there researchers near us? What are they doing? Where are they, and could they be interested in our work for their studies?” (IRO-CERN), and “discuss with the science communication [research] teams how we can go forward in this evaluation issue” (University, Switzerland). This suggestion connects with recommendations #1 (more and better evaluations) and #4 (capacity building) in Volk and Schäfer [2024]. The latter was considered a priority by four of the ten communications leaders in this study, underscoring the importance that the leaders give to networking opportunities. Knowing researchers would be a stepping-stone for subsequent partnerships, which were also proposed by the communications leaders we interviewed.
Embedded partnerships Communications leaders saw a role for researchers in evaluating their current communication practice, by helping with “insights about what they [metrics] are not measuring: is there something that we should consider measuring that we’re not measuring in terms of how people are changing their thinking?” (University, UK). Some interviewees described an audit, “internal, like in self-reflection, on what are we doing well and where is there room for improvement?” (IRO-EMBL), and said that researchers could “help to measure the long-term impact” (Research Institute, Switzerland). There are many examples of independent external evaluations by science communication researchers, and they have been identified as a component of the recommendation for more and better evaluations (Recommendation #1) [Volk & Schäfer, 2024]. This approach gives researchers the possibility to collect or access data from real-world communication activities, allowing them to analyse and publish from it [see examples in Volk & Schäfer, 2024]. It would certainly also help fill the gap in evaluation expertise within communications teams. However, it may place researchers in the position of service providers, which clashes with research’s goal of contributing to the scientific evidence base for science communication.
To address this potential clash of goals, several communications leaders highlighted the importance of embedding researchers in their practice right from the design phase, to “decide together on the objectives of the project, and therefore how evaluation should be set up. So that it is also interesting for researchers to use the project in their studies and publications” (IRO-CERN). If researchers were to “work [with practitioners], to know about our projects and to help [practitioners] better design the way to evaluate them, and then evaluate them [with practitioners]” (University, Portugal), researchers and practitioners would build the evaluation of the activity together. In doing so, they could also overcome some of the cultural differences between practice and research brought up in our interviews and also described in previous studies [Dvorzhitskaia et al., 2024; Peterman et al., 2021].
Whenever we had the means to do these more in depth [evaluations], I felt that we were always explaining the context, because the person was very detached from the context of those schools, and some of the things being proposed didn’t make much sense. — Research Institute, Portugal.
Embedded partnerships must balance an organisation’s promotional goals (guided by a rationale of self-interest) with dialogic/societal conversation goals (guided by a “public good” rationale) [Entradas et al., 2023; Weingart & Joubert, 2019]. In doing so, they potentially create a shared meaning of “openness” that could help overcome some of the concerns expressed by communications leaders around the recommendation to make the results of evaluations publicly accessible (Recommendation #7) [Volk & Schäfer, 2024].
Working with researchers was also described as legitimising science communication within research organisations.
I think the science of science communication is very helpful in showing other scientists that this [science communication] is actually a real topic. If you give it a name and present it as a scientific topic — “there’s also research on this” — it becomes more understandable to them. — Research Institute, Switzerland.
Adapt funding modes accordingly The communications leaders recognised several obstacles to establishing embedded partnerships. Funding models for science communication do not foster this type of collaboration at the project-shaping stage: “when you are applying to a grant, you need to have everything defined, so the shaping was never done in collaboration with people with this [evaluation] expertise” (Research Institute, Portugal). Furthermore, the two communities (practitioners and researchers) have different priorities, “because researchers want their own autonomy about what they’re researching and what their focus is, but you’ve got to focus on the project as well. So it’s a challenge to bring the two together” (Research Institute, UK). Effective embedded partnerships therefore require a space within communication activities for knowledge exchange and boundary-spanning between practitioners and researchers [Peterman et al., 2021]. This requires time and resources: two elements that research organisations and funders can provide, thus meeting Volk and Schäfer’s [2024] recommendation for greater support for (and demand of) evaluation by funders and organisations (Recommendation #2), which several of the communications leaders we interviewed considered a priority.
Tailored research information for practitioners The communications leaders also proposed practitioner-targeted sharing of research findings, with frequency and format following a “less is more” approach. This would address the main obstacle to both carrying out evaluation and staying informed about scholarly research — time.
If they gave regular updates about what they’ve worked on and the key findings in a very, very simple, high-level way: here’s the Top Ten things we learned this year. Make it really, really simple so that we have somewhere to start with. And then we can dive in from there. Because realistically, I don’t think anybody has time, unless it’s really of interest to them. — IRO-EUROfusion.
Although not mentioned in the interviews, we highlight another aspect of tailoring: the content itself. The language used in scholarly publications is an obstacle to many practitioners who do not have a social science background; the Top Ten would therefore need to be presented as “practitioner summaries”, similar to the lay summaries now widely used by peer-reviewed journals, highlighting the relevance of the research to practitioners.
5 Conclusion and outlook
This practice insight identifies several themes around the evaluation of science communication activities that cut across different types of organisation (national and intergovernmental, universities and research institutes), countries and sizes of communications teams. All the interviewed leaders of communications teams seem to face the same obstacles to carrying out “good” evaluation, which are also those identified in previous country-specific studies. Research and resources are considered valuable for achieving “good” evaluation, and communications leaders have a strong interest in collaborating with researchers for more, rigorously designed evaluations.
The interviewees made several proposals to bridge the gap with the research community. These resonate with and extend the recommendations based on Volk and Schäfer’s [2024] research overview, e.g. researcher-practitioner matchmaking platforms and practitioner-tailored information. Figure 1 summarises the four proposals, highlighting how implementing them will require the involvement of research organisations, national and international funders (including through EU-funded projects), science communication networks and professional associations.
Research organisations and funders should make in-depth evaluations a priority, working with communications teams and researchers to define which projects and activities to evaluate. They should act on the evaluation outcomes and put in place recognition and rewards for the evaluation of science communication, including for career progression. Science communication networks and professional associations should take the lead in setting up match-making platforms for researchers and practitioners (complementing the toolkits and guides some of them already produce) and in providing selected research briefings relevant to practitioners, possibly with the help of generative AI.
Our findings contribute to the wider debate about the relationship and perceived disconnect between science communication practice and research, beyond evaluation [Anjos et al., 2021; Fischer et al., 2024; Jensen & Gerber, 2020; Peterman et al., 2021; Scheufele, 2022]. We argue that in all areas of science communication, if practitioners are not simply the recipients of research findings and researchers are embedded partners rather than service providers, science communication benefits overall, both in terms of organisations’ visibility and public engagement in science.
References
Adhikari, B., Hlaing, P. H., Robinson, M. T., Ruecker, A., Tan, N. H., Jatupornpimol, N., Chanviriyavuth, R., & Cheah, P. Y. (2019). Evaluation of the Pint of Science festival in Thailand. PLoS ONE, 14(7), e0219983. https://doi.org/10.1371/journal.pone.0219983
Aguiar, C. M. G., Salles Filho, S. L. M., Pereira, S. P., & Colugnati, F. A. B. (2024). Are we on the right path? Insights from Brazilian universities on monitoring and evaluation of Public Communication of Science and Technology in the digital environment. JCOM, 23(06), A01. https://doi.org/10.22323/2.23060201
Anjos, S., Russo, P., & Carvalho, A. (2021). Communicating astronomy with the public: perspectives of an international community of practice. JCOM, 20(03), A11. https://doi.org/10.22323/2.20030211
Bühler, H., Naderer, G., Koch, R., & Schuster, C. (2007). Hochschul-PR in Deutschland: Ziele, Strategien und Perspektiven [Public relations in German higher education institutions: goals, strategies, and perspectives]. Deutscher Universitätsverlag Wiesbaden. https://doi.org/10.1007/978-3-8350-9148-1
Dvorzhitskaia, D., Zamora, A., Sanders, E., Verheyden, P., & Clerc, J. (2024). Exhibition research and practice at CERN: challenges and learnings of science communication ‘in the making’. JCOM, 23(02), N01. https://doi.org/10.22323/2.23020801
Entradas, M., Marcinkowski, F., Bauer, M. W., & Pellegrini, G. (2023). University central offices are moving away from doing towards facilitating science communication: a European cross-comparison. PLoS ONE, 18(10), e0290504. https://doi.org/10.1371/journal.pone.0290504
Fischer, L., Barata, G., Scheu, A. M., & Ziegler, R. (2024). Connecting science communication research and practice: challenges and ways forward. JCOM, 23(02), E. https://doi.org/10.22323/2.23020501
Grand, A., & Sardo, A. M. (2017). What works in the field? Evaluating informal science events. Frontiers in Communication, 2, 22. https://doi.org/10.3389/fcomm.2017.00022
Han, H., & Stenhouse, N. (2015). Bridging the research-practice gap in climate communication: lessons from one academic-practitioner collaboration. Science Communication, 37(3), 396–404. https://doi.org/10.1177/1075547014560828
Impact Unit. (2023). Evaluation and impact in science communication: results of a community survey. November/December 2023. Wissenschaft im Dialog. Berlin, Germany. https://impactunit.de/wp-content/uploads/2024/04/WiD_ImpactUnit_CommunitySurvey2023.pdf
Jensen, E. (2014). The problems with science communication evaluation. JCOM, 13(01), C04. https://doi.org/10.22323/2.13010304
Jensen, E. A., & Gerber, A. (2020). Evidence-based science communication. Frontiers in Communication, 4, 78. https://doi.org/10.3389/fcomm.2019.00078
Jensen, E. A., Wong, P., & Reed, M. S. (2022). How research data deliver non-academic impacts: a secondary analysis of UK Research Excellence Framework impact case studies. PLoS ONE, 17(3), e0264914. https://doi.org/10.1371/journal.pone.0264914
King, H., Steiner, K., Hobson, M., Robinson, A., & Clipson, H. (2015). Highlighting the value of evidence-based evaluation: pushing back on demands for ‘impact’. JCOM, 14(02), A02. https://doi.org/10.22323/2.14020202
Mejlgaard, N., Bloch, C., Degn, L., Nielsen, M. W., & Ravn, T. (2012). Locating science in society across Europe: clusters and consequences. Science and Public Policy, 39(6), 741–750. https://doi.org/10.1093/scipol/scs092
Pellegrini, G. (2021). Evaluating science communication: concepts and tools for realistic assessment. In M. Bucchi & B. Trench (Eds.), Routledge handbook of public communication of science and technology (3rd ed., pp. 305–322). Routledge. https://doi.org/10.4324/9781003039242
Peterman, K., Garlick, S., Besley, J., Allen, S., Fallon Lambert, K., Nadkarni, N. M., Rosin, M. S., Weber, C., Weiss, M., & Wong, J. (2021). Boundary spanners and thinking partners: adapting and expanding the research-practice partnership literature for public engagement with science (PES). JCOM, 20(07), N01. https://doi.org/10.22323/2.20070801
Rädiker, S., & Kuckartz, U. (2019). Analyse qualitativer Daten mit MAXQDA: Text, Audio und Video [Analyzing qualitative data with MAXQDA: text, audio, and video]. Springer. https://doi.org/10.1007/978-3-658-22095-2
Scheufele, D. A. (2022). Thirty years of science-society interfaces: what’s next? Public Understanding of Science, 31(3), 297–304. https://doi.org/10.1177/09636625221075947
Schreier, M. (2012). Qualitative content analysis in practice. SAGE Publications. https://doi.org/10.4135/9781529682571
Science Foundation Ireland. (2015). Evaluation toolkit. https://www.sfi.ie/engagement/guidance/
Sörensen, I., Volk, S. C., Fürst, S., Vogler, D., & Schäfer, M. S. (2024). “It’s not so easy to measure impact”: a qualitative analysis of how universities measure and evaluate their communication. International Journal of Strategic Communication, 18(2), 93–114. https://doi.org/10.1080/1553118x.2024.2317771
Volk, S. C. (2024). Assessing the outputs, outcomes, and impacts of science communication: a quantitative content analysis of 128 science communication projects. Science Communication, 46(6), 758–789. https://doi.org/10.1177/10755470241253858
Volk, S. C., & Schäfer, M. S. (2024). Evaluations in science communication. Current state and future directions. JCOM, 23(06), Y01. https://doi.org/10.22323/2.23060401
Weingart, P., & Joubert, M. (2019). The conflation of motives of science communication — causes, consequences, remedies. JCOM, 18(03), Y01. https://doi.org/10.22323/2.18030401
Weitkamp, E. (2015). Between ambition and evidence. JCOM, 14(02), E. https://doi.org/10.22323/2.14020501
Ziegler, R., Hedder, I. R., & Fischer, L. (2021). Evaluation of science communication: current practices, challenges, and future implications. Frontiers in Communication, 6, 669744. https://doi.org/10.3389/fcomm.2021.669744
About the authors
Ana Godinho is Head of Communications and Engagement at the European Spallation Source (ESS) in Sweden. She has held similar roles at CERN and research organisations in Portugal and the UK. Ana has a Ph.D. in Developmental Neurobiology (University of London) and a Masters in Science Communication (Open University).
E-mail: ana.godinho@ess.eu Bluesky: @apmfgodinho
Sophia Charlotte Volk is a Professor of Strategic Communication at the Department of Media and Communication (IfKW) at LMU Munich. Previously, she was a Senior Research and Teaching Associate at the Department of Communication and Media Research (IKMZ) at the University of Zurich (Switzerland) and a Research Associate at the Chair of Strategic Communication at Leipzig University (Germany). Her research interests include science and university communication, evaluation and impact measurement, strategic communication, digital media environments and technologies like artificial intelligence, and information abundance and overload.
E-mail: s.volk@lmu.de Bluesky: @sophiavolk
Mike S. Schäfer is Full Professor of Science Communication at IKMZ — Department of Communications and Media Research, and director of the Center for Higher Education and Science Studies (CHESS) at the University of Zurich (Switzerland).
E-mail: m.schaefer@ikmz.uzh.ch Bluesky: @mss7676
Supplementary material
Available at https://doi.org/10.22323/169120251104103513
Appendix A. Characteristics and recommendations for “good” evaluations
Appendix B. Methods