In the past 25 years, school-university partnerships have undergone a transition from ad hoc to strategic partnerships. Over the previous two-and-a-half years we have worked in partnership with teachers and pupils from the Denbigh Teaching School Alliance in Milton Keynes, UK.
Our aims have been to encourage the Open University and local schools in Milton Keynes to value, recognise and support school-university engagement with research, and to create a culture of reflective practice.
Through our work we have noted a lack of suitable planning tools that work for researchers, teachers and pupils. Here we propose a flexible and adaptable metric to support stakeholders as they plan for, enact and evaluate direct and meaningful engagement between researchers, teachers and pupils. The objective of the metric is to make transparent the level of activity required of the stakeholders involved — teachers, pupils and researchers — whilst also providing a measure for institutions and funders to assess the relative depth of engagement; in effect, to move beyond the seductive siren of reach.
Whilst welcoming Jensen’s response to our original paper, we suggest that our main argument may have been missed. We agree that there are many methods for conducting impact assessments in informal settings. However, such tools lie beyond the reach of many practitioners, who lack the budgets, time, and expertise needed to apply them and interpret their findings.
More particularly, we reiterate the importance of challenging the prevailing policy discourse in which longitudinal impact studies are regarded as the ‘gold standard’, and instead call for a new discourse that acknowledges what is feasible and useful in informal sector evaluation practice.
Access to high-quality evaluation results is essential for science communicators to identify negative patterns of audience response and improve outcomes. However, there are many good reasons why robust evaluation is not routinely conducted and linked to science communication practice. This essay begins by identifying some of the common challenges that explain this gap between evaluation evidence and practice. Automating evaluation processes through new technologies is then explicated as one solution to these challenges, capable of yielding accurate real-time results that can directly feed into practice. Automating evaluation through smartphone and web apps tied to open source analysis tools can deliver ongoing evaluation insights without the expense of regularly employing external consultants or hiring evaluation experts in-house. While such automation does not address all evaluation needs, it can save resources and equip science communicators with the information they need to continually enhance practice for the benefit of their audiences.