During the last decade, universities have developed policies and infrastructures to support open access to publications, but now it is time to move a step forward. There is an increasing demand for access to the data supporting research results, in order to validate and reproduce them. Universities therefore have to be prepared for this new challenge, which goes beyond dissemination because it requires a strategy for managing research data within institutions. In this paper I offer some hints on how to deal with this challenge, which can be framed within the new open science movement aimed at providing openness across the whole cycle of research.
Open science is the most recent paradigm shift in the practice of science. However, it is a practice that has emerged relatively recently and, as such, its definition is constantly shifting and evolving. This commentary describes the historical background of open science and its current practice, particularly with reference to its relationship with public engagement with research.
This study addresses an open question about science bloggers' self-perceived roles as science communicators. Previous research has investigated the roles science journalists see themselves engaging in, but such research has failed to capture the experiences of science bloggers, a broad and diverse group whose practices often differ markedly from those of professional journalists. In this study, a survey of over 600 science bloggers reveals that, on the broadest level, science bloggers see themselves engaging most often as explainers of science and public intellectuals. Perceived communication roles depend predominantly on occupation, science communication training, blog affiliation and gender.
The ever-changing nature of academic science communication discourse can make it challenging for those not intimately associated with the field, whether scientists and science communication practitioners or newcomers such as graduate students, to keep up with the research. This collection of articles provides a comprehensive overview of the subject and serves as a thorough reference book for students and practitioners of science communication.
This paper reflects on the evaluation of and findings from a nationwide programme of physics engagement activities hosted by 10 science centres across the UK. We discuss our findings indicating the affordances of the programme with reference to the wider literature in order to draw out elements of the project that may be useful for other science learning and engagement initiatives. In particular, we discuss findings that relate to contemporary research and policy interests around the engagement of girls in science, the key ages at which young people’s views may best be influenced, the importance of explicating the nature of ‘real-world’ content and careers, and the value of collaborative partnerships.
This commentary shares the personal ‘learning curve’ of a science communication researcher concerning the impact of (playful) tools and processes for inclusive deliberation on emerging techno-scientific topics, in the contemporary era of two-way science and technology communication practices that responsible research and innovation (RRI) contexts need and desire. Moving from the macro-level impacts that these processes are supposed to have on research and innovation practices and society, as encouraged by the RRI community, the author discovers more about ‘micro-level’ impacts through conversations with peers in her department, Athena (VU University, Amsterdam), and through experiencing the SiP 2015 conference in Bristol. On that basis, she defines several ‘impact-spheres’: a modular set of flexibly defined micro-level impacts that events in RRI contexts can have on both academic and non-academic participants, with respect and relationship development as focal assets to aim for, and individual (micro-)changes that potentially build up towards an ‘RRI world’.
The drive for impact from research projects presents a dilemma for science communication researchers and practitioners: should public engagement be regarded only as a mechanism for providing evidence of the impact of research, or as itself a form of impact? This editorial describes the curation of five commentaries resulting from the recent international conference ‘Science in Public: Research, Practice, Impact’. The commentaries reveal the issues science communicators may face in implementing public engagement with science that has an impact: from planning and co-producing projects with impact in mind, to organising and operating activities which meet the needs of our publics, and finally measuring and evaluating the effects on scientists and publics in order to ‘capture impact’.
King et al. [2015] argue that ‘emphasis on impact is obfuscating the valuable role of evaluation’ in informal science learning and public engagement (p. 1). The article touches on a number of important issues pertaining to the role of evaluation, informal learning, science communication and public engagement practice. In this critical response essay, I highlight the article’s tendency to construct a straw man version of ‘impact evaluation’ that is impossible to achieve, while exaggerating the value of simple forms of feedback-based evaluation exemplified in the article. I also identify a problematic tendency, evident in the article, to view the role of ‘impact evaluation’ in advocacy terms rather than as a means of improving practice. I go through the evaluation example presented in the article to highlight alternative, impact-oriented evaluation strategies, which would have addressed the targeted outcomes more appropriately than the methods used by King et al. [2015]. I conclude that impact evaluation can be much more widely deployed to deliver essential practical insights for informal learning and public engagement practitioners.
Access to high-quality evaluation results is essential for science communicators to identify negative patterns of audience response and improve outcomes. However, there are many good reasons why robust evaluation is not routinely conducted and linked to science communication practice. This essay begins by identifying some of the common challenges that explain this gap between evaluation evidence and practice. Automating evaluation processes through new technologies is then presented as one solution to these challenges, capable of yielding accurate real-time results that can directly feed into practice. Automating evaluation through smartphone and web apps tied to open-source analysis tools can deliver ongoing evaluation insights without the expense of regularly employing external consultants or hiring evaluation experts in-house. While such automation does not address all evaluation needs, it can save resources and equip science communicators with the information they need to continually enhance practice for the benefit of their audiences.
In the past 25 years, school-university partnerships have undergone a transition from ad hoc to strategic partnerships. Over the past two and a half years we have worked in partnership with teachers and pupils from the Denbigh Teaching School Alliance in Milton Keynes, UK.
Our aims have been to encourage the Open University and local schools in Milton Keynes to value, recognise and support school-university engagement with research, and to create a culture of reflective practice.
Through our work we have noted a lack of suitable planning tools that work for researchers, teachers and pupils. Here we propose a flexible and adaptable metric to support stakeholders as they plan for, enact and evaluate direct and meaningful engagement between researchers, teachers and pupils. The objective of the metric is to make transparent the level of activity required of the stakeholders involved — teachers, pupils and researchers — whilst also providing a measure for institutions and funders to assess the relative depth of engagement; in effect, to move beyond the seductive siren of reach.