Publications including this keyword are listed below.
Success stories of citizen science projects widely demonstrate the value of this open science paradigm and encourage organizations to shift towards new ways of doing research. While the benefits for researchers are clear, the outcomes for the individuals who participate in these projects are harder to assess. The wide spectrum of volunteers collaborating in citizen science projects makes evaluating project outcomes especially difficult. Given the strong links between many citizen science projects and education, in this work we present an experience with hundreds of students (aged 15–18) from two different countries who participated in a cell biology research project, Cell Spotting, as part of their regular classroom activities. Apart from introducing the project and the resources involved, we aim to provide an overview of the benefits of integrating citizen science into formal science education and of what teachers and students may gain from it. In this case, besides helping students consolidate and apply theoretical concepts from the school curriculum, the project also fostered other types of informal learning, such as the feeling of playing a key role, which contributed to an increase in students' motivation.
Volume 15 • Issue 01 • 2016 • Special Issue: Citizen Science, Part I
Teaching mathematics in informal settings is a relatively new phenomenon, but it has gained more attention due to recent changes in society. The aim of the present quantitative study was to compare the learning outcomes of Latvian and Swedish 12-year-olds who visited a science centre mathematics-art exhibition originally designed in Estonia. The results showed that, in general, prior knowledge of the exhibition contents was the strongest predictor of post-test results in both countries, but that mathematical thinking skills and self-concept had a small added value in explaining the post-test results. The study provides some of the first evidence of the effectiveness of out-of-school mathematics teaching in a science exhibition context, and a good basis for further studies.
In the past 25 years, school-university partnerships have undergone a transition from ad hoc to strategic partnerships. Over the past two and a half years we have worked in partnership with teachers and pupils from the Denbigh Teaching School Alliance in Milton Keynes, UK.
Our aims have been to encourage the Open University and local schools in Milton Keynes to value, recognise and support school-university engagement with research, and to create a culture of reflective practice.
Through our work we have noted a lack of suitable planning tools that work for researchers, teachers and pupils. Here we propose a flexible and adaptable metric to support stakeholders as they plan for, enact and evaluate direct and meaningful engagement between researchers, teachers and pupils. The objective of the metric is to make transparent the level of activity required of the stakeholders involved — teachers, pupils and researchers — whilst also providing a measure for institutions and funders to assess the relative depth of engagement; in effect, to move beyond the seductive siren of reach.
Whilst welcoming Jensen’s response to our original paper, we suggest that our main argument may have been missed. We agree that there are many methods for conducting impact assessments in informal settings. However, using such tools is beyond the capacity of many practitioners, who have limited budgets, limited time, and limited expertise with which to interpret findings.
In particular, we reiterate the importance of challenging the prevailing policy discourse in which longitudinal impact studies are regarded as the ‘gold standard’, and instead call for a new discourse that acknowledges what is feasible and useful in informal-sector evaluation practice.
Access to high-quality evaluation results is essential if science communicators are to identify negative patterns of audience response and improve outcomes. However, there are many good reasons why robust evaluation is not routinely conducted and linked to science communication practice. This essay begins by identifying some of the common challenges that explain this gap between evaluation evidence and practice. Automating evaluation processes through new technologies is then put forward as one solution to these challenges, capable of yielding accurate real-time results that can feed directly into practice. Automating evaluation through smartphone and web apps tied to open-source analysis tools can deliver ongoing evaluation insights without the expense of regularly employing external consultants or hiring evaluation experts in-house. While such automation does not address all evaluation needs, it can save resources and equip science communicators with the information they need to continually enhance practice for the benefit of their audiences.
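As a minimal illustrative sketch of the kind of automation the essay envisages, the hypothetical Python snippet below aggregates app-collected audience ratings into a real-time summary that could feed a live dashboard. The data shape, field names, and the negative-rating threshold are all assumptions made for illustration, not the essay's actual tooling.

```python
# Hypothetical sketch: aggregate app-collected audience ratings into the
# real-time figures a science communicator might check after each session.
# The Response schema and the <=2 "negative" threshold are illustrative
# assumptions, not taken from the essay.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from collections import Counter

@dataclass
class Response:
    timestamp: datetime
    rating: int          # e.g. a 1-5 Likert item: "How engaging was the exhibit?"
    visitor_group: str   # e.g. "school", "family", "adult"

def summarise(responses: list[Response]) -> dict:
    """Reduce raw app submissions to a compact evaluation summary."""
    if not responses:
        return {"n": 0}
    ratings = [r.rating for r in responses]
    return {
        "n": len(responses),
        "mean_rating": round(mean(ratings), 2),
        # Surface the negative response patterns the essay says evaluation
        # should help practitioners identify.
        "share_negative": round(sum(r <= 2 for r in ratings) / len(ratings), 2),
        "by_group": dict(Counter(r.visitor_group for r in responses)),
    }

if __name__ == "__main__":
    demo = [
        Response(datetime(2016, 3, 1, 10, 5), 5, "school"),
        Response(datetime(2016, 3, 1, 10, 9), 2, "family"),
        Response(datetime(2016, 3, 1, 10, 12), 4, "school"),
    ]
    print(summarise(demo))  # e.g. {'n': 3, 'mean_rating': 3.67, ...}
```

In practice such a summary would be recomputed automatically as submissions arrive, which is what lets results feed into practice without an external consultant in the loop.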