All of the author's publications are listed below.
Effective classification of large datasets is a ubiquitous challenge across multiple knowledge domains. One increasingly popular solution is to perform distributed data analysis via online citizen science platforms, such as the Zooniverse. The resulting growth in project numbers is increasing the need to better understand the volunteer experience, as the sustainability of citizen science depends on our ability to design for engagement and usability. Here, we examine volunteer interaction with 63 projects, representing the most comprehensive collection of online citizen science project data gathered to date. This analysis demonstrates how subtle changes in project design can influence many facets of volunteer interaction, including when and how much volunteers interact and, importantly, who participates. Our findings highlight the tension between designing for social good and broad community engagement, versus optimizing for scientific and analytical efficiency.
We investigate the development of scientific content knowledge among volunteers participating in online citizen science projects in the Zooniverse (http://www.zooniverse.org). We use econometric methods to test how measures of project participation relate to success in a science quiz, controlling for factors known to correlate with scientific knowledge. Citizen scientists believe they are learning about both the content and processes of science through their participation. We do not directly test the latter, but we find evidence to support the former: more actively engaged participants perform better in a project-specific science knowledge quiz, even after controlling for their general science knowledge. We interpret this as evidence of learning of science content inspired by participation in online citizen science.