1 Introduction

Citizen science refers to scientific projects that receive voluntary contributions from members of the general population. Depending on the project, contributions range from collected data (e.g., bird observations in the eBird project) to analyses or annotations of already-collected data (e.g., in Zooniverse projects such as the Gravity Spy project examined in this paper). In any case, the success of citizen-science projects is heavily dependent on attracting participation from citizen scientists. Many studies have conducted surveys and interviews with citizen scientists to identify the motivations for their participation [e.g., Curtis, 2015; Land-Zandstra et al., 2016a; Nov, Arazy and Anderson, 2011; Raddick et al., 2010; Wright et al., 2015]. Such research suggests factors that are important in motivating citizen scientists to contribute. However, few studies, if any, have examined the relative efficacy of recruitment messages appealing to different motives to participate in citizen science.

To attract citizen scientists to a project, researchers first need to let them know about it, a step Scheliga et al. [2016] called “crowd building”. In many cases, researchers send an invitation email to potential citizen scientists or present a message on a citizen-science project platform. Although recruiting messages are the first point of contact with new participants, researchers do not yet seem to know which messages are more effective in attracting participants. Studies by Robson et al. [2013] and Crall et al. [2017] have examined the efficacy of different media for recruiting participants, e.g., by comparing traditional media and social networking, but not the efficacy of different messages.

Although studies have not examined the efficacy of messages, there is an extensive literature examining motivations for participation in citizen-science projects [e.g., Curtis, 2015; Kaufman, Flanagan and Punjasthitkul, 2016; Raddick et al., 2010; Reed et al., 2013; Rotman et al., 2012]. Building on this literature, this study examines which motivations are most effective to appeal to in a message recruiting new citizen scientists to a citizen-science project. To that end, we first review the literature on citizen scientists’ motivations and identify four motivations that might be appealed to in a recruiting message to potential citizen scientists. We then create messages appealing to each of these motivations and test how each message is associated with participation at different stages of a citizen-science project.

2 Theory: motivations of citizen-science volunteers

Recent research [e.g., Curtis, 2015; Kaufman, Flanagan and Punjasthitkul, 2016; Raddick et al., 2010; Reed et al., 2013; Rotman et al., 2012] has identified a variety of motivations for participation in citizen science. For example, Raddick et al. [2010] identified 12 motivations for participating in the Galaxy Zoo project, including contributing to a science project, learning about astronomy, discovery of galaxies few people have seen, community (i.e., meeting people with similar interests), teaching (i.e., being a useful resource for teaching others about astronomy), the beauty of galaxies, finding the work fun, vastness of space (i.e., enjoying considering the scale of the universe), helping (i.e., being happy to help), the Zoo (i.e., interest in the Galaxy Zoo project itself), and astronomy and science (i.e., a general interest in the field). Curtis [2015] identified a similar set of motivations for the Foldit project, including contribution to science, background interest in science, intellectual challenge, curiosity, liking puzzles, liking computer games, learning something new, friendly competition, visual appeal/aesthetics, and relaxation. Reed et al. [2013] identified three broader motives: social engagement, interaction with the website and helping.

From these motivations, we chose four for this study based on several criteria. First, we removed motivations that only a few participants in prior studies identified as their major motivations. For example, in Curtis [2015], only one or two of the participants identified friendly competition, visual appeal/aesthetics or relaxing as their motivations. Second, we removed motivations that seemed too specific to a particular project to apply to citizen-science projects more generally. For example, beauty, vastness, the Zoo and astronomy in Raddick et al. [2010] apply only to astronomy projects in the Zooniverse; interaction with the website in Reed et al. [2013] applies only to virtual citizen-science projects; and liking puzzles and liking computer games in Curtis [2015] apply only to game-based citizen-science projects like Foldit. We then combined similar motivations. For example, the teaching motivation in Raddick et al. [2010] is similar to learning about science in that both value the resources that citizen-science projects provide for learning about science. Finally, we chose motivations that could easily be appealed to in an email message. For example, fun in Raddick et al. [2010] and curiosity in Curtis [2015] are hard to appeal to in a message encouraging people to contribute, because individuals differ in their innate curiosity about science projects and in how much fun they find citizen-science tasks. Based on these criteria, we identified four motivations to test in this experiment: learning about science, joining a community, contributing to science, and helping scientists.

2.1 Learning about science

Citizen-science projects span various topics, from history to astronomy. To help citizen scientists understand a project, researchers typically provide detailed information about the topic of the project. Many projects are explicitly designed to have citizen scientists experience the scientific process; indeed, Bonney et al. [2009] concluded that most citizen-science projects are designed to help citizen scientists learn scientific knowledge to some degree. Consistent with these efforts, volunteers in citizen-science projects report that they do learn about science by participating [e.g., Brossard, Lewenstein and Bonney, 2005; Land-Zandstra et al., 2016a; Masters et al., 2016].

Similarly, Kraut and Resnick [2011] argued that since citizen scientists are not provided with monetary rewards, gaining knowledge about science can serve as a reward that encourages participation, making the opportunity to learn science an important feature of a project. Rotman et al. [2012] report that volunteers described in interviews being motivated by “the opportunity to learn more and widen their scientific horizons”. Cox et al. [2017, in press] found that understanding motives, which include learning, were associated with more contributions. Domroese and Johnson [2017] likewise found learning about bees to be the most cited reason for participating in the Great Pollinator Project.

2.2 Contributing to science

Citizen-science projects are designed to contribute to the scientific process. For example, citizen scientists in Zooniverse projects often classify scientific data. Recent surveys and interviews show that citizen-science volunteers are motivated to participate in projects by the opportunity to contribute to science [e.g., Brossard, Lewenstein and Bonney, 2005; Land-Zandstra et al., 2016a; Land-Zandstra et al., 2016b; Reed et al., 2013]. This opportunity has emerged as a major motivation in several studies. For example, Zooniverse volunteers reported being more motivated by contributing to science than by the possibility of learning about science or helping scientists [Brossard, Lewenstein and Bonney, 2005; Reed et al., 2013]. Contributing to science was listed as a primary motivation for participation in CosmoQuest [Gugliucci, Gay and Bracey, 2014], Foldit [Curtis, 2015] and the Dutch Great Influenza Survey [Land-Zandstra et al., 2016b], and as the second most cited reason for the Great Pollinator Project [Domroese and Johnson, 2017]. Notably, Land-Zandstra et al. [2016b] found that citizen scientists who had participated in the project for a longer time were more motivated by contributing to science.

2.3 Joining a community

As social creatures, humans seek the company of others. Accordingly, researchers suggest that citizen scientists are sometimes motivated to engage in projects to join a community: when people notice that many other people engage in some activity, they perceive it as a social norm to follow [Kraut and Resnick, 2011] and are therefore motivated to engage in the same activity, a phenomenon called social proof [Cialdini, 2001] or social norm [Kaufman, Flanagan and Punjasthitkul, 2016]. For these reasons, volunteers may be motivated to join a project that they know others are part of.

Evidence for the effect of community or social proof on citizen scientists’ participation is inconsistent. Holohan and Garg [2005] studied distributed computing projects, which require only a minimal commitment. They found that while only a small fraction of contributors were members of teams, the team members were among the largest contributors, which they took as evidence for the power of community. In an interview study of participants in Foldit, citizen scientists’ desire to be part of the community emerged as a motivation to participate [Curtis, 2015]. However, other evidence points the other way. Rotman et al. [2012] report that “community involvement was not mentioned as a primary motivation for participation in scientific projects”; Cox et al. [2017, in press] actually found a negative relation between social motives and volume of contribution. In a recent experimental study, Kaufman, Flanagan and Punjasthitkul [2016] showed that a message appealing to social proof was less effective than one appealing to helping scientists in encouraging people to participate in a project; they hypothesized that when people see that many others are already participating in a project, they reduce their own effort, a phenomenon known as social loafing. A study is therefore needed to untangle these inconsistent results.

2.4 Helping scientists

Finally, helping scientists (phrased as help or altruism in the literature) has also been suggested as a motivation for citizen scientists. Citizen-science projects are designed by professional scientists to advance their research and rely on contributions from citizen scientists to achieve the scientists’ goals for the project. Thus, contributing to a project is a way to help professional scientists achieve their goals. Crowston and Fagnot [2008] adopted a model of helping behaviors to explain the motivations underlying massive virtual collaboration. A message appealing to the motivation to help scientists was found effective in leading people to contribute to a crowdsourcing game [Kaufman, Flanagan and Punjasthitkul, 2016], and in an open source project, helping was found to be participants’ most prominent motivation [Oreg and Nov, 2008].

“Helping scientists” is similar to the “contributing to science” motivation in that both ask volunteers to participate in the project not for themselves, but for something else. Indeed, Curtis [2015] categorized both “contributing to scientific research” and “helping scientists” as “altruism”, in that both motivations are about helping. However, they differ in their object: although citizen scientists who help scientists do eventually contribute to science, the primary focus of helping scientists is on people (the scientists), whereas the motivation of contributing to science is about the science itself.

2.5 Motivations at different stages of contribution

An important consideration in studying how different motivations appeal to volunteers is that which motivations are effective may change as participants get to know the project. Crowston and Fagnot [2008] argued specifically that the motivations for initial contribution to a collective project are different from the motives for sustained participation, a finding echoed by Rotman et al. [2012]. Accordingly, we consider that motives might differ across the decision to participate, initial participation and sustained participation. However, different studies make different suggestions about which motives are salient at which stages. For example, Cox et al. [2017, in press] found that an “understanding motivation [i.e., learning] associates even more strongly and positively with volunteering at higher percentiles of activity” (that is, for volunteers who have contributed more), while West and Pateman [2016] report that “social factors were significant in retaining volunteers in the long-term” and, further, that initial motives matter, as “people with certain motivations [are] more likely to continue volunteering than others”.

2.6 Present study

In summary, prior research has identified a range of motives for contributing to citizen-science projects. However, it is difficult to draw a clear picture of the relative effectiveness of these motives when appealed to in a message recruiting participants. Further, the evolution of a volunteer’s participation and motives means that results may depend on when they are measured. We therefore test the relative efficacy of messages appealing to each motivation to answer the following research question: among messages appealing to the four motivations identified in the literature as important to citizen scientists, which is the most effective? Specifically, 1) which message attracts the highest number of volunteers, and 2) which attracts the highest number of contributions from volunteers?

3 Methods

3.1 Setting: the Zooniverse citizen science platform

Our empirical study is set in the context of an online citizen-science project. While there are several models of citizen science, the project we investigate here involves volunteers in large-scale scientific data analysis. Such citizen-science projects rely on an online, worldwide collaboration platform to support the involvement of scientists and the public: scientists share their research projects with members of the public who are interested in the science.

More specifically, we draw on data from the Zooniverse, the largest platform for citizen-science projects, hosting more than 70 individual projects at the time of writing, in astronomy, history, oceanography and many other fields. In Zooniverse projects, scientists upload data objects to the platform and pose a series of questions to collect information about the objects or to filter useful data objects from those that are not useful for the scientists.

The project we studied is Gravity Spy [Zevin et al., 2017], launched in October 2016 by the Zooniverse, the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration (LSC) and citizen-science researchers. The goal of Gravity Spy is to improve the scientific instruments used to search for gravitational waves. A challenge for LIGO scientists is that the detectors need extremely high sensitivity to detect gravitational waves, but as a result they also record a large quantity of noise events (referred to as glitches). Glitches can obscure or even masquerade as gravitational-wave signals, reducing the efficacy of the search. Currently there are more than 20 known classes of glitch with different causes, with the possibility of more classes being identified as the detector is worked on. Gravity Spy recruits volunteers to classify glitches into the known classes or to identify novel ones. Having a collection of glitches of the same class helps to focus the LIGO scientists’ search for their source.

3.2 Study design and procedure

Zooniverse project staff routinely email members of a mailing list to announce new projects and to solicit contributions. For this experiment, we created four versions of an email message recruiting new volunteers for the Gravity Spy project. These messages were the first public announcement of the project to the list; the project had earlier been in beta test with a more select group of participants. The project was simultaneously announced via other channels, attracting new volunteers who did not receive one of the experimental messages. All four messages provided the same short description of the new project but differed in the first and last sentences, which were tailored to emphasize one of the motives discussed above. The first sentence of each message was as follows (not including the phrase in bold italics):

  1. Learning about Science: Extend your knowledge in astrophysics by participating in Gravity Spy!
  2. Joining a Community: Join your fellow citizen scientists in classifying problematic noise in the search for gravitational waves!
  3. Contributing to Science: You can contribute to science by classifying problematic noise in the search for gravitational waves!
  4. Helping Scientists: Astrophysicists need your help to classify problematic noise in the search for gravitational waves!

The full text of each message is included in the appendix. As with other Zooniverse announcement emails, each message included a unique link to the Gravity Spy home page for each individual recipient, which allowed the Zooniverse staff to track whether a message recipient visited the website by clicking on the link provided.

For the experiment, mailing list members were randomly assigned to one of four cohorts (one per message). The cohorts had between 9,123 and 9,131 members each, as shown in Table 1 below, for a total of 36,513 recipients; the cohort sizes differed slightly due to changes in the mailing list during the experiment. The assigned recruiting emails were sent to users on the Zooniverse email list on 12 October 2016. The process of sending emails takes several hours, so different users received the email at different times during the day and, of course, we cannot be certain when the message was read.

3.3 Data

Three weeks after the messages were delivered, we collected the number of clicks on the links in the emails to the project site. On 31 January 2017, we collected the classifications done on the Gravity Spy system by all volunteers who had joined the project after the messages were sent. Data for the users were divided into five groups: one for each of the four cohorts who had been sent a recruiting message and a fifth group for new volunteers who had joined during that time but who had not been sent a message (i.e., those not on the mailing list).

3.4 Ethics review

The plan for our experiments was reviewed by the Syracuse University IRB. The initial volunteer agreement that volunteers accept when they sign up for the Zooniverse discloses that site administrators run experiments to improve the system and the volunteer experience. Zooniverse members opt in to being on the mailing list. The email recruitment process was the same as for other Zooniverse projects, aside from the minor changes in wording, and posed minimal or no risk to the participants. The study does not use any information about the volunteers aside from their behaviours on the site: the site does not collect demographic information of any kind, and volunteers are identified only by a self-selected volunteer ID. Collecting informed consent for the experiment would have been practically infeasible, given the nature of the study, which is based on emailing members of the mailing list. We were therefore permitted to run the experiment without collecting specific informed consent for participation.

4 Results

Table 1 shows data about the response to the emailed recruiting messages for the four cohorts. In addition, 2,808 volunteers who did not receive a recruiting message joined and contributed during the experimental period.

4.1 Question 1: which message attracted the highest number of volunteers?

We answer question 1 in three ways, corresponding to the three stages in a new volunteer’s movement into participating in the project: decision to participate, initial participation, and sustained participation.

4.1.1 Decision to participate

First, as noted above, each message sent included a unique link to the project that enabled the Zooniverse team to track responses. We counted how many of those links had been clicked (shown as “Click throughs” in Table 1), indicating that the volunteer decided to visit the project because of the message. We cut off data collection three weeks after the message was sent, as the growth in the number of clicks had ended at that point.

To determine which messages attracted more volunteers to visit the site, we performed a difference-of-proportions test comparing the click-through percentage for each pair of messages. The z-scores and p values for each comparison are shown in Table 2. Because we ran multiple tests, we adjusted the significance level of each test using Šidák’s correction (a slightly less conservative variant of the Bonferroni correction): to maintain an overall alpha of 0.05 across the collection of 6 tests, each individual test should use an alpha of 0.0085. With this correction, the difference-of-proportions tests show that the Learning about Science and Contributing to Science messages attracted significantly more click-throughs than Helping Scientists, while the other differences are not significant. The final column shows the 99.15% confidence interval for each difference (i.e., with the same correction for multiple tests). The ranges of the intervals are smaller than 2%, suggesting that the non-significant results reflect genuinely small differences rather than a lack of power in our tests.
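To make the procedure concrete, here is a minimal Python sketch of one pairwise comparison, using hypothetical click counts rather than the actual Table 1 values. It computes the two-sided z-test on the pooled proportion, the Šidák-adjusted per-test alpha of 1 − 0.95^(1/6) ≈ 0.0085, and the corresponding 99.15% confidence interval for the difference.

```python
# Minimal sketch of one pairwise difference-of-proportions test with the
# Šidák correction described above; the counts are hypothetical, not the
# actual Table 1 data.
from math import sqrt
from scipy.stats import norm

def two_proportion_test(clicks_a, n_a, clicks_b, n_b,
                        overall_alpha=0.05, n_tests=6):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled proportion under the null hypothesis of equal click-through rates.
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    z = (p_a - p_b) / sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    p_value = 2 * norm.sf(abs(z))
    # Šidák-adjusted per-test alpha: 1 - (1 - 0.05)**(1/6) ≈ 0.0085.
    alpha = 1 - (1 - overall_alpha) ** (1 / n_tests)
    # Confidence interval for the difference uses the unpooled standard error.
    se_diff = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    margin = norm.ppf(1 - alpha / 2) * se_diff
    return z, p_value, p_value < alpha, (p_a - p_b - margin, p_a - p_b + margin)

# Hypothetical counts for two cohorts of roughly 9,000 recipients each.
z, p, significant, ci = two_proportion_test(550, 9127, 480, 9125)
print(f"z = {z:.2f}, p = {p:.4f}, significant = {significant}, CI = {ci}")
```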


Table 1: Response statistics for the 4 cohorts who received messages. Contributors are those who made a classification on the site. Contributors’ percentage is the count of contributors divided by “click throughs”.

TAB



Table 2: Results of tests comparing the proportion of message recipients who clicked on the link to the project between each pair of message conditions. * Difference is significant at p<0.05 after correction for multiple comparisons.

TAB


4.1.2 Initial participation

The above analysis examined how many users visited the site after receiving a message. However, only a fraction of those who visited the site went on to actually contribute to the project by making classifications. The number of message recipients in each cohort who made a classification is shown in the “Contributors Count” column of Table 1. We ran the same proportion test comparing cohorts on the fraction of visitors who became contributors (the “Contributors Percent” column, computed as the number who contributed divided by the number who clicked through from Table 1), with the same correction for multiple tests. The results are shown in Table 3.


Table 3: Results of tests comparing the proportion of visitors who made a contribution between each pair of message conditions. * Difference is significant at p<0.05 after correction for multiple comparisons.

TAB


The results show that there is a statistically significant difference in the fraction of visitors to the site who go on to contribute to the project. The percentage for Helping Scientists is higher than for all three other cohorts, but the other differences are not significant. Specifically, even though the message appealing to Helping Scientists had the lowest proportion of click-throughs, a significantly higher fraction of the volunteers who clicked on the link in that message went on to contribute to the project.

4.1.3 Sustained participation

Finally, we considered how many volunteers became sustained contributors. For this analysis, we aggregated each volunteer’s classifications into sessions, defined as sequential sets of classifications separated by gaps of no more than 30 minutes [Mao, Kamar and Horvitz, 2013]. The intuition is that volunteers tend to come to the system, do one or more classifications in a short period with short gaps between classifications, then take a break until later (e.g., the next day), leaving a longer gap between classifications that defines a session boundary. The summary statistics for the session analysis are shown in Table 4.
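As an illustration, the following Python sketch implements this sessionization rule on a hypothetical classification log; the volunteer_id and created_at column names are assumptions for illustration, not the actual Gravity Spy data schema.

```python
# Minimal sketch of the 30-minute sessionization rule described above.
from datetime import timedelta
import pandas as pd

SESSION_GAP = timedelta(minutes=30)

def count_sessions(timestamps):
    """Count sessions: a gap of more than 30 minutes starts a new session."""
    times = sorted(timestamps)
    sessions = 1 if times else 0
    for prev, curr in zip(times, times[1:]):
        if curr - prev > SESSION_GAP:
            sessions += 1
    return sessions

# Hypothetical classification log: one row per classification.
log = pd.DataFrame({
    "volunteer_id": ["a", "a", "a", "b"],
    "created_at": pd.to_datetime([
        "2016-10-12 10:00", "2016-10-12 10:05",  # 5-minute gap: same session
        "2016-10-13 09:00",                      # next day: new session
        "2016-10-12 11:00",
    ]),
})
print(log.groupby("volunteer_id")["created_at"].apply(count_sessions))  # a: 2, b: 1
```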

One indication of sustained contribution is a larger number of sessions. Given that most volunteers contribute to a project just once (note that the median number of sessions in all cohorts is 1, i.e., “one and done” [McInnis et al., 2016]), another indication of sustained contribution is whether the volunteer comes back for a second session. Table 4 therefore also shows the number and fraction of volunteers who contributed in more than one session (computed as the count of volunteers with more than one session divided by the number of contributors from Table 1).


Table 4: Contribution statistics for experimental groups: number of sessions for volunteers in the 4 cohorts who received messages and new volunteers during the experimental period who did not receive an email message (non-cohort). Percent with more than one session is the count of volunteers with more than one session divided by the number of contributors from Table 1. No differences are significant.

TAB



Table 5: Results of tests comparing the proportion of contributors who made contributions in more than one session between each pair of message conditions. No differences are significant.

TAB


As expected, the distribution of the number of sessions per volunteer is quite skewed (most people have only one session but a few have many), as indicated by the difference between the mean and median values and the high standard deviation. We therefore tested whether there was a difference between the cohorts in the number of sessions per volunteer with a non-parametric Kruskal-Wallis test. The test showed no statistically significant difference in the per-volunteer number of sessions across the cohorts (χ²(3, N=808) = 3.95, p = 0.2663).
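For reference, such a test could be run in Python with scipy as sketched below; the session-count arrays are hypothetical stand-ins for the four cohorts, not the study data.

```python
# Minimal sketch of the Kruskal-Wallis test across the four cohorts,
# using made-up per-volunteer session counts.
from scipy.stats import kruskal

sessions_learning   = [1, 1, 2, 1, 5]
sessions_community  = [1, 2, 1, 1, 1]
sessions_contribute = [1, 1, 3, 8, 1]
sessions_helping    = [2, 1, 1, 1, 4]

h_stat, p_value = kruskal(sessions_learning, sessions_community,
                          sessions_contribute, sessions_helping)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```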

We ran the same proportion test on the fraction of contributors who became sustained contributors (the percent with more than one session column of Table 4), with the same correction for multiple tests. The results are shown in Table 5. The proportion of users who sustained their contribution is highest for Contributing to Science and Helping Scientists, followed by Joining a Community, with Learning about Science at the bottom. However, none of the differences are statistically significant. Note, though, that the confidence intervals are broad (about 25%), more than twice the greatest difference. The wide confidence intervals suggest that the tests lack the statistical power to resolve the differences seen, and that with a larger sample the differences could be significant.

Cohort vs. non-cohort contributions. Finally, as noted above, we collected data on all new volunteers who joined the project after the message was sent. We considered the possibility that Zooniverse members who are on the mailing list might differ in their interest in contributing from those who are not. To test this possibility, we compared the count of sessions from volunteers who received the recruiting messages to the count for those who did not. The result of a Wilcoxon signed-rank test indicated that the contributions of volunteers who did not receive a recruiting message are significantly different from those of volunteers who did (W=1228500, p = 0.00003).

We were concerned that the non-cohort sample might differ from the cohort sample because of the timing of when volunteers joined. To address this concern, we first compared the distribution of the number of new volunteers by start date in each cohort and in the non-cohort group. We found the shapes of the curves to be roughly similar, with a peak of new members at the project announcement, dropping off steadily afterwards. Though activity did not drop off completely in either group (e.g., some volunteers who received an email in October made their first contribution at the end of January), proportionally more non-cohort volunteers joined long after the announcement than cohort volunteers. It could be that these late-joining non-cohort members simply had less time to contribute, not less interest in contributing.

To check whether this late activity was biasing the results, we computed a weighted average of the number of classifications and sessions per volunteer in the non-cohort group, giving more weight to earlier contributions and less to later ones, so that the distribution of volunteers over time matched that of the cohorts. To our surprise, this process actually made the differences between the cohort and non-cohort groups bigger: the earlier non-cohort contributors apparently contributed less than later ones, despite having had more time in which to contribute. In retrospect, it is not surprising that timing has little effect on the results, since the majority of volunteers contribute for only one day.
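A sketch of this reweighting check appears below; it weights each non-cohort volunteer so that the distribution of join weeks matches the cohort’s before averaging. The join_week and n_classifications column names, and the data, are illustrative assumptions, not the study’s actual code.

```python
# Minimal sketch of the reweighting check: weight non-cohort volunteers so
# their join-time distribution matches the cohort's, then take a weighted mean.
import numpy as np
import pandas as pd

def reweighted_mean(non_cohort, cohort, value_col):
    cohort_dist = cohort["join_week"].value_counts(normalize=True)
    noncohort_dist = non_cohort["join_week"].value_counts(normalize=True)
    # Each volunteer's weight is the cohort share of their join week divided
    # by the non-cohort share; weeks absent from the cohort get weight 0.
    weights = non_cohort["join_week"].map(cohort_dist / noncohort_dist).fillna(0)
    return np.average(non_cohort[value_col], weights=weights)

# Hypothetical data: cohort joins are concentrated in the first weeks.
cohort = pd.DataFrame({"join_week": [0, 0, 1]})
non_cohort = pd.DataFrame({"join_week": [0, 1, 1, 2],
                           "n_classifications": [4, 3, 6, 2]})
print(reweighted_mean(non_cohort, cohort, "n_classifications"))
```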

4.2 Question 2: which message attracted the highest number of contributions from volunteers?

We answered the second question in two ways, looking first at the average number of contributions from volunteers in each cohort and then considering the contributions from each cohort as a group.

4.2.1 Average number of contributions

The “Classifications done” columns of Table 6 give the total number of classifications done by members of each cohort, the average and median number of classifications per volunteer, and the standard deviation. As expected, the distribution of the number of contributions per volunteer is quite skewed (most people contribute only a few classifications and a few contribute a lot), as indicated by the difference between the mean and median values and the high standard deviation. Figure 1 shows the distribution of classifications done per volunteer in the four cohorts using violin plots. A violin plot is like a box plot but includes a kernel density plot of the data, showing the distribution in more detail. Note that the y-axis is log transformed to correct for the skew.


Table 6: Contribution statistics for experimental groups: 4 cohorts who received messages and new volunteers during experimental period who did not receive an email message (non-cohort).

TAB



PIC

Figure 1: Violin plot of contributions per user by cohort, on log axis.

As the count of contributions per volunteer is not normally distributed, we tested whether there was a difference between the cohorts with a non-parametric Kruskal-Wallis test. The test showed no statistically significant difference in the per-volunteer count of contributions across the cohorts (χ²(3, N=808) = 1.378, p = 0.71). In summary, although volunteers in the Contributing to Science cohort did more classifications than the others, the high variability in contributions among volunteers within a cohort means that no cohort is statistically significantly different from the rest in the number of contributions per volunteer. The confidence intervals for the pairwise tests have ranges of about 25 to 30, considerably more than the differences. The wide intervals suggest that the tests may lack the power to resolve the differences seen and that with a larger sample, the differences could be significant.

Cohort vs. non-cohort contributions. As above, we compared the count of classifications of volunteers who received the recruiting messages to the count for those who did not. The result of a Wilcoxon signed-rank test indicated that volunteers who did not receive a recruiting message made significantly fewer contributions to the project than volunteers who received and responded to one (W=11890, p = 0.0367).

4.2.2 Total contributions

Finally, we examined the total number of contributions provided by each cohort, which is the combined result of attracting more volunteers and attracting volunteers who contribute more (or motivating volunteers to contribute more). Table 6 shows that Contributing to Science led to the most total contributions, more than double the count for Joining a Community. However, this difference could be due to chance. Recall that the volunteers in Contributing to Science provided on average about 67% more classifications each than those in Joining a Community, but the high variability within cohorts meant that the difference was not statistically significant.

Testing whether the total contributions received from a cohort are more or less than could be expected by chance requires knowing the distribution of total contributions. However, we do not have a sample of cohorts from which to determine this distribution empirically (as we did for the average number of classifications per user). To address this lacuna, we generated a set of random cohorts from the data for the actual respondents. We created a random cohort by randomly assigning each of the volunteers to one of four cohorts. This process randomly varied the cohorts along the two dimensions discussed above: how many volunteers are in the cohort and how many contributions those volunteers make. To avoid creating correlations among the artificial cohorts, each time we generated four random cohorts we kept only one of them. Following this process, we created 1,000 random cohorts of varied sizes, with varying samples of volunteers and so varying numbers of total contributions. A histogram of the distribution of the total number of contributions in the resulting sample of random cohorts is shown in Figure 2.
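The following Python sketch illustrates this randomization procedure; the per-volunteer contribution counts are placeholders drawn from a skewed distribution, not the observed data, and only the observed total for Contributing to Science (63,151) is taken from the text.

```python
# Minimal sketch of the random-cohort procedure: repeatedly reassign the
# observed contributors to four cohorts at random, keeping the total of only
# one cohort per draw to avoid correlated samples.
import numpy as np

rng = np.random.default_rng(42)
# Placeholder per-volunteer contribution counts with a skewed distribution.
contributions = rng.geometric(p=0.02, size=808)

totals = []
for _ in range(1000):
    assignment = rng.integers(0, 4, size=len(contributions))  # random cohort per volunteer
    totals.append(contributions[assignment == 0].sum())       # keep one cohort only
totals = np.array(totals)

observed_total = 63_151  # the Contributing to Science cohort (from the text)
percentile = (totals < observed_total).mean() * 100
print(f"Observed total falls at the {percentile:.0f}th percentile of random cohorts")
```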

Once we had a set of random cohorts, we could test whether the observed counts of total contributions differ from what could be expected by chance by simply noting where in this distribution the actually-observed cohorts fall. This analysis shows that the total number of contributions received in Contributing to Science (63,151) is at the 98th percentile (i.e., greater than 98% of the randomly generated cohorts), while the total for Joining a Community (30,844) is at the 2nd percentile (i.e., smaller than 98% of the random cohorts). In other words, the totals received in these two cohorts are respectively more and less than one would expect from a chance arrangement of the volunteers into cohorts, at p<0.05, suggesting that Contributing to Science was particularly good at attracting contributions and Joining a Community particularly poor.


PIC

Figure 2: Distribution of total contributions for 1000 randomly generated cohorts drawing from mailing list respondents (all cohorts). Vertical lines indicate 5% and 10% upper and lower bounds.


Table 7: Summary of findings. + = significantly higher, – = significantly lower,
(+/–) = higher or lower, though not statistically significant.

TAB


4.3 Summary of findings

Table 7 provides a summary of the findings of this study. Our experiment shows that a message appealing to the motivation of Contributing to Science attracted more volunteers to the project than the other three and, even though the average number of contributions per user was not statistically significantly greater, the total volume of contributions received in response to this message was greater than can be explained by chance. In contrast, the Joining a Community message, while receiving a similar number of click-throughs to the other messages, had a lower level of overall contribution than expected by chance.

5 Discussion

While the experimental design provides good assurance about the results, it does not explicate their underlying mechanism: why was a message appealing to Contributing to Science the most effective, and one appealing to Joining a Community the least effective, in recruiting participants and getting citizen scientists to contribute to the project? Specifically, the design does not address whether the results are due to selection, meaning that a message attracts individuals with particular motivations who then do more, or to salience, meaning that a message activates a motivation that encourages more contribution by its recipients.

In this regard, the difference between the click-through and participation percentages for Joining a Community is illuminating. Recall that participants receiving this message clicked through to the site at about the same rate as others but contributed at a much lower rate. A possible explanation lies in the nature of interaction in the projects. Recall that joining a community was a significant motivator in Foldit [Curtis, 2015]. In that system, volunteers do interact with other citizen scientists; thus, they could perceive participating in the project as joining a sort of community. In contrast, there is not much visible community on Zooniverse sites. A new volunteer would need to explore the site to find group discussions and might need considerable expertise at the task to follow or contribute to a discussion. So it could be that this message attracted volunteers interested in joining a community who were disappointed by the apparent lack of community when they first visited the site, leading to lower contribution.

If the question is simply which message is more motivating, then a project manager should pick an appeal to contributing to science. However, it is also possible that individual differences, such as personality factors, moderate message effectiveness. Finding how to tailor messages that appeal to the “right motives” for each individual might improve overall response rates. For example, for those inclined to help others, a message appealing to Helping Scientists would be more effective than other messages; for those who like to connect with others, Joining a Community could be an important motivation to appeal to in a recruiting message. In this study, the response rates to the individual messages were all quite low. Since the Zooniverse mails project announcements to volunteers regularly, a possible strategy is to try different appeals until one attracts a particular volunteer, and then to use that appeal again in future messages to that volunteer.

A second finding is that the efficacy of different motives does seem to change over time. Specifically, the message about Learning about Science attracted click-throughs and initial participation but seemed less effective in attracting sustained contribution. It could be that volunteers who were motivated by the opportunity to learn about a new branch of science had that interest fulfilled by their interaction with the project tutorials and science materials, and so did not feel a need to continue working on the glitch-classification task, which is only tangentially related to the science of gravitational waves.

A further finding of the study is that the volunteers who responded to the recruiting message contributed significantly more than volunteers who joined at about the same time without having received a message. Again, the implication of this finding depends on whether the message is motivating or selecting volunteers. From the former perspective, the messages are doing what they should in encouraging participation. But from the latter perspective, preferred above, it should not be surprising that volunteers who signed up for the mailing list are more motivated than those who did not, the content of the message notwithstanding. In either case, this finding emphasizes the importance of reaching out to prospective volunteers in multiple ways and of considering channels for reaching and motivating different groups of volunteers.

Finally, the data are consistent with prior theorizing that motivations for initial and sustained participation are different. While the message appealing to Helping Scientists was the least effective in attracting visitors to the site, those visitors went on to contribute more to the project. This finding suggests that messages at different points in a volunteer’s engagement with the project might appeal to different motives: one set of motives to get a prospective volunteer to visit the site (e.g., learning about science), another to convince them to try it (e.g., helping scientists), and a third to promote sustained contribution (e.g., contributing to science).

5.1 Study limitations

The design of the study reported in this paper is a true experiment, which addresses many threats to internal validity. However, there are some threats to construct validity. First, message recipients do not have to click on the link provided in the email message to access the system, so the click-through rate might underestimate true interest. Conversely, a volunteer might forward the message to a friend who clicks the custom link, thus inflating the click-through rate. However, we also have counts of actual participation that are not affected by this problem. A second threat is to statistical conclusion validity: it appears that some of the statistical tests were underpowered, so some negative results could be different with a larger sample.

While experiments provide good internal validity, this validity comes at the cost of possible threats to external validity. First, we tested only four specific versions of the messages. It could be that slight tweaks to the messages would change their performance, and we know little about the performance of appeals to other motives (only that prior research suggests that the ones we tested are the most important). It might even be possible to craft messages that combine aspects of different motivations, thus appealing to multiple segments of the population at once.

Second, we ran the experiment in only a single project, so it is not clear whether the findings of this study apply to other citizen-science projects. Prior research on motivation has noted the importance of interest in the science, so projects in different sciences presumably attract different participants. It would be interesting to know whether the same results hold in citizen-science projects covering other fields, e.g., history or biology. Finally, given the particular nature of motivation for citizen-science projects, we would not expect the findings to hold in other online communities, though some of the broader implications might (e.g., the evolution of motivations with participation).

6 Conclusion

The experiment reported here has both theoretical and practical implications. Theoretically, it extends prior research on reported motivations by showing how those motivations work as part of a message appealing to prospective volunteers for a citizen-science project. Specifically, our results provide further evidence of the importance of making a real contribution to science as a motivation for citizen-science participants.

Practically, the work provides guidance to those who run citizen-science projects. We examined three different outcomes and show that, depending on the goal of recruitment, different messages may be more or less effective. In particular, if the goal is increasing the number of participants who are aware of the project, then appealing to the chance to contribute to or to learn about science seems to attract more visits than an appeal to helping scientists, though the latter is more successful in converting visitors into contributors. And overall, an appeal to the chance to contribute to science seems to result in the largest number of contributions to the project. In summary, our results show that, at least for the Zooniverse, citizen-science projects are first and foremost science, and that is reflected in the effectiveness of messages appealing to different motivations.

A Full text of recruiting email messages

A.1 Condition 1. Learning about science

Subject: Gravity Spy: Extend your knowledge of astrophysics!

Hi there,

I’m thrilled to tell you about a brand new Zooniverse project — Gravity Spy

On September 14th 2015, a century after Einstein predicted the existence of ripples in spacetime known as gravitational waves, the Laser Interferometer Gravitational Wave Observatory (LIGO) made the first direct detection of this elusive phenomenon.

Being the most sensitive and most complicated gravitational experiment ever created, LIGO is susceptible to a variety of non-cosmic artifacts known as glitches. By selecting the right classification for a given glitch, you can teach computers to do this classification themselves on much larger datasets.

In this project, you can learn how to identify all of the glitch morphologies and open up an even bigger window into the gravitational wave universe.

Get involved now at www.gravityspy.org .

A.2 Condition 2. Joining a community

Subject: Gravity Spy: Join your fellow citizen scientists!

Hi there,

I’m thrilled to tell you about a brand new Zooniverse project — Gravity Spy

Join your fellow citizen scientists in classifying problematic noise in the search for gravitational waves!

On September 14th 2015, a century after Einstein predicted the existence of ripples in spacetime known as gravitational waves, the Laser Interferometer Gravitational Wave Observatory (LIGO) made the first direct detection of this elusive phenomenon.

Being the most sensitive and most complicated gravitational experiment ever created, LIGO is susceptible to a variety of non-cosmic artifacts known as glitches. By selecting the right classification for a given glitch, you can teach computers to do this classification themselves on much larger datasets.

Many citizen scientists are already participating in the project, identifying all of the glitch morphologies and opening up an even bigger window into the gravitational wave universe.

Get involved now at www.gravityspy.org .

A.3 Condition 3. Contributing to science

Subject: Gravity Spy: Contribute to Science!

Hi there,

I’m thrilled to tell you about a brand new Zooniverse project — Gravity Spy

You can contribute to science by classifying problematic noise in the search for gravitational waves!

On September 14th 2015, a century after Einstein predicted the existence of ripples in spacetime known as gravitational waves, the Laser Interferometer Gravitational Wave Observatory (LIGO) made the first direct detection of this elusive phenomenon.

Being the most sensitive and most complicated gravitational experiment ever created, LIGO is susceptible to a variety of non-cosmic artifacts known as glitches.

By selecting the right classification for a given glitch, you can teach computers to do this classification themselves on much larger datasets.

Through the Gravity Spy project, you can contribute to science, identify all of the glitch morphologies, and open up an even bigger window into the gravitational wave universe.

Get involved now at www.gravityspy.org .

A.4 Condition 4. Helping scientists

Subject: Gravity Spy: Please help scientists!

Hi there,

I’m thrilled to tell you about a brand new Zooniverse project — Gravity Spy

Astrophysicists need your help to classify problematic noise in the search for gravitational waves!

On September 14th 2015, a century after Einstein predicted the existence of ripples in spacetime known as gravitational waves, the Laser Interferometer Gravitational Wave Observatory (LIGO) made the first direct detection of this elusive phenomenon. Being the most sensitive and most complicated gravitational experiment ever created, LIGO is susceptible to a variety of non-cosmic artifacts known as glitches.

By selecting the right classification for a given glitch, you can teach computers to do this classification themselves on much larger datasets. Through the Gravity Spy project, you can help scientists identify all of the glitch morphologies and open up an even bigger window into the gravitational wave universe!

Get involved now at www.gravityspy.org .

Acknowledgments

This paper was partly supported by the US National Science Foundation, award INSPIRE 15-47880.

References

Bonney, R., Ballard, H., Jordan, R., McCallie, E., Phillips, T., Shirk, J. and Wilderman, C. C. (2009). Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education . A CAISE Inquiry Group Report. Washington, D.C., U.S.A.: Center for Advancement of Informal Science Education (CAISE). URL: http://www.informalscience.org/public-participation-scientific-research-defining-field-and-assessing-its-potential-informal-science .

Brossard, D., Lewenstein, B. and Bonney, R. (2005). ‘Scientific knowledge and attitude change: The impact of a citizen science project’. International Journal of Science Education 27 (9), pp. 1099–1121. https://doi.org/10.1080/09500690500069483 .

Cialdini, R. B. (2001). ‘Harnessing the science of persuasion’. Harvard Business Review 79 (9), pp. 72–81.

Cox, J., Oh, E. Y., Simmons, B., Graham, G., Greenhill, A., Lintott, C., Masters, K. and Woodcock, J. (2017, in press). ‘Doing good online: The changing relationships between motivations, activity and retention among online volunteers’. Nonprofit and Voluntary Sector Quarterly . URL: http://eprints.whiterose.ac.uk/115831/ .

Crall, A., Kosmala, M., Cheng, R., Brier, J., Cavalier, D., Henderson, S. and Richardson, A. (2017). ‘Volunteer recruitment and retention in online citizen science projects using marketing strategies: lessons from Season Spotter’. JCOM 16 (01), A1. URL: https://jcom.sissa.it/archive/16/01/JCOM_1601_2017_A01 .

Crowston, K. and Fagnot, I. (2008). ‘The motivational arc of massive virtual collaboration’. In: Proceedings of the IFIP WG 9.5 Working Conference on Virtuality and Society: Massive Virtual Communities . (Lüneberg, Germany, 1st–2nd July 2008).

Curtis, V. (2015). ‘Motivation to Participate in an Online Citizen Science Game: a Study of Foldit’. Science Communication 37 (6), pp. 723–746. https://doi.org/10.1177/1075547015609322 .

Domroese, M. C. and Johnson, E. A. (2017). ‘Why watch bees? Motivations of citizen science volunteers in the Great Pollinator Project’. Biological Conservation 208, pp. 40–47. https://doi.org/10.1016/j.biocon.2016.08.020 .

Gugliucci, N., Gay, P. and Bracey, G. (2014). ‘Citizen science motivations as discovered with CosmoQuest’. In: Proceedings of the Ensuring Stem Literacy: A National Conference on STEM Education and Public Outreach, pp. 437–440.

Holohan, A. and Garg, A. (2005). ‘Collaboration Online: The Example of Distributed Computing’. Journal of Computer-Mediated Communication 10 (4). https://doi.org/10.1111/j.1083-6101.2005.tb00279.x .

Kaufman, G., Flanagan, M. and Punjasthitkul, S. (2016). ‘Investigating the Impact of ‘Emphasis Frames’ and Social Loafing on Player Motivation and Performance in a Crowdsourcing Game’. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16) . (San Jose, CA, U.S.A. 7th–12th May 2016), pp. 4122–4128. https://doi.org/10.1145/2858036.2858588 .

Kraut, R. E. and Resnick, P., eds. (2011). Building successful online communities: Evidence-based social design. Cambridge, MA, U.S.A.: MIT Press.

Land-Zandstra, A. M., Devilee, J. L. A., Snik, F., Buurmeijer, F. and van den Broek, J. M. (2016a). ‘Citizen science on a smartphone: Participants’ motivations and learning’. Public Understanding of Science 25 (1), pp. 45–60. https://doi.org/10.1177/0963662515602406 .

Land-Zandstra, A. M., van Beusekom, M., Koppeschaar, C. and van den Broek, J. (2016b). ‘Motivation and learning impact of Dutch flu-trackers’. JCOM 15 (01), A04. URL: https://jcom.sissa.it/archive/15/01/JCOM_1501_2016_A04 .

Mao, A., Kamar, E. and Horvitz, E. (2013). ‘Why stop now? Predicting worker engagement in online crowdsourcing’. In: Proceedings of the Conference on Human Computation and Crowdsourcing . (Palm Springs, CA, U.S.A.).

Masters, K., Oh, E. Y., Cox, J., Simmons, B., Lintott, C., Graham, G., Greenhill, A. and Holmes, K. (2016). ‘Science learning via participation in online citizen science’. JCOM 15 (3), A07. URL: https://jcom.sissa.it/archive/15/03/JCOM_1503_2016_A07 .

McInnis, B. J., Murnane, E. L., Epstein, D., Cosley, D. and Leshed, G. (2016). ‘One and done: Factors affecting one-time contributors to ad-hoc online communities’. In: Proceedings of the ACM Conference on Computer-Supported Cooperative Work & Social Computing , pp. 609–623.

Nov, O., Arazy, O. and Anderson, D. (2011). ‘Dusting for Science: Motivation and Participation of Digital Citizen Science Volunteers’. In: Proceedings of the 2011 iConference . (Seattle, Washington, U.S.A.). ACM Press, pp. 68–74. https://doi.org/10.1145/1940761.1940771 .

Oreg, S. and Nov, O. (2008). ‘Exploring motivations for contributing to open source initiatives: The roles of contribution context and personal values’. Computers in Human Behavior 24 (5), pp. 2055–2073. https://doi.org/10.1016/j.chb.2007.09.007 .

Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C. J., Murray, P., Schawinski, K., Szalay, A. S. and Vandenberg, J. (2010). ‘Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers’. Astronomy Education Review 9 (1), 010103, pp. 1–18. https://doi.org/10.3847/AER2009036 . arXiv: 0909.2925 .

Reed, J., Raddick, M. J., Lardner, A. and Carney, K. (2013). ‘An Exploratory Factor Analysis of Motivations for Participating in Zooniverse, a Collection of Virtual Citizen Science Projects’. In: Proceedings of the 46th Hawaii International Conference on System Sciences (HICSS 2013) . (7th–10th January 2013). IEEE, pp. 610–619. https://doi.org/10.1109/HICSS.2013.85 .

Robson, C., Hearst, M., Kau, C. and Pierce, J. (2013). ‘Comparing the use of social networking and traditional media channels for promoting citizen science’. In: Proceedings of the 2013 conference on Computer supported cooperative work (CSCW ’13) . (San Antonio, TX, U.S.A.). https://doi.org/10.1145/2441776.2441941 .

Rotman, D., Preece, J., Hammock, J., Procita, K., Hansen, D., Parr, C., Lewis, D. and Jacobs, D. (2012). ‘Dynamic changes in motivation in collaborative citizen-science projects’. In: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW 2012) . (Seattle, WA, U.S.A. 11th–15th February 2012). ACM Press, pp. 217–226. https://doi.org/10.1145/2145204.2145238 .

Scheliga, K., Friesike, S., Puschmann, C. and Fecher, B. (2016). ‘Setting up crowd science projects’. Public Understanding of Science , pp. 1–20. https://doi.org/10.1177/0963662516678514 .

West, S. and Pateman, R. (2016). ‘Recruiting and Retaining Participants in Citizen Science: What Can Be Learned from the Volunteering Literature?’ Citizen Science: Theory and Practice 1 (2), p. 15. https://doi.org/10.5334/cstp.8 .

Wright, D. R., Underhill, L. G., Keene, M. and Knight, A. T. (2015). ‘Understanding the Motivations and Satisfactions of Volunteers to Improve the Effectiveness of Citizen Science Programs’. Society & Natural Resources 28 (9), pp. 1013–1029. https://doi.org/10.1080/08941920.2015.1054976 .

Zevin, M., Coughlin, S., Bahaadini, S., Besler, E., Rohani, N., Allen, S., Cabero, M., Crowston, K., Katsaggelos, A. K., Larson, S. L., Lee, T. K., Lintott, C., Littenberg, T. B., Lundgren, A., Østerlund, C., Smith, J. R., Trouille, L. and Kalogera, V. (2017). ‘Gravity Spy: integrating advanced LIGO detector characterization, machine learning, and citizen science’. Classical and Quantum Gravity 34 (6), p. 064003. https://doi.org/10.1088/1361-6382/aa5cea .

Authors

Kevin Crowston is a Distinguished Professor of Information Science in the School of Information Studies at Syracuse University. He received his Ph.D. (1991) in Information Technologies from the Sloan School of Management, Massachusetts Institute of Technology (MIT). His research examines new ways of organizing made possible by the extensive use of information and communications technology. Specific research topics include the development practices of Free/Libre Open Source Software teams and work practices and technology support for citizen science research projects. E-mail: crowston@syr.edu .

Carsten Østerlund is an associate professor at the School of Information Studies, Syracuse University. He received his Ph.D. from MIT’s Sloan School of Management (2003) and a M.A. in social psychology and social anthropology from University of Aarhus and University of Copenhagen, Denmark. During his MA studies he spent two years as a Fulbright scholar at UC Berkeley, Department of Social and Cultural Studies. His interests include distributed & virtual work, organizational learning and knowledge, communication practices, and medical informatics. E-mail: costerlu@syr.edu .

Tae Kyoung Lee, Ph.D., is an Assistant Professor in the Department of Communication at the University of Utah. Her Ph.D. is from Cornell University. Her research focuses on message processing and effects. E-mail: tae.lee@utah.edu .

Mahboobeh (Mabi) Harandi is a Ph.D. student in the School of Information Studies (iSchool) at Syracuse University. She received her M.Sc. in information systems from the Norwegian University of Science and Technology, where she was involved in the SmartMedia program and studied machine learning techniques and sentiment analysis of news articles. Her current research studies user behavior and experience in online communities, applying natural language processing and statistical tests to analyze user behavior. E-mail: mharandi@syr.edu .

Grant Miller was awarded a Ph.D. in astrophysics in 2013, working on the detection and characterisation of transiting exoplanets at the University of St Andrews. He then joined the Zooniverse, the world’s leading citizen science research group, at the University of Oxford where he is now project manager and communications lead. E-mail: grant@zooniverse.org .