1 Introduction
There is an increasing interest in citizen science as an object of study in its own right and in investigations concerned with how to improve the implementation of citizen science projects in the future [Jordan et al., 2015]. Amongst the key issues are maximizing the quality of volunteer performance [Sprinks et al., 2017], motivating participants to sustain their contributions, and meeting other project aims that also depend on engagement, typically scientific outreach and education [e.g., Constant and Roberts, 2017; Dickerson-Lange et al., 2016]. More bluntly, a citizen science project without numerous engaged volunteers is unlikely to be viable [Kaufman, 2014]. A challenge in securing these aims is the lack of workable frameworks that allow us to understand citizen science as a designable system whose boundaries encompass social, scientific and technical elements that are interconnected, mutually constraining and hence best strengthened through their joint optimisation. In this paper we describe how a sociotechnical systems approach and, in particular, a recently released standard, BS ISO 27500:2016 “The human-centred organization” [British Standards Institute, 2016], might be used to provide a supporting conceptual framework derived from best practice across a wide range of organizational settings. We base our discussion around the principles of BS ISO 27500:2016 and what they might mean in terms of a virtual citizen science project.
Our primary focus in this paper is upon “virtual citizen science” (VCS) projects [Reed, Rodriguez and Rickhoff, 2012] that involve mass online participation, typically in order to perform analysis and classification tasks on existing data (e.g., satellite imagery, scanned historical documents, the outputs of sensors on earth and in space). Such studies tend to be driven by a core science case (i.e., the need to process data too voluminous for a professional scientist to work through alone and too rich or complex for algorithmic approaches) while typically also intending to address other agendas in science communication and engagement, although it appears such concerns emerged somewhat later in the development of the method [Straub, 2016]. The relative ease with which the technical element of the work (a web page or an app with a database back end to serve materials and record responses) can be deployed, and the focus that may be put upon it given the “one to many” nature of such projects, may obscure the complexity of the wider venture actually being entered into; furthermore, there may be little conscious recognition on the part of scientists deploying them of the true breadth of design choices available and their consequences for users. Arguably, this specific form of largely top-down crowdsourced citizen science accessible to people around the world does not differ functionally from paid forms of crowdsourcing (e.g., Amazon Mechanical Turk; [Bergvall-Kåreborn and Howcroft, 2014]) and Human Computation [Quinn and Bederson, 2011], and in its computer-based mediation of tasks may also be said to have some wider similarities with other forms of digital gig work [Taylor et al., 2017]. Consequently, it is reasonable to note that creating a virtual citizen science project is to bring into existence a form of work organization, albeit a highly specialised one defined by a scientific agenda.
Recognition of the breadth of the design space in which VCS operates, and the interconnected nature of technical and social elements, may aid in addressing a range of challenges that appear at different levels of analysis. One set of concerns lies in ensuring the quality of the data produced is adequate for scientific use [Kosmala et al., 2016]; a particular challenge given scepticism amongst some professional scientists despite the growing existence of high quality citizen-science datasets [Burgess et al., 2017; Riesch and Potter, 2014]. This may express itself in concerns as to whether citizen scientists can contribute high quality data at all, what kinds of support are required and whether deliberate or accidental contribution of poor quality data can be detected and managed, noting that data quality can and should be supported at all stages in the lifecycle of a project [Wiggins, Bonney et al., 2013; Freitag, Meyer and Whiteman, 2016]. Beyond this, the motivation and participation patterns of users are also important. Meeting the interests and aptitudes of citizen scientists is important to attract them in the first place, but so too may be sustaining and growing that motivation and contribution [Crall et al., 2017]. Different models exist in this space, from those that emphasise sustained and evolving participation [Crowston and Fagnot, 2008; Jackson et al., 2015] to those that appeal more to a “shallow and many” approach [Eveleigh et al., 2014]. Given that there exist a range of motivations for participation in a project [Raddick, Bracey, Gay, Lintott, Murray et al., 2010; Raddick, Bracey, Gay, Lintott, Cardamone et al., 2013], that might also indicate a requirement for more personalised participation options [Aristeidou, Scanlon and Sharples, 2017]. This goes beyond merely encouraging numerous contributions and contributors to sustain the project, however; agendas in science communication and education demand a genuine engagement between the citizen and the content of the science that is created [Bonney et al., 2009; Prather et al., 2013; Straub, 2016]. While perhaps early implementations of VCS could trade on the novelty of the venture, in an increasingly competitive environment in which to attract would-be volunteers [Sauermann and Franzoni, 2015] we might also give consideration to what kind of experience citizen scientists receive, both in terms of carrying out the task itself and in terms of social interaction, satisfaction and identity [Wiggins and Crowston, 2010; Jackson et al., 2015]. Furthermore, the issue of patterns of participation also speaks to challenges in terms of the democratization of science: can we design VCS projects to reduce inequality of participation [Haklay, 2016] by allowing them to appeal beyond specific demographic groups [Ponciano and Brasileiro, 2014]? In common with all forms of science, thought also needs to be given to the entire lifecycle of the scientific venture (not merely one stage of data analysis) and thus the governance of ethical and research integrity issues such that all participants are appropriately protected [Resnik, Elliott and Miller, 2015; Wiggins, Bonney et al., 2013].
Finally — and cutting across all these issues — in the same way that publication bias may obscure not just the true rate of replication of scientific results but also the rate at which scientific ventures in general fail or do not meet their aims, so too it likely obscures the rate at which citizen science initiatives themselves fail to meet their aims and the efforts of both professional and citizen scientists go to waste [Kaufman, 2014]. In summary, while creating a VCS project may appear primarily to involve making design decisions about a website, in reality it places us in quite a complex design space where an external standard may be of use both to help us understand this terrain and to draw our attention to relevant issues.
2 Relevance to virtual citizen science
The BS ISO 27500:2016 standard was introduced in March 2016 by the British Standards Institute (BSI) [British Standards Institute, 2016] and was intended as a supplement to prior standards that encouraged consideration of human needs in the design of equipment and work processes (these would include most notably ISO 6385 Ergonomics principles in the design of work systems [International Organization for Standardization, 2016], ISO 26800 Ergonomics [International Organization for Standardization, 2011] and ISO 9241 Ergonomics of human-system interaction [International Organization for Standardization, 2010]). The purpose of international standards is to provide “requirements, specifications, guidelines, or characteristics that can be used consistently to ensure that materials, processes and services are fit for purpose” [International Organization for Standardization, 2018] and they can be a resource for risk management and engagement with best practices. The specific focus of BS ISO 27500 is to promote taking a human-centred and sociotechnical perspective at an organisational level, where prior standards tended to emphasise individual systems or pieces of equipment: “It has an important role in making work more humanized which facilitates participation and improved quality of life for everyone” [British Standards Institute, 2016, p. 3]. It is a relatively broad standard intended not as a set of strictly binding rules but rather to encourage stakeholders towards human-centred practices by setting out a clear set of guiding principles and a consideration of the benefits associated with them, although the details of implementation are likely to vary depending on the needs and nature of the organization they are applied to [Barraclough and Stewart, 2016]. The principles are summarised in Table 1. Within virtual citizen science, the main advantages of adopting these ideas are four-fold. First, adopting a broad standard allows stakeholders in citizen science to locate their undertaking not as a sui generis activity but instead to recognize within their projects the construction of a sociotechnical system and, in doing so, gain access to an extended body of scholarship and best practice. Second, it signposts values and concerns beyond the purely technical and scientific that we might wish to address in our practice and can be used as a heuristic tool in design (have we given thought to and addressed the issues it identifies?). Third, deciding that a project is going to accept this standard might also be a useful way for project managers to indicate their intent both internally and externally as to how the project will be designed, run and governed. Finally, the authors of the standard claim that respecting these principles has been demonstrated across a wide variety of organizations to improve productivity, retention, engagement and output quality as well as helping to achieve non-business goals in environmental and social responsibility. These are all outcomes that may be beneficial to citizen science projects and likely to increase the probability of a successful project being produced. In order to understand why this would be, it is worth briefly reviewing the underlying theoretical background.
Read in terms of the wider literature, the standard can be seen as a vehicle for promulgating a sociotechnical systems perspective, a view of work systems that emerged out of a growing sense in the latter half of the 20th century that there was a lack of balance between the social aspects (human workers) and the technical aspects (machinery and automation) of the work system. This imbalance was thought to be the cause not only of low worker satisfaction and fraught industrial relations but also of sub-optimal productivity, industrial accidents and skills shortages. The roots of this lay in the prioritization of the technical aspects over the social aspects; new machines would be enthusiastically introduced in the quest for more efficient and rapid production and workers would be expected to ‘fit in’ around the technical core. Worse, human work itself might be rationalized (arguably ‘robotised’) to treat humans as if they were themselves elements of automation, a trend that had begun with Taylorism [Taylor, 1911]. Taylor’s agenda around work concerned the standardisation of tasks, their decomposition into the smallest possible actions and a strong element of management control subsequently exerted over the worker, with the aim of increasing workplace productivity. In practice this hoped-for transformation was often fraught with problems, particularly as little consideration was often given to human strengths and weaknesses as, for example, described heuristically in Fitts’ list [Fitts, 1951; de Winter and Dodou, 2014]. A striking example of this is found in so-called ‘left-over task allocation’ [Hendrick, 2002], where a job of work is automated in those aspects most amenable to machines and the remaining work that cannot be easily automated is dumped on the human without regard for what this package of tasks looks like or its relationship to human abilities. In addition to having a tendency to generate boring, disjointed and repetitive patterns of work, in particularly egregious cases this would lead to human work becoming effectively impossible to do well, either because humans were not well suited to it or because, in the absence of key elements of the work now automated, workers were not able to bring their skills to bear, particularly where a hands-on element had been removed [Bainbridge, 1983]. The overarching critique of the sociotechnical theorists was not primarily that productive work was determined by a ‘business case’ but rather that this business case had been interpreted uncritically to license the prioritization of technical elements of the work at the expense of consideration of the social, even as a counterbalancing force. The general form of the solution to these problems was to consider the human, technical and social elements holistically and sympathetically to seek their joint optimisation [Emery and Trist, 1960].
The above background may have some resonance with citizen science, where the ‘science case’ behind the work (and thus the data that ultimately need to be collected) might determine the form of the project in a brute-force manner with only limited regard for the experience of the contributors themselves. Indeed, there have been several formulations of crowd work in general that have rhetorically reduced human contributions to the outputs of “human processing units”, not dissimilar to the Taylorist attempt to rationalise workers as if they were just another form of industrial machinery [see Reeves, 2013; Kittur et al., 2013; Lease and Alonso, 2014, for critiques of this tendency]. Virtual citizen science tasks may also have a Taylorist flavour to them when they are similarly the result of decomposition down to a small ‘microtask’ assumed easiest for a passing unskilled participant. Furthermore, left-over task allocation is often a risk in citizen science, particularly where humans are deployed to collect or analyse data simply because automated (algorithmic) approaches are unavailable. In reviewing the tenets of BS ISO 27500:2016 we will examine different strategies for overcoming these potential problems.
3 Principles
We now turn to review each of the principles of BS ISO 27500:2016 and reflect upon how we can interpret these to provide useful guidance and ideas for the design of virtual citizen science projects that allow participants to contribute large quantities of data remotely [Bonney et al., 2009; Reed, Rodriguez and Rickhoff, 2012]. For ease of explanation we re-order the principles set out in ISO 27500:2016; no weighting appears to be intended by the authors themselves, who have previously presented them in varying order [Barraclough and Stewart, 2016]. This is followed by a discussion of the characteristics of a human-centred design process and our reflections overall on this way of thinking about citizen science.
3.1 Adopt a total system approach
In keeping with the sociotechnical underpinning of the standard, it is important to think about both social and technical elements of the system being constructed (i.e., it does not begin and end with a usable interface or app alone but should also consider what this is likely to give rise to organisationally and socially, and how experiential aspects like motivation will be addressed in design terms). A broad insight offered by sociotechnical theory is that creating a piece of technology we expect people to use or interact with manifests a range of design decisions about how they will have to behave and think; it is behavior shaping [Rasmussen and Pejtersen, 1995]. It is important that such decisions are made consciously and not by default, either because they have not been recognised or because they have been falsely considered immutable. One way to adopt this perspective is to begin with some sort of conceptual systems model for a citizen science project.
There are many ways in which a conceptual system model can be defined and represented. For example, a generalised systems model of virtual citizen science organizations is defined by Wiggins and Crowston based around the interplay between individual and organisational inputs, outputs, states and processes [Wiggins and Crowston, 2010]. In a separate paper, the same authors discuss a range of citizen science typologies in terms of how they differ in scientific, technical and organizational factors; this may also offer a useful set of categories [Wiggins and Crowston, 2011]. Based on an extensive review of Zooniverse projects undertaken with platform experts, Tinati and colleagues developed a set of design claims based on the themes of Task Specificity, Community Development, Task Design, Public Relations and Engagement [Tinati et al., 2015]. The purpose of this exercise is that the organization itself should use this understanding of the system as a whole as a resource for design decision making. Consequently, there is probably not a single correct way to do this, and it may vary from a relatively informal sketch (e.g., a rich picture [Checkland, 1981]) through to formal forms of system diagramming. The essential point is to work with a common understanding of a citizen science project that captures the interplay between and within the social (psychological and group, in our example) and technical elements (interface, task and data aggregation) that may be present.
From work in our laboratory, we can offer a variation of the so-called “Onion Model” [Wilson and Sharples, 2015] customized to our citizen science case of crater counting (Figure 1) that we found useful in understanding the complexity of the design space available. The goal of adopting this model was to provoke consideration of how design decisions at different levels relate to and constrain the overall form of the system and can affect two key dependent variables: participant motivation and the quality of the scientific contributions generated. A virtual citizen science project typically includes a large group of workers (however brief their involvement), the issuing of tasks (be it classifications in a Zooniverse project or data collection through an app) and the recovery and judgement of the product of those tasks. Layered onto this will typically be some form of data aggregation as well as social interactions between workers (e.g., a web forum) and often a group of scientists who provide some form of overarching management and governance. That this activity might be, at least in part if not in whole, mediated or created implicitly through webpages, mobile phone apps, servers and databases does not mean these elements are not present or that their manifestation cannot be altered or redesigned.
At the lowest level of analysis, the user is asked to make some sort of judgment or decision about the imagery presented. The issues here concern human perceptual psychology with regard to the imagery presented and the parameters under which good decisions and judgments can be made; one useful way of classifying these activities is to draw upon the sensory and perceptual psychophysics literature, which provides taxonomies of different perceptual judgements [e.g., Pelli and Farell, 2010]. For example, users may be asked to judge detection/threshold (is a stimulus present), discrimination (are two stimuli the same or different) or undertake matching (adjusting an attribute of one stimulus until it matches another). Corresponding response formats include binary responses (yes/no), n-alternative forced choice (nAFC; pick the best categorisation or match to pre-defined examples) and rating scales. Each of these methods has a well-established literature concerning how various combinations of judgements and responses can affect performance and motivation. Even from this brief review it is clear that there are many different choices one could make about ways of recovering information from having citizen scientists scrutinise imagery or data.
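To make the breadth of that design space concrete, the following fragment is a minimal, purely illustrative Python sketch of how judgement types and response formats might be represented as explicit, swappable task configurations rather than left implicit in the interface; the class names, prompts and pairings are our own assumptions and are not prescribed by the standard or by any particular platform.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Judgement(Enum):
    DETECTION = auto()       # is a stimulus/feature present at all?
    DISCRIMINATION = auto()  # are two stimuli the same or different?
    MATCHING = auto()        # adjust/select until one stimulus matches another

class Response(Enum):
    BINARY = auto()          # yes/no
    N_AFC = auto()           # n-alternative forced choice against examples
    RATING = auto()          # graded scale, e.g. confidence from 1 to 5

@dataclass
class TaskSpec:
    """One pairing of perceptual judgement and response format for a microtask."""
    judgement: Judgement
    response: Response
    prompt: str

# The same underlying science case can be framed in quite different ways:
candidate_tasks = [
    TaskSpec(Judgement.DETECTION, Response.BINARY,
             "Does this image tile contain at least one crater?"),
    TaskSpec(Judgement.DETECTION, Response.RATING,
             "How confident are you that a crater is present (1-5)?"),
    TaskSpec(Judgement.MATCHING, Response.N_AFC,
             "Which of these example craters best matches the highlighted feature?"),
]
```

Framing the choice in this way makes it easier to prototype and compare alternative formulations of the task (see section 3.3) rather than committing to the first formulation that comes to mind.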
This imagery is of course presented using some sort of user interface; an optimal user interface will allow easy interaction with the imagery and easy reporting of judgments by the citizen scientist. Factors to consider here, above basic usability issues, might in an imagery interpretation scenario concern window size and relative image resolution. These choices are themselves conditioned by constraints: the quality of the available imagery, the nature of the judgement being carried out and the cognitive strategies used for carrying it out (e.g., recognising an environmental feature will often require understanding how it fits into the wider terrain) [e.g., Hodgson, 1998; Battersby, Hodgson and Wang, 2012]. The sum of activities the citizen scientist is undertaking will add up to some sort of ‘work design’; at this point we might begin to wonder what the optimal pattern of workflow would be to keep the citizen scientist engaged. For example, it is well established that repetitious tasks in which the target rate is low suffer from a so-called “vigilance decrement” where disengagement leads to reduced performance [Mackworth, 1948; Teichner, 1974]. If a task starts to take on this quality we might ask whether it is better to artificially raise the target rate or perhaps to redesign the activity so that participants see more variation in their interaction and perhaps develop or demonstrate different skills [Sprinks et al., 2017].
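As one concrete, entirely hypothetical illustration of "artificially raising the target rate", a volunteer's task queue could be seeded with a proportion of known-positive subjects. The sketch below assumes, for simplicity, that real targets are rare; the floor rate, function names and data structures are illustrative only, not a recommendation from the standard or any platform.

```python
import random

def build_queue(real_subjects, seed_positives, floor_rate=0.1):
    """Interleave known-positive 'seed' subjects into a volunteer's task queue
    so that the proportion of targets stays above floor_rate even when real
    targets are rare: one possible hedge against the vigilance decrement.
    (Assumes real_subjects contain almost no targets; values are illustrative.)"""
    n_real = len(real_subjects)
    # solve n_seeds / (n_real + n_seeds) >= floor_rate for n_seeds
    n_seeds = int(floor_rate * n_real / (1.0 - floor_rate)) + 1
    seeds = random.sample(seed_positives, min(n_seeds, len(seed_positives)))
    queue = list(real_subjects) + seeds
    random.shuffle(queue)
    return queue
```

Any such seeding interacts with the transparency and deception issues discussed in section 3.5, so volunteers should be told that some items have known answers.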
At the highest level we need to consider how the many citizen scientists are organised and managed: what steps are taken to check or balance performance, what opportunities there are for community interaction, and how responses can be aggregated into accurate answers to the questions we are most interested in. All the elements in Figure 1 are intimately related and both affect and constrain each other. A difficult task may leave a citizen scientist feeling demotivated, the user interface will be configured to match the job design, the level of explicit organisation will depend upon the kinds of activities being undertaken, schemes of aggregation are affected by what the citizen scientist has been asked to do and the perceived quality of that output, and so on. The overriding point here is that human-centred design begins with a recognition that users (both scientists and citizen scientists) are located in a relatively complex system and that consideration of their needs must occur at several levels of analysis, not just in terms of technical elements alone.
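At this organisational level, the simplest aggregation schemes reduce to (possibly weighted) voting over repeated classifications of the same subject. The fragment below is a minimal sketch of that idea only; real platforms use considerably more sophisticated approaches, and the weighting scheme and data structures here are assumed purely for illustration.

```python
from collections import defaultdict

def aggregate_votes(classifications, weights=None):
    """Combine multiple volunteer classifications of one subject into a
    consensus label via (optionally weighted) voting.

    classifications: list of (volunteer_id, label) pairs for one subject.
    weights: optional dict mapping volunteer_id -> reliability weight;
             if omitted, every vote counts equally (simple majority vote).
    """
    totals = defaultdict(float)
    for volunteer_id, label in classifications:
        w = 1.0 if weights is None else weights.get(volunteer_id, 1.0)
        totals[label] += w
    consensus = max(totals, key=totals.get)
    support = totals[consensus] / sum(totals.values())  # share of weighted votes
    return consensus, support

# e.g. five volunteers classify one image tile; one prolific volunteer has
# earned a higher weight from past agreement with consensus answers
votes = [("a", "crater"), ("b", "crater"), ("c", "no_crater"),
         ("d", "crater"), ("e", "no_crater")]
print(aggregate_votes(votes, weights={"a": 2.0}))  # ('crater', 0.67) approx.
```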
It is also from this view that we can understand the occurrence of some of the problems mentioned earlier, in terms of phenomena such as left-over task allocation and difficult, impossible or just tedious tasks, and also the means of their remediation. For example, if users find completing a given type of classification or collection task very difficult, we might look to whether it is physically/cognitively plausible as presented, whether the nature of the task could be adapted (perhaps to emphasize contextual or supporting information), and whether aggregation techniques could be modified to manage noise in the data. Indeed, we might even have cause to ask whether anyone should be undertaking the task at all (perhaps implicating renewed exploration of AI or secondary data approaches) or whether a citizen science project is actually a valid approach given the scientific problem at hand [Kaufman, 2014]. The ultimate reason for recognising that these elements exist in a citizen science project is to consciously identify the challenges that lie in reconciling each of the elements with the others in a harmonious manner.
3.2 Capitalize on individual differences as an organizational strength
At first blush, the relatively simple tasks often set for volunteers in citizen science platforms would appear to offer little potential for variation in the kinds of contributions made by different users. However, studies of user behavior, particularly on online virtual citizen science platforms, reveal that variations exist in terms of aptitudes, motivations and levels of engagement, and that designers could more directly address these emerging user profiles [Ponciano and Brasileiro, 2014; Aristeidou, Scanlon and Sharples, 2017]. For example, visitor behavior on online citizen science platforms follows a long-tail distribution, with the majority of citizen scientists only visiting once and performing a few tasks [Nov, Arazy and Anderson, 2011]. Considering two Zooniverse projects (Galaxy Zoo and the Milky Way Project), 70% of volunteers visited only once, contributing approximately 20% of the workload [Ponciano, Brasileiro et al., 2014]. While individual projects may differ slightly in profile, this power-law style pattern of contributions has been observed as a recurring and perhaps defining pattern [Haklay, 2016]. This suggests the existence of (at least) two distinctive types of citizen scientist: the one-time visitor previously described and the committed volunteer. We might choose to design for the ‘dabbler’ to yield a valuable-at-scale ‘one more click’ [Eveleigh et al., 2014] differently from how we design for the committed volunteer, for whom we might give more thought to the motivational benefits of task complexity [Gerhart, 1987] and of autonomy and variety [Dubinsky and Skinner, 1984], and who is likely to produce better quality data owing to growing experience [Prather et al., 2013].
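A first step towards designing separately for dabblers and committed volunteers is simply to measure the shape of the contribution distribution in one's own project. The fragment below sketches how the kind of headline figures quoted above (share of one-time visitors, share of workload they contribute) might be computed from platform logs; the input structures and example numbers are assumptions for illustration only.

```python
def contribution_profile(tasks_per_volunteer, visits_per_volunteer):
    """Summarise the 'long tail' of participation: what proportion of volunteers
    visit only once, and what share of the total workload do they contribute?
    Both inputs are dicts keyed by volunteer id (structure assumed for illustration)."""
    one_timers = [v for v, visits in visits_per_volunteer.items() if visits <= 1]
    total_tasks = sum(tasks_per_volunteer.values())
    one_timer_tasks = sum(tasks_per_volunteer.get(v, 0) for v in one_timers)
    return {
        "share_of_volunteers_visiting_once": len(one_timers) / len(visits_per_volunteer),
        "share_of_workload_from_one_timers": one_timer_tasks / total_tasks,
    }

# e.g. four volunteers, three of whom visited once and did little of the work
print(contribution_profile(
    tasks_per_volunteer={"a": 3, "b": 2, "c": 1, "d": 44},
    visits_per_volunteer={"a": 1, "b": 1, "c": 1, "d": 12},
))
# {'share_of_volunteers_visiting_once': 0.75, 'share_of_workload_from_one_timers': 0.12}
```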
In addition to the individual differences between citizen scientists based on their visit behavior, how their interaction with the system changes over time is also worthy of consideration. For example, it has been argued that participation patterns in user-generated content projects [Crowston and Fagnot, 2008; Crowston and Fagnot, 2018] may evolve over time, with the final outcome being committed advanced users involving themselves more in the ‘meta’ community aspects of the project and defining their own ‘roles’ and responsibilities within the community [Jackson et al., 2015]. Other findings emphasize the primacy of intrinsic interest in the topic or area of study, although even here experienced participants chafed against the limited scope of their potential involvement as their participation matured [Aristeidou, Scanlon and Sharples, 2017]. At the least, the potential for this evolution has important implications regarding the platform and social stage of the system model (Figure 1). It is imperative that the design of these mechanisms facilitates the emergence of a citizen science culture [Newman et al., 2012], where more experienced citizen scientists can act as mentors for new participants when guidance is sought, helping to ensure they follow the established norms of performing scientific work. This social interaction may act as a motivation for prolonged participation beyond that provided by scientific contribution alone [Van Den Berg, Dann and Dirkx, 2009]. Beyond the training benefits that can arise from direct interaction between more committed citizen scientists and the inexperienced majority, a less obvious benefit can be realized at the aggregation stage of the system. By capitalizing on the relationship between citizen scientist experience and performance, many existing citizen science platforms identify analyses produced by more committed volunteers as reference-standard data [Lintott and Reed, 2013]. In the absence of ground truth (as may be the case in space and planetary projects), these data are used to verify and validate the dataset as a whole. A strategy therefore would be to design mechanisms for identifying highly committed citizen scientists and the data they produce at the aggregation stage of the system, to improve the validity of any resulting scientific outcomes. As the standard suggests, it is important that citizen science systems recognize that citizen scientists are not clones of each other, but individuals with various levels of experience, motivation and performance whose strengths can complement each other. Whilst it is true that past citizen science research has acknowledged differences between its users, this has normally been through post-hoc analysis of existing projects considering the effect on data validation [Freitag, Meyer and Whiteman, 2016] or changes in volunteer motivation [Jackson et al., 2015]. Future citizen science endeavors should seek to accommodate these differences, their relationships and the elaborate mechanics required [Crowston, Mitchell and Østerlund, 2018] at the design stage, in order to develop social and collaborative motivations that encourage commitment and ultimately improve the analysis produced.
3.3 Make usability and accessibility strategic objectives
This principle emphasizes the potential risks that can emerge from a failure to achieve usability of products and services; they should function as expected, provide utility to users and ideally be pleasurable and informative in use. In the case of a citizen science project it is clear that poor design could lead to participant disengagement and possibly poor-quality data production. Inequalities may also arise where certain groups of users find a website or app unreasonably inaccessible, leading to under-representation of particular demographic groups [Haklay, 2016]. One approach to this is to use the ISO 9241-210 standard “Ergonomics of Human-System Interaction” [International Organization for Standardization, 2010], which has been adopted by European governments to ensure the accessibility of websites.
More widely however, usability and accessibility are generally recognized as the product of adopting a principled design process (as opposed to a one-off intervention) that extends across the lifecycle of the project from inception through to its closure or legacy, and BS ISO 27500:2016 identifies these features as characteristic of a human-centred approach to design coherent with the wider aims of the standard. Key qualities of a user-centred approach to usability are having an early and sustained focus on engagement with the intended users, the use of empirical measurement where possible and an iterative development process that makes use of prototyping and testing as the system evolves [Gould and Lewis, 1983]. There is a wide range of frameworks and methods for achieving this [e.g., Preece, Rogers and Sharp, 2015] that could be adapted to the citizen science case, although it is also reasonable to note that not all citizen science projects will have the same needs or the same resources available to them. This might include, for example, the use of well-known heuristic guides to usability design [e.g., Nielsen, 1994; Gerhardt-Powals, 1996] that provide guidance and schemes for limited self-assessment. It has been noted previously that citizen science initially had only limited engagement with HCI professionals, but that this situation has changed in recent years [Preece, 2016]. For example, over and above these generic guides, the Design Claims described by Tinati and colleagues [Tinati et al., 2015] concerning Zooniverse sites now provide a range of specific design insights (which go beyond accessibility alone to system-wide design choices) that can offer guidance to designers and address issues such as the best forms of instruction, task workflow and the use of feedback.
In Figure 2 we illustrate a relatively generic design process we have used ourselves; it is probably at the more conservative, linear end of the available options [Houghton, Balfe and Wilson, 2015]. Given the nature of virtual citizen science style projects, it begins with the definition of the science case, which will likely be carried out by professional scientists. This will define the data needs and effectively define a mission statement for the activity (e.g., “Identify and quantify features in different sets of planetary imagery”). However, as discussed earlier, this should not be taken as a statement about what the participants should actually do — there are in fact varied ways in which this could be achieved featuring different types of judgements and workflows [Sprinks et al., 2017], and this should be considered in the wider terms of the systems diagram identified in Figure 1. One could, for example, ask participants to mark features, but one might instead build up an estimate of numerosity from simply asking participants to judge which of two images appears to contain the most craters, or perhaps identifying features and marking them could be split into different tasks. The process then goes through some formal experiments (if a particular issue is identified for which there is no pre-existing data, e.g., how well can humans recognize specific features, is this a fair demand?) through to prototyping, limited release and then final release, where the project continues to be monitored and continuously improved. The major difference between prototype stages generally lies in the maturity of the technology and also the granularity of data collection available. In an experimental study it might be reasonable to collect fine-grained individual measures (e.g., eye tracking, structured qualitative interviews, workload indices or fine-grained screen captures) whereas closer to deployment, feedback is likely to be received in more abbreviated form (e.g., questionnaires, analysis of forum comments) and, without the means to ensure compliance, may suffer from biases in respondent self-selection. On the other hand, with greater numbers, empirical data about task performance, task completion and data quality become more compelling as metrics we can infer from. This process both feeds forward and feeds back; from user feedback an amended prototype might be completed, and from that specific experimental activities may be indicated. Indeed, this might feed back as far as re-reviewing the premise of the work and making some decisions about the adequacy of the source imagery in light of mass testing.
While this example might be seen as fairly typical, it would be remiss not to note that the nature of citizen science itself offers some opportunities for distinctive design process options. As a temporally extended activity already featuring mass participation, it is possible to carry out iterative development as the project runs, monitoring engagement and quality of output directly and carrying out further survey or even ethnographic work to inform redesign [Jordan et al., 2015]. Following a more agile “Release early. Release often. And listen to your customers” approach [Raymond, 1999], the platform becomes its own ongoing mass participation HCI/usability experiment alongside fulfilling its citizen science mission. For example, it may be possible to switch in and out different interfaces or task flow options for A/B testing [Sprinks et al., 2017]. In balancing between the more staged, linear process described in Figure 2 and taking advantage of more agile approaches, there are trade-offs to consider. On the one hand, given that the market for citizen science contributors can be said to be competitive [Crall et al., 2017], giving access to a project too early might be seen as wasting an opportunity if elements are not right first time, particularly given that the pattern of visits typically peaks early in the life of a project around launch publicity and novelty. Given that the conversion rate from first-time visitors to regulars is typically low at the best of times [Eveleigh et al., 2014], the opportunity cost is perhaps most severely felt in the event the quality of data produced is lower than it could have been. On the other hand, properly implemented and communicated, and given that ‘helping’ and an interest in being an ‘enthusiast’ for the venture are motivations amongst some [e.g., Reed, Raddick et al., 2013], this can be seen as a way of engaging citizen scientists in an element of co-design, listening to their voice, engaging in a dialogue about the project and building investment in the ongoing work. The balance between these two positions will likely vary given the nature of the project, the resources available and a sense of its desired end-point; projects likely to impact people beyond science alone might lend themselves better to being supported over a wider evolution by their community, whereas projects concerned with more abstract scientific concerns, where it may be harder to identify a community a priori, may benefit from more polish before release.
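By way of illustration only, A/B testing of interface or workflow variants requires little more than a stable assignment of volunteers to variants and a comparison of outcome measures. The sketch below assumes hypothetical variant names and a simple binary outcome per classification (e.g., agreement with the eventual consensus); it is not tied to any particular platform's API.

```python
import hashlib

def assign_variant(volunteer_id, variants=("interface_A", "interface_B")):
    """Deterministically assign a volunteer to an interface/workflow variant so
    they see the same version on every visit (a common A/B-testing pattern;
    the variant names here are placeholders)."""
    digest = hashlib.sha256(volunteer_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def compare_variants(results):
    """Summarise per-variant outcomes; 'results' maps variant name -> list of
    0/1 outcomes (e.g. whether each classification matched the consensus)."""
    return {variant: sum(xs) / len(xs) for variant, xs in results.items() if xs}
```

Whether such comparisons are run before launch or continuously during the live project is itself one of the trade-offs discussed above.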
Going further, it may also be possible to transition from a user-centred perspective to a more radical collaborative and co-design situation where the line between scientist and citizen scientist is blurred, with citizens contributing to a wider range of scientific activities from deciding on research questions at the beginning through to reporting or surveying impact at the end [Bonney et al., 2009]. It is reasonable to think that the community could itself co-design interfaces on this basis too, using participatory and crowdsourced design techniques. Such ventures at the present time tend to be more associated with collecting observations, volunteered geographic information and similar smaller-scale community/community-of-interest type projects than with virtual citizen science [Preece, 2016]. Again, a necessary precondition for such projects would be being able to identify a relevant community that might contribute before the project itself exists. That said, as online virtual citizen science projects gather steam and build their own relatively stable communities, this could be a design direction for future successor projects and “refreshes” of long-standing ones.
3.4 Value personnel and create meaningful work
In terms of creating activities that engage and show respect for the value of the user’s input, we might begin by considering why people participate in these projects. One of the earliest surveys into the motivations of VCS platform participants was conducted by Raddick, Bracey, Gay, Lintott, Murray et al. [2010]. Through a combination of analysis of forum posts and interviews with specific users of the Galaxy Zoo project, a set of 12 motivational categories was defined that included scientific contribution, learning, discovery and being part of a community. Follow-up studies [Raddick, Bracey, Gay, Lintott, Cardamone et al., 2013] have reiterated the importance of ‘contributing to science’ as a primary motivation for participation, with social engagement (being involved with a like-minded group), helping (feeling important, making a contribution) and having access to pleasing imagery also identified [Mankowski, Slater and Slater, 2011; Reed, Raddick et al., 2013]. The general design implication of these findings is that a citizen science project should concern actual, valid science and that this should be communicated to users, who should be given credit (whether within the platform or through co-authorship on papers) for their contributions. Essentially, citizen scientists want to be scientists at some level, rather than “Human Processing Units”, and the conduct of the project should respect this.
One practical development at the turning point in industrial management thinking towards sociotechnical theory was the growth of Job Characteristics Theory (JCT) [Hackman and Oldham, 1975; Hackman and Oldham, 1980], a perspective on what makes work implicitly meaningful and engaging ‘by design’. In the years since, JCT has been subject to extensive tests and, while some have raised criticisms, these lie more in suggestions for extensions and variants of the model than in an undermining of its core tenets [Oldham and Hackman, 2010]. It has also recently been suggested that it could be specifically applied to the design of crowd work [Wiggins and Crowston, 2010; Kittur et al., 2013]. We extend the analysis of that idea now. Hackman and Oldham suggest that desirable, engaging work that offers intrinsic motivation has the following characteristics: skill variety, task identity, task significance, autonomy and feedback. In Table 2 we explain these categories and suggest ways in which citizen science platforms could apply these ideas; in general they point to the idea of offering flexible ways of interacting with the platform rather than carrying out simple repetitive tasks that do not seem to add up to anything in particular.
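For readers wanting a more quantitative handle on these characteristics, Hackman and Oldham's Job Diagnostic Survey combines them into a single "motivating potential score". The sketch below implements that standard formula; the 1-7 rating scale and the example numbers are illustrative assumptions rather than measurements of any real citizen science task.

```python
def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Hackman and Oldham's motivating potential score (MPS): the three
    'meaningfulness' characteristics are averaged, then multiplied by autonomy
    and feedback, so a near-zero rating on either of those two drags the whole
    score down regardless of the others. Inputs are typically ratings on a
    1-7 Job Diagnostic Survey scale (assumed here)."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3.0
    return meaningfulness * autonomy * feedback

# e.g. a repetitive microtask with little autonomy or feedback scores low even
# if the task is judged quite significant:
print(motivating_potential_score(2, 2, 6, 1, 2))  # ~6.7 out of a possible 343
```

The multiplicative form is the design lesson: adding significance alone cannot compensate for a task that offers volunteers no autonomy and no feedback.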
3.5 Values: Be open and trustworthy, Act in socially responsible ways, Ensure health, safety and well-being are business priorities
In the standard these are three separate principles; while the first four items discussed speak to explicit design factors, these speak to values that might inform them. For practical reasons we merge them here, as we believe that in the case of virtual citizen science they are closely interlinked in practice, with openness in particular being crucial to supporting social responsibility and well-being. However, in terms of adopting BS ISO 27500:2016 in a project, what is key is that these values-based elements receive importance and weight equal to any other element of the project.
Given that many citizen science projects are undertaken for socially responsible reasons, we would hope that this would extend to the governance of the project itself. The most likely area where social responsibility could become challenging is the use of data collected through the project, particularly where data that participants have produced in good faith for one purpose are reused for another (e.g., data gathered about global warming impacts by ecologically motivated citizen scientists are sold to a logging company). There may also be situations where the professional scientist’s commitment to open publication of data comes into conflict with a sense of community ownership by citizen scientists. A related issue is the allocation of credit — whether the status of the citizen contribution merits authorship or similar. These are all challenging issues and matters of conflict and tension in normative professional science, and there may be good reasons why approaches vary between projects (e.g., depending on the perceived sensitivity of the data set and the compatibility between the scope of contribution and thresholds for authorial recognition). The best course of action would appear to lie in the related value of being open and transparent from the start: “Scientists who work with citizens should clearly discuss data ownership and other intellectual property issues with citizen volunteers at the beginning of the project, and periodically, and as needed, to ensure mutual understanding. They may also find it useful to negotiate agreements that recognize the interests of all stakeholders” [Resnik, Elliott and Miller, 2015]. However, this is not to say that exploitative behaviour towards citizen scientists would ever be acceptable even on the basis of “the small print”; the underlying approach must be a reasonable one.
One special challenge to being ‘open and trustworthy’ is posed by the dual status of citizen scientists themselves. While the primary forms of participation are analysing and contributing data, a wide range of forms of participation and inter-relation are possible in citizen science paradigms that create different relationships between professional scientists and citizen scientists [Bonney et al., 2009; Shirk et al., 2012], and it may be increasingly challenging to acquire clarity as to exactly where the citizen scientist stands in terms of being a citizen and being a scientist; this may have implications for design choices. For example, while transparency is normally understood as a virtue in science, there are good reasons (as is explicitly recognised in many ethical frameworks) for not disclosing everything to users, rendering them experimental participants to whom an enhanced duty of care is now owed. The main motivation for this is likely to lie in quality management. One possible scenario is the use of “foil” stimuli. Foils are elements inserted into a questionnaire or other tool of measurement to evaluate the quality of response or to assess compliance with instructions. In citizen science the intent behind this can vary from trying to ‘catch out’ someone who is deliberately trying to spoil a dataset (vandalism) through to trying to calibrate the performance of individuals against ‘gold standard’ or even artificially created stimuli. The risks posed here lie on several levels — reputational risk to the project, the accusation of wasting people’s time and perhaps, in projects skewed towards discovery, disappointment on the part of a user who believes that they have made a major discovery. If over-used, it may more generally begin to undermine relations between professional and citizen scientists, creating suspicion and a “them and us” culture in what is supposed to be a collaborative, collegial venture [Kelman, 1967; Hertwig and Ortmann, 2008]. Where experimental deception is employed as a deliberate strategy it should not be casually adopted; rather, it should be used only when strictly required and justified, assessed in advance for risks and harms, and a robust and timely debriefing process should be put in place to limit potential harm.
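Where calibration against gold-standard subjects is judged acceptable under the safeguards just described, its mechanics can be quite simple. The fragment below is a minimal, assumption-laden sketch of estimating a volunteer's accuracy from whichever seeded items they happen to have seen; the data structures are hypothetical, and such estimates could in turn feed the kind of weighted aggregation sketched in section 3.1.

```python
def accuracy_on_gold(classifications, gold_labels):
    """Estimate a volunteer's accuracy from the subset of their classifications
    that fall on 'gold standard' subjects with known answers.

    classifications: dict subject_id -> label given by this volunteer
    gold_labels:     dict subject_id -> known correct label (the seeded items)
    Returns (accuracy, number of gold items seen); accuracy is None if the
    volunteer has not yet encountered any gold items.
    """
    seen = [s for s in classifications if s in gold_labels]
    if not seen:
        return None, 0
    correct = sum(classifications[s] == gold_labels[s] for s in seen)
    return correct / len(seen), len(seen)
```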
Practically, we could argue that judgements made about acceptable and unacceptable activities should be informed by normative ethical and research integrity frameworks together with data protection frameworks [American Psychological Association, 2010; World Conference on Research Integrity, 2010; European Parliament and European Council, 2016]. In doing so, unless there are good reasons to deviate from this, it seems advisable to respect the dual status of users of citizen science projects as being both scientists and ‘experimental participants’ and to apply those rules which take best care of the user if those two statuses come into conflict. Although different bodies use different wordings and may take different emphases, commonly agreed principles address the above cases: informed consent, restrictions on data reuse and the responsible handling of necessary deception.
The other commonly agreed ethical principle relevant here is that participants should come to no harm — that their health, safety and well-being are priorities. In terms of what health and safety concerns we might have for participants in virtual citizen science, the most obvious concerns the interaction with information technology equipment, while recognising that in most frameworks individuals are usually responsible for taking reasonable steps to look after themselves as well. One way to address this within a project might be to make relevant information from NIOSH/OSHA/HSE (U.K.) or similar about remote working practices [e.g., Health and Safety Executive, 2011] and lone working practices [e.g., Health and Safety Executive, 2013] available to participants on the project website and to monitor for anything obviously irresponsible (e.g., in the event users are spending multiple hours on a task, they might be prompted to take a break). Another way of gaining leverage on this issue is to address it as a cultural factor within the project through forum communications and messaging.
In common with most online activity, another form of well-being that must be maintained concerns the privacy of citizen scientists and reputational and social risks [Preece, 2016]. This is perhaps another area where a clash might be created by the dual status of the citizen scientist; normative integrity frameworks tend to emphasise accountability and transparency. However, we can also recognise that a strong interpretation of this notion will result in citizen scientists being placed in a position where they are unable to manage the responsibilities placed on them owing to a lack of control over the overall venture. On the principle that the highest level of protection should be offered, citizen scientists should have their privacy protected through the adoption of best practices such as Privacy-By-Design [Jerusalem Resolution on Privacy By Design, 2010] and coherence with emerging legal standards [European Parliament and European Council, 2016]. Having said this, above this baseline we also note that, in contrast to normative scientist-subject experimental designs, the participatory and community-building nature of citizen science may also support negotiation and discussion of the costs and benefits of privacy issues, and that it might be possible to establish more nuanced, “contextually-appropriate” norms agreeable to all than would be the case in a normal laboratory setting [Bowser et al., 2017].
4 Conclusion
One of the apparent paradoxes of virtual citizen science is that while it is commonly referred to as a form of ‘crowd work’ and the sheer number of contributors is a defining feature for design, the requirement that it exists at all also reaffirms the view that human time and attention are extremely precious resources. The principles outlined here constitute a sociotechnically-informed approach to virtual citizen science that may correct for a tendency to fail to reconcile the science case with a worthwhile user experience (by analogy to businesses that fail to reconcile the business case with productive and safe work design). It is noteworthy that, via a systems model, the borders of the designable system extend well beyond a web interface, app or even the science case alone to encompass organizational and social aspects of the undertaking.
As noted earlier, the standard is not intended as a rigorous set of rules to be blindly followed but rather as a set of concerns, provocations and questions derived from a principled perspective on how humans need to be thought about and supported as they interact purposefully with machines and data. While the specific responses, system model and development process we outline here may not be suitable for all, we would suggest that many projects would benefit from explicit consideration of what system model and development process might be useful for their needs and from reflecting upon how they are addressing the tenets of the standard. In conclusion, we suggest that BS ISO 27500:2016 has the potential to strengthen citizen science by offering both a new way of thinking about these projects and a set of tools and practices that ultimately support the most valuable part of any system — the people within it.
References
-
American Psychological Association (2010). Publication manual of the American Psychological Association. 6th ed. Washington, DC, U.S.A.: APA.
-
Aristeidou, M., Scanlon, E. and Sharples, M. (2017). ‘Profiles of engagement in online communities of citizen science participation’. Computers in Human Behavior 74, pp. 246–256. https://doi.org/10.1016/j.chb.2017.04.044 .
-
Bainbridge, L. (1983). ‘Ironies of automation’. Automatica 19 (6), pp. 775–779. https://doi.org/10.1016/0005-1098(83)90046-8 .
-
Barraclough, S. and Stewart, T. (2016). ‘How to become a human-centred organization’. The Ergonomist Nov-Dec, pp. 8–9.
-
Battersby, S. E., Hodgson, M. E. and Wang, J. (2012). ‘Spatial resolution imagery requirements for identifying structure damage in a hurricane disaster: a cognitive approach’. Photogrammetric Engineering & Remote Sensing 78 (6), pp. 625–635. https://doi.org/10.14358/pers.78.6.625 .
-
Bergvall-Kåreborn, B. and Howcroft, D. (2014). ‘Amazon mechanical turk and the commodification of labour’. New Technology, Work and Employment 29 (3), pp. 213–223. https://doi.org/10.1111/ntwe.12038 .
-
Bonney, R., Ballard, H., Jordan, R., McCallie, E., Phillips, T., Shirk, J. and Wilderman, C. C. (2009). Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education . A CAISE Inquiry Group Report. Washington, D.C., U.S.A.: Center for Advancement of Informal Science Education (CAISE). URL: http://www.informalscience.org/public-participation-scientific-research-defining-field-and-assessing-its-potential-informal-science .
-
Bowser, A., Shilton, K., Preece, J. and Warrick, E. (2017). ‘Accounting for privacy in citizen science: ethical research in a context of openness’. In: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing — CSCW ’17 . New York, NY, U.S.A.: ACM Press, pp. 2124–2136. https://doi.org/10.1145/2998181.2998305 .
-
British Standards Institute (2016). BS ISO 27500:2016 the human-centred organization — rationale and general principles. U.K.: BSI Standards Limited.
-
Burgess, H. K., DeBey, L. B., Froehlich, H. E., Schmidt, N., Theobald, E. J., Ettinger, A. K., HilleRisLambers, J., Tewksbury, J. and Parrish, J. K. (2017). ‘The science of citizen science: exploring barriers to use as a primary research tool’. Biological Conservation 208, pp. 113–120. https://doi.org/10.1016/j.biocon.2016.05.014 .
-
Checkland, P. (1981). Systems thinking, systems practice. U.S.A.: Wiley.
-
Constant, N. and Roberts, L. (2017). ‘Narratives as a mode of research evaluation in citizen science: understanding broader science communication impacts’. JCOM 16 (04), A03. https://doi.org/10.22323/2.16040203 .
-
Crall, A., Kosmala, M., Cheng, R., Brier, J., Cavalier, D., Henderson, S. and Richardson, A. (2017). ‘Volunteer recruitment and retention in online citizen science projects using marketing strategies: lessons from Season Spotter’. JCOM 16 (01), A1. URL: https://jcom.sissa.it/archive/16/01/JCOM_1601_2017_A01 .
-
Crowston, K. and Fagnot, I. (2008). ‘The motivational arc of massive virtual collaboration’. In: Proceedings of the IFIP WG 9.5 Working Conference on Virtuality and Society: Massive Virtual Communities (Lüneberg, Germany, 1st–2nd July 2008).
-
Crowston, K. and Fagnot, I. (2018). ‘Stages of motivation for contributing user-generated content: a theory and empirical test’. International Journal of Human-Computer Studies 109, pp. 89–101. https://doi.org/10.1016/j.ijhcs.2017.08.005 .
-
Crowston, K., Mitchell, E. and Østerlund, C. (2018). ‘Coordinating advanced crowd work: extending citizen science’. In: Proceedings of the 51st Hawaii International Conference on System Sciences . ACM, pp. 1681–1690. https://doi.org/10.24251/hicss.2018.212 .
-
de Winter, J. C. F. and Dodou, D. (2014). ‘Why the Fitts list has persisted throughout the history of function allocation’. Cognition, Technology & Work 16 (1), pp. 1–11. https://doi.org/10.1007/s10111-011-0188-1 .
-
Dickerson-Lange, S., Eitel, K., Dorsey, L., Link, T. and Lundquist, J. (2016). ‘Challenges and successes in engaging citizen scientists to observe snow cover: from public engagement to an educational collaboration’. JCOM 15 (01), A01. https://doi.org/10.22323/2.15010201 .
-
Dubinsky, A. J. and Skinner, S. J. (1984). ‘Impact of job characteristics on retail salespeople’s reactions to their jobs’. Journal of Retailing 60 (2), pp. 35–62.
-
Emery, F. E. and Trist, E. L. (1960). ‘Sociotechnical systems’. In: Management sciences models and techniques. Vol. 2. London, U.K.
-
European Parliament and European Council (4th May 2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation) . L119, pp. 1–88. URL: http://data.europa.eu/eli/reg/2016/679/oj (visited on 2nd April 2018).
-
Eveleigh, A. M. M., Jennett, C., Blandford, A., Brohan, P. and Cox, A. L. (2014). ‘Designing for dabblers and deterring drop-outs in citizen science’. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). New York, NY, U.S.A.: ACM Press, pp. 2985–2994. https://doi.org/10.1145/2556288.2557262.
-
Fitts, P. M. (1951). Human engineering for an effective air-navigation and traffic-control system. Washington, DC, U.S.A.: National Research Council.
-
Freitag, A., Meyer, R. and Whiteman, L. (2016). ‘Strategies employed by citizen science programs to increase the credibility of their data’. Citizen Science: Theory and Practice 1 (1), p. 2. https://doi.org/10.5334/cstp.6 .
-
Gerhardt-Powals, J. (1996). ‘Cognitive engineering principles for enhancing human-computer performance’. International Journal of Human-Computer Interaction 8 (2), pp. 189–211. https://doi.org/10.1080/10447319609526147 .
Gerhart, B. (1987). ‘How important are dispositional factors as determinants of job satisfaction? Implications for job design and other personnel programs’. Journal of Applied Psychology 72 (3), pp. 366–373. https://doi.org/10.1037/0021-9010.72.3.366 .
Gould, J. D. and Lewis, C. (1983). ‘Designing for usability — key principles and what designers think’. In: Proceedings of the SIGCHI conference on Human Factors in Computing Systems — CHI ’83 . New York, NY, U.S.A.: ACM Press, pp. 50–53. https://doi.org/10.1145/800045.801579 .
Hackman, J. R. and Oldham, G. R. (1975). ‘Development of the job diagnostic survey’. Journal of Applied Psychology 60 (2), pp. 159–170. https://doi.org/10.1037/h0076546 .
Hackman, J. R. and Oldham, G. R. (1980). Work redesign. Reading, MA, U.S.A.: Addison-Wesley.
Haklay, M. (2016). ‘Why is participation inequality important?’ In: European handbook of crowdsourced geographic information. Ed. by C. Capineri, M. Haklay, H. Huang, V. Antoniou, J. Kettunen, F. Ostermann and R. Purves. London, U.K.: Ubiquity, pp. 35–44. https://doi.org/10.5334/bax.c .
Health and Safety Executive (2011). Homeworkers: guidance for employers on health and safety . Leaflet INDG226. London, U.K.
— (2013). Working alone: health and safety guidance on the risks of lone working . Leaflet INDG73, Rev. 3. London, U.K.
Hendrick, H. (2002). ‘An overview of macroergonomics’. In: Macroergonomics: theory, methods and applications. Ed. by H. Hendrick and B. Kleiner. Boca Raton, FL, U.S.A.: CRC Press, pp. 1–24. https://doi.org/10.1201/b12477-5 .
Hertwig, R. and Ortmann, A. (2008). ‘Deception in experiments: revisiting the arguments in its defense’. Ethics & Behavior 18 (1), pp. 59–92. https://doi.org/10.1080/10508420701712990 .
Hodgson, M. E. (1998). ‘What size window for image classification? A cognitive perspective’. Photogrammetric Engineering & Remote Sensing 64 (8), pp. 797–807.
Houghton, R. J., Balfe, N. and Wilson, J. R. (2015). ‘Systems analysis and design’. In: The evaluation of human work. Ed. by J. R. Wilson and S. Sharples. 4th ed. Boca Raton, FL, U.S.A.: CRC Press, pp. 221–248.
International Organization for Standardization (2010). ISO 9241-210:2010: ergonomics of human-system interaction — part 210: human-centred design for interactive systems. URL: https://www.iso.org/standard/52075.html (visited on 2nd April 2018).
— (2011). ISO 26800:2011: ergonomics — general approach, principles and concepts. URL: https://www.iso.org/standard/42885.html (visited on 20th August 2018).
— (2016). ISO 6385:2016: ergonomics principles in the design of work systems. URL: https://www.iso.org/standard/63785.html (visited on 20th August 2018).
— (2018). We’re ISO: we develop and publish International Standards . URL: https://www.iso.org/standards.html (visited on 20th August 2018).
Jackson, C. B., Østerlund, C., Mugar, G., Hassman, K. D. and Crowston, K. (2015). ‘Motivations for sustained participation in crowdsourcing: case studies of citizen science on the role of talk’. In: 2015 48th Hawaii International Conference on System Sciences . IEEE, pp. 1624–1634. https://doi.org/10.1109/hicss.2015.196 .
Jerusalem Resolution on Privacy By Design (2010). 32nd International Conference on Data Protection and Privacy Commissioners . Jerusalem, Israel. URL: https://edps.europa.eu/sites/edp/files/publication/10-10-27_jerusalem_resolutionon_privacybydesign_en.pdf (visited on 2nd April 2018).
Jordan, R., Crall, A., Gray, S., Phillips, T. and Mellor, D. (2015). ‘Citizen science as a distinct field of inquiry’. BioScience 65 (2), pp. 208–211. https://doi.org/10.1093/biosci/biu217 .
Kaufman, A. (2014). ‘Let’s talk about citizen science: what doesn’t work’. Animal Behavior and Cognition 1 (4), pp. 446–451. https://doi.org/10.12966/abc.11.02.2014 .
Kelman, H. C. (1967). ‘Human use of human subjects: the problem of deception in social psychological experiments’. Psychological Bulletin 67 (1), pp. 1–11. https://doi.org/10.1037/h0024072 .
Kittur, A., Nickerson, J. V., Bernstein, M., Gerber, E., Shaw, A., Zimmerman, J., Lease, M. and Horton, J. (2013). ‘The future of crowd work’. In: Proceedings of the 2013 conference on Computer supported cooperative work — CSCW ’13 . New York, NY, U.S.A.: ACM Press, pp. 1310–1318. https://doi.org/10.1145/2441776.2441923 .
Kosmala, M., Wiggins, A., Swanson, A. and Simmons, B. (2016). ‘Assessing data quality in citizen science’. Frontiers in Ecology and the Environment 14 (10), pp. 551–560. https://doi.org/10.1002/fee.1436 .
Lease, M. and Alonso, O. (2014). ‘Crowdsourcing and human computation, introduction’. In: Encyclopedia of social network analysis and mining. Ed. by R. Alhajj and J. Rokne. New York, NY, U.S.A.: Springer, pp. 304–315. https://doi.org/10.1007/978-1-4614-6170-8_107 .
Lintott, C. and Reed, J. (2013). ‘Human computation in citizen science’. In: Handbook of Human Computation. Ed. by P. Michelucci. New York, NY, U.S.A.: Springer, pp. 153–162. https://doi.org/10.1007/978-1-4614-8806-4_14 .
Mackworth, N. H. (1948). ‘The breakdown of vigilance during prolonged visual search’. Quarterly Journal of Experimental Psychology 1 (1), pp. 6–21. https://doi.org/10.1080/17470214808416738 .
Mankowski, T. A., Slater, S. J. and Slater, T. F. (2011). ‘An interpretive study of meanings citizen scientists make when participating in Galaxy Zoo’. Contemporary Issues in Education Research (CIER) 4 (4), pp. 25–42. https://doi.org/10.19030/cier.v4i4.4165 .
Newman, G., Wiggins, A., Crall, A., Graham, E., Newman, S. and Crowston, K. (2012). ‘The future of citizen science: emerging technologies and shifting paradigms’. Frontiers in Ecology and the Environment 10 (6), pp. 298–304. https://doi.org/10.1890/110294 .
Nielsen, J. (1994). Usability engineering. San Diego, CA, U.S.A.: Academic Press.
Nov, O., Arazy, O. and Anderson, D. (2011). ‘Technology-mediated citizen science participation: a motivational model’. In: Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (Barcelona, Spain), pp. 249–256.
Oldham, G. R. and Hackman, J. R. (2010). ‘Not what it was and not what it will be: the future of job design research’. Journal of Organizational Behavior 31 (2-3), pp. 463–479. https://doi.org/10.1002/job.678 .
Pelli, D. G. and Farell, B. (2010). ‘Psychophysical methods’. In: Handbook of optics, third edition, volume III: vision and vision optics. Ed. by M. Bass, C. DeCusatis, J. Enoch, V. Lakshminarayanan, G. Li, C. MacDonald, V. Mahajan and E. V. Stryland. New York, U.S.A.: McGraw-Hill, pp. 3.1–3.12.
Ponciano, L. and Brasileiro, F. (2014). ‘Finding volunteers’ engagement profiles in human computation for citizen science projects’. Human Computation 1 (2), pp. 247–266. https://doi.org/10.15346/hc.v1i2.12 .
Ponciano, L., Brasileiro, F., Simpson, R. and Smith, A. (2014). ‘Volunteers’ engagement in human computation for astronomy projects’. Computing in Science & Engineering 16 (6), pp. 52–59. https://doi.org/10.1109/mcse.2014.4 .
Prather, E. E., Cormier, S., Wallace, C. S., Lintott, C., Raddick, M. J. and Smith, A. (2013). ‘Measuring the Conceptual Understandings of Citizen Scientists Participating in Zooniverse Projects: A First Approach’. Astronomy Education Review 12 (1), 010109, pp. 1–14. https://doi.org/10.3847/AER2013002 .
Preece, J. (2016). ‘Citizen science: new research challenges for human-computer interaction’. International Journal of Human-Computer Interaction 32 (8), pp. 585–612. https://doi.org/10.1080/10447318.2016.1194153 .
Preece, J., Rogers, Y. and Sharp, H. (2015). Interaction design: beyond human-computer interaction. Chichester, U.K.: John Wiley & Sons.
Quinn, A. J. and Bederson, B. B. (2011). ‘Human computation: a survey and taxonomy of a growing field’. In: Proceedings of the 2011 annual conference on Human factors in computing systems — CHI ’11 , pp. 1403–1412. https://doi.org/10.1145/1978942.1979148 .
Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C. J., Cardamone, C., Murray, P., Schawinski, K., Szalay, A. S. and Vandenberg, J. (2013). ‘Galaxy Zoo: Motivations of Citizen Scientists’. Astronomy Education Review 12 (1), 010106. https://doi.org/10.3847/AER2011021 . arXiv: 1303.6886 .
Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C. J., Murray, P., Schawinski, K., Szalay, A. S. and Vandenberg, J. (2010). ‘Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers’. Astronomy Education Review 9 (1), 010103, pp. 1–18. https://doi.org/10.3847/AER2009036 . arXiv: 0909.2925 .
Rasmussen, J. and Pejtersen, A. (1995). ‘Virtual ecology of work’. In: Global perspectives on the ecology of human-computer systems. Ed. by J. Flach, P. Hancock, J. Caird and K. J. Vicente. U.S.A.: Lawrence Erlbaum Associates, pp. 121–156.
Raymond, E. S. (1999). The Cathedral and the Bazaar: musings on Linux and Open Source by an accidental revolutionary. U.S.A.: O’Reilly Media.
Reed, J., Raddick, M. J., Lardner, A. and Carney, K. (2013). ‘An Exploratory Factor Analysis of Motivations for Participating in Zooniverse, a Collection of Virtual Citizen Science Projects’. In: Proceedings of the 46th Hawaii International Conference on System Sciences (HICSS 2013) (7th–10th January 2013). IEEE, pp. 610–619. https://doi.org/10.1109/HICSS.2013.85 .
Reed, J., Rodriguez, W. and Rickhoff, A. (2012). ‘A framework for defining and describing key design features of virtual citizen science projects’. In: Proceedings of the 2012 iConference — iConference ’12 (Toronto, Canada), pp. 623–625. https://doi.org/10.1145/2132176.2132314 .
Reeves, S. (2013). ‘Human-computer interaction issues in human computation’. In: Handbook of human computation. Ed. by P. Michelucci. New York, NY, U.S.A.: Springer, pp. 411–419. https://doi.org/10.1007/978-1-4614-8806-4_32 .
Resnik, D. B., Elliott, K. C. and Miller, A. K. (2015). ‘A framework for addressing ethical issues in citizen science’. Environmental Science & Policy 54, pp. 475–481. https://doi.org/10.1016/j.envsci.2015.05.008 .
Riesch, H. and Potter, C. (2014). ‘Citizen science as seen by scientists: Methodological, epistemological and ethical dimensions’. Public Understanding of Science 23 (1), pp. 107–120. https://doi.org/10.1177/0963662513497324 .
Sauermann, H. and Franzoni, C. (2015). ‘Crowd science user contribution patterns and their implications’. Proceedings of the National Academy of Sciences 112 (3), pp. 679–684. https://doi.org/10.1073/pnas.1408907112 .
Shirk, J. L., Ballard, H. L., Wilderman, C. C., Phillips, T., Wiggins, A., Jordan, R., McCallie, E., Minarchek, M., Lewenstein, B. V., Krasny, M. E. and Bonney, R. (2012). ‘Public Participation in Scientific Research: a Framework for Deliberate Design’. Ecology and Society 17 (2), p. 29. https://doi.org/10.5751/ES-04705-170229 .
Sprinks, J., Wardlaw, J., Houghton, R., Bamford, S. and Morley, J. (2017). ‘Task workflow design and its impact on performance and volunteers’ subjective preference in Virtual Citizen Science’. International Journal of Human-Computer Studies 104, pp. 50–63. https://doi.org/10.1016/j.ijhcs.2017.03.003 .
Straub, M. C. P. (2016). ‘Giving citizen scientists a chance: a study of volunteer-led scientific discovery’. Citizen Science: Theory and Practice 1 (1), p. 5. https://doi.org/10.5334/cstp.40 .
Taylor, F. W. (1911). The principles of scientific management. New York, NY, U.S.A.: Harper & Brothers.
Taylor, M., Marsh, G., Nicol, D. and Broadbent, P. (2017). Good work: the Taylor review of modern working practices. U.K. URL: https://www.gov.uk/government/publications/good-work-the-taylor-review-of-modern-working-practices (visited on 20th October 2018).
Teichner, W. H. (1974). ‘The detection of a simple visual signal as a function of time of watch’. Human Factors: The Journal of the Human Factors and Ergonomics Society 16 (4), pp. 339–352. https://doi.org/10.1177/001872087401600402 .
Tinati, R., Van Kleek, M., Simperl, E., Luczak-Rösch, M., Simpson, R. and Shadbolt, N. (2015). ‘Designing for Citizen Data Analysis: A Cross-Sectional Case Study of a Multi-Domain Citizen Science Platform’. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15) (Seoul, Korea, 18th–23rd April 2015). ACM Press, pp. 4069–4078. https://doi.org/10.1145/2702123.2702420 .
Van Den Berg, H. A., Dann, S. L. and Dirkx, J. M. (2009). ‘Motivations of adults for non-formal conservation education and volunteerism: implications for programming’. Applied Environmental Education & Communication 8 (1), pp. 6–17. https://doi.org/10.1080/15330150902847328 .
Wiggins, A., Bonney, R., Graham, E., Henderson, S., Kelling, S., Littauer, R., LeBuhn, G., Lotts, K., Michener, W., Newman, G., Russell, E., Stevenson, R. and Weltzin, J. (2013). Data management guide for public participation in scientific research . Albuquerque, NM, U.S.A.
Wiggins, A. and Crowston, K. (2010). ‘Developing a Conceptual Model of Virtual Organizations for Citizen Science’. International Journal of Organizational Design and Engineering 1 (1), pp. 148–162. https://doi.org/10.1504/IJODE.2010.035191 .
— (2011). ‘From Conservation to Crowdsourcing: a Typology of Citizen Science’. In: Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS-44) . Kauai, HI, U.S.A., pp. 1–10. https://doi.org/10.1109/HICSS.2011.207 . (Visited on 20th August 2018).
Wilson, J. R. and Sharples, S. (2015). ‘Methods in the understanding of human factors’. In: The evaluation of human work. Ed. by J. R. Wilson and S. Sharples. 4th ed. Boca Raton, FL, U.S.A.: CRC Press, pp. 1–36.
World Conference on Research Integrity (2010). Singapore Statement on Research Integrity . URL: http://www.singaporestatement.org (visited on 2nd April 2018).
Authors
Robert J. Houghton is an Assistant Professor in Human Factors in the Faculty of Engineering at the University of Nottingham. He specializes in cognitive and systems ergonomics and has carried out research in both laboratory and field settings on topics such as digital economy services, multimodal interfaces and the development of mobile technology. E-mail: Robert.Houghton@nottingham.ac.uk .
James Sprinks is a Research Fellow at Nottingham Trent University. His main research concerns user interface design and the development of HCI systems, including citizen science platforms that allow the online public to take part in, and contribute to, scientific research across a range of disciplines. E-mail: James.Sprinks@ntu.ac.uk .
Jessica Wardlaw is a Research Fellow at the Nottingham Geospatial Institute, University of Nottingham. Her research interests span Geography (including Web GIS, cartography, spatial cognition and knowledge construction, health geography) and Human-Computer Interaction (from applied aspects such as user-centred design, usability engineering and design practice, to cognitive aspects including sense- and decision-making and information visualisation). E-mail: Jessica.Wardlaw@nottingham.ac.uk .
Steven Bamford is a Lecturer in the School of Physics and Astronomy at the University of Nottingham. His main research interests are in extragalactic astronomy, but through early involvement in Galaxy Zoo he came to be founding Science Director of the Citizen Science Alliance. Through the Zooniverse, this organisation has brought authentic involvement in scientific research to millions of participants, and led to valuable science results in many fields. Dr Bamford continues to contribute to the coordination and utilisation of citizen science, with a particular interest in improving the efficiency and quality of citizen science. E-mail: Steven.Bamford@nottingham.ac.uk .
Stuart Marsh is the Head of the Geohazards and Earth Processes Research Group at the University of Nottingham. His research interests concern developing geoscience applications for Earth Observation, including geological and soils mapping, 3D and elevation modelling, geological hazard mitigation, mineral, energy and water resources, waste management and environmental change. E-mail: Stuart.Marsh@nottingham.ac.uk .