1 Context
The development of sign language technologies primarily impacts deaf and hard of hearing communities. Historically, these technologies have been developed by hearing scientists with little or no input from deaf researchers or community representatives — leading to technologies that are unusable and do not fulfil the community’s needs. Sign language gloves, for example, are regularly developed without meaningful contribution from deaf people, resulting in technologies that cannot capture the complexity of sign languages, that fail to acknowledge the linguistic structure of sign languages (especially the importance of non-manual features), and that place a burden on deaf users to sign in a restrictive way [Hill, 2020]. This highlights the importance of ethical and responsible development of sign language technologies [De Meulder, 2021].
Understanding “the social conversation around science” [Bucchi & Trench, 2021] as it relates to sign language technologies is essential for ethical research in this sector. Technologies that have received hype, in the way many sign language technologies have, present complex challenges for public engagement, but important opportunities to discuss expectations and future visions for the technologies [Roberson, 2020]. While the science communication literature tends to prioritise hearing perspectives, there are many examples of science communication research and practice by deaf scientists and science communicators that centre sign languages. Atomic Hands, founded by Dr. Alicia Wooten and Dr. Barbara Spiecker, is a collection of American Sign Language (ASL)-centric videos and resources.1 SIGNtific is a programme of workshops and live demonstrations at the Science Museum in London presented in British Sign Language (BSL).2 Dr. Audrey Cameron OBE has presented science programmes on national television in the UK, and has also led the development of a BSL science, technology, engineering and maths (STEM) glossary, to support STEM education and learning [O’Neill, Cameron, Quinn, O’Neill & McLean, 2015; Cameron, 2015]. This is one of many sign language STEM glossaries, including the Irish Sign Language (ISL) STEM Glossary [Mathews, Cadwell, O’Boyle & Dunne, 2022] and the ASL STEM Concept Learning Resource (ASL CLeaR) [Reis, Solovey, Henner, Johnson & Hoffmeister, 2015].
SignON was a Horizon 2020 project exploring sign language machine translation (SLMT) — an emerging technology with the potential to improve communication between deaf, hard of hearing, and hearing people — across several signed and spoken languages. The project brought together deaf community representatives and experts in sign linguistics, machine translation, sign language recognition, speech recognition, and avatar synthesis. The SignON consortium consisted of 16 partner organisations, including the European Union of the Deaf (EUD) and the Vlaams GebarentaalCentrum (VGTC, Flemish Sign Language Centre).
SignON used a co-creation methodology, developed and led by EUD, to facilitate the exchange of information and ideas between deaf and hard of hearing communities and the largely hearing technology experts. We used surveys, interviews, focus groups, workshops, and round-tables — while also exploring creative engagement methods. This paper describes the development of a theatre performance, and a subsequent performance of the same material for the camera, to engage audiences with SLMT. It was developed by deaf, hard of hearing, and hearing researchers in collaboration with deaf theatre practitioners and other experts in art, science, and education.
Initiatives such as SMASHfestUK have demonstrated the value and potential for immersive, narrative-led experiences to engage communities that have been excluded from informal science learning, to enhance their science identity [Keith & Griffiths, 2021], and to build their science capital [Archer, Dawson, DeWitt, Seakins & Wong, 2015]. Plays have been used to place complex and controversial scientific topics, such as human cloning, into social and emotional contexts [Donkers & Orthia, 2016] — and so plays present an important opportunity to understand SLMT contextually. The process of developing a play by deaf theatre performers with the support and input of the SignON team also presented an important opportunity for art-science collaboration. Many researchers see the value of theatre as a way to communicate scientific ideas [Amaral, Montenegro, Forte, Freitas & Cruz, 2017], and when guided through the collaborative art-science process realise the value of exchanging knowledge and ideas with theatre practitioners [Dowell & Weitkamp, 2012]. To summarise, we as the SignON co-creation team saw this as an opportunity for our scientific teams to learn from collaborating with theatre makers, to support deaf arts, to create work that would contextualise SLMT, and to gather the resultant insights from deaf and hard of hearing audiences so that they may inform the project.
We adapted a method initially developed by Association TRACES [Merzagora, Ghilbert & Meunier, 2022] as part of SISCODE, a European project exploring co-creation methodologies, to produce a theatre performance in ISL that incorporated elements of machine vision and machine translation. The performance was followed by an audience discussion on SLMT, which we transcribed and reflected on using thematic analysis. We evaluated the overall project using the Equity Compass, an evaluation tool designed to facilitate structured, critical reflection on informal science learning projects with a view to making them as socially just as possible [YESTEM Project UK Team, 2020]. This paper describes our process, the outputs, its evaluation, and our reflections on the effectiveness of art-science methods to engage deaf, hard of hearing, and hearing audiences with emerging sign language technologies.
2 Methods
2.1 SignON co-creation framework
There are many ways in which co-creation can be applied to research projects [Eckhardt, Kaletka, Krüger, Maldonado-Mariscal & Schulz, 2021]. Co-creation in SignON was based on a ‘Design For All’ approach as described by the World Federation of the Deaf [2014], and was developed and coordinated by the EUD — a not-for-profit European non-Governmental organisation representing deaf people at a European level, whose members comprise National Associations of the Deaf.3 The strategy was refined during the project, based on organisational and community feedback.
The SignON co-creation workflow (Figure 1) facilitated exchange of information and ideas between the SLMT user community and the researchers. The process involved ongoing engagement with deaf, hard of hearing, and hearing groups through surveys, interviews, round table discussions, and workshops. Importantly, many SignON team members are deaf or hard of hearing, and we found that deaf participants in our co-creation events reported higher levels of trust in deaf researchers than in hearing researchers [SignON Consortium, 2021a]. This resonates with calls for deaf leadership in sign language AI research due to the influence of positionality, and the increased likelihood of biases in projects led by hearing non-signing researchers [Desai, De Meulder, Hochgesang, Kocab & Lu, 2024].
Co-creation in SignON was supported by a communications strategy developed and led by VGTC. Feedback gathered during co-creation activities was communicated internally to the (largely hearing) technology team, who used this information to prioritise specific features and user requirements for the SignON app. To complete the co-creation cycle, user communities then tested and provided feedback on SignON app prototypes. This feedback influenced several aspects of the prototype, including the design of the avatar (changing its hands from large to average size, for example) and the choice of use cases, indicating that we should focus on travel and hospitality. Feedback gathered during co-creation activities was also communicated externally, through academic publications [Shterionov et al., 2022], conference presentations, plain language summaries, and public engagement activities [SignON Consortium, 2021b].
The co-creation activities described in this paper — a theatre performance titled All the World’s a Screen and a filmed performance of the same content titled That is the Question — were managed by a hearing team member (author SO) in collaboration with two deaf theatre performers (authors LQ and AJ). Team members at EUD (including authors RO and DVL) and VGTC (including author CB) provided regular feedback and advice during the development process, and technical, artistic, or academic advisors were consulted when additional expertise was required. Most meetings were conducted over Zoom, with interpreting between signed and spoken languages. These activities were directly funded by the Science Foundation Ireland Discover Science Week grant, and SignON resources (such as the time and expertise of team members) were provided with the support of Horizon 2020 funding.
2.2 Choice of format
This paper covers two related outputs of an art-science engagement process: All the World’s a Screen was a live performance followed by an audience discussion, and That is the Question is an adaptation of this performance filmed for the camera and shared with audiences at screenings and online.
We adapted a method developed by Association TRACES as part of SISCODE, a project in which ten partners studied co-creation ecosystems in temporary ‘co-creation labs’. Paul Boniface at Association TRACES developed Hamlet in the Gym with MTV (or Hamlet en salle de gym with MTV) through a co-creation process to explore the reframing of artificial intelligence (AI) as a ‘co-spectator’ [Merzagora et al., 2022]. It involved a performance of Hamlet in a gym setting, where audience members viewed the performance on (and from the perspective of) apps including Google Lens, SeeingAI, Yolo, and others. This was followed by an audience discussion on AI and machine learning.
Their hypothesis was that reframing AI as a co-spectator may spark important conversations about our relationship with the technology. We chose this method because it was developed through co-creation, it showed promise as a method of community engagement, and it would complement more formal engagement methods such as focus groups and surveys. We adapted the format to centre ISL and to focus on exploring the audience’s relationship with SLMT. We were also interested in broader themes of technology and the deaf community, including accessibility in the arts, inaccuracies in automated captioning, and the role of sign language interpreters.
2.3 Development of content
Texts by Shakespeare were chosen by the performers (authors LQ and AJ) based on the prompt, ‘if we were to introduce an AI to Shakespeare texts in ISL, which extracts would we choose first?’. Additional consideration was given to texts that resonated with aspects of sign language or AI. The following texts were selected: Macbeth, Act 5, Scene 5; Romeo and Juliet, Act 1, Scene 5; Sonnet 18; Romeo and Juliet, Act 2, Scene 1; As You Like It, Act 2, Scene 7; Hamlet, Act 3, Scene 1; and Romeo and Juliet, Act 4, Scene 3.
The texts were translated into ISL by the deaf performers (authors LQ and AJ). The ISL version showcased the complexity of ISL and sign languages, and incorporated Visual Vernacular, a stage technique used by deaf performers which can be independent of sign languages, but also borrow creatively from them [Haughey & Armstrong, 2019]. Some additional content was added to the script, to further connect with the theme of AI, for example:
To be, or not to be, that is the question.
To live or not to live, that is the question.
To be human or not to be human, that is the question.
To be a carbon-based or silicon-based life form, that is the question.
2.4 First performance
All the World’s a Screen was developed and rehearsed over several months. The first performance took place in November 2022 as part of Science Week, a major national programme of science festivals and events in Ireland.4 The venue for this performance was the Trinity Long Room Hub at Trinity College Dublin (a SignON partner). The Trinity Long Room Hub was a preferred venue because it is an arts and humanities research institute located on the city centre campus, and it contains an events space that regularly hosts public engagement events.
The event was held in a flexible black box space, and seating was arranged around an area marked as the stage. Props included a hardback copy of the complete works of Shakespeare and a Kindle (with a digital copy). The performance was free to attend, and the event listing was shared on Eventbrite (in English and ISL). We promoted the event through the SignON, DCU, and Trinity College Dublin networks, with a focus on deaf community groups and organisations. Approximately 70 adults attended the event, and most were part of the deaf community. They were welcomed, given a printout with some information on the event, and invited to take their seats. The scenes were performed, with an English voiceover of the original Shakespeare text provided by interpreters.
During the performance, we projected a live pose analysis (Figure 2A), the original script (Figure 2B), and live autotranscription (Figure 2C) onto a large screen.
The live pose analysis — where a computer attempts to identify and track the performers’ movement, visualised by lines and dots on specific parts of the body — was produced with MoveNet, a pose estimation model released by Google Research in 2021 [Jo & Kim, 2022]. Live automated transcription was generated from the English voiceover provided by the interpreters, using the ‘voice typing’ function in a Google Doc displayed on the screen; the resulting autocaptions were usually inaccurate. The original script was also displayed on a separate Google Doc so that audience members could compare the autocaptions to the original text. This was highlighted by authors LQ and AJ as an important element for hearing audience members, so they could observe the inaccuracies of automatic captions, which deaf people are often expected to rely on. A soundtrack was prepared by (hearing) artistic adviser Maurice Joseph Kelliher.
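As an illustration of how such an overlay can be generated, the sketch below shows live pose estimation with the publicly available MoveNet model from TensorFlow Hub. The webcam loop, square crop, and confidence threshold are illustrative assumptions rather than a description of our staging setup.

```python
# A minimal sketch of live pose estimation with MoveNet, in the spirit of the
# overlay projected during the performance. The model handle and int32 input
# follow the public TensorFlow Hub release of MoveNet "singlepose/lightning";
# the webcam loop, square crop, and confidence threshold are illustrative
# assumptions, not our production setup.
import cv2
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Crop to a centred square so normalised keypoints map back cleanly.
    h, w, _ = frame.shape
    side = min(h, w)
    y0, x0 = (h - side) // 2, (w - side) // 2
    square = frame[y0:y0 + side, x0:x0 + side]
    # MoveNet Lightning expects a 192x192 int32 RGB image with a batch dimension.
    rgb = cv2.cvtColor(square, cv2.COLOR_BGR2RGB)
    inp = tf.cast(tf.image.resize(tf.expand_dims(rgb, 0), (192, 192)), tf.int32)
    # Output shape is [1, 1, 17, 3]: (y, x, confidence) per keypoint, normalised.
    keypoints = movenet(inp)["output_0"].numpy()[0, 0]
    for y, x, score in keypoints:
        if score > 0.3:  # draw only reasonably confident keypoints
            cv2.circle(square, (int(x * side), int(y * side)), 4, (0, 255, 0), -1)
    cv2.imshow("pose", square)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```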
As an encore, we repeated As You Like It, Act 2, Scene 7, this time inviting audience members to view it through Google Lens (Figure 3A) set to the retail tab, which attempts to identify objects and searches for them as online items for sale (Figure 3C). We described the audience’s phones as their ‘machine guests’, and invited them to watch the performance from their guest’s perspective. This was to facilitate a shift from viewing AI as a tool, to viewing AI as a co-spectator [Merzagora et al., 2022]. We chose the retail tab as an example of AI with a specific application.
2.5 Audience discussion
After the performance, audience members were provided with a plain language summary of the project, and asked to sign an informed consent form before taking part in the audience discussion. The audience discussion was facilitated by a hearing researcher (author SO) in English, accompanied by ISL interpreters. The prompts were open-ended: “How did you feel about the performance?”, “How do you feel about this technology?”, and “What are your thoughts on the future of this technology?”. The discussion followed three main themes: the centering of ISL; the limitations of machine translation; and the future of sign language technologies. The discussion was transcribed by a hearing researcher (author EM) fluent in English and with a high level of competence in ISL.
2.6 Evaluation
We evaluated the audience discussion and the overall process of the project. To evaluate the audience discussion, we took a thematic analysis approach [Braun & Clarke, 2006]. The transcript of the audience discussion was coded through a process of discussion, agreement, and recoding. We first identified units of meaning across the transcript. These codes were discussed by members of the research team and revised into themes. One coder then recoded the transcript for these themes, and this was checked for consistency by the rest of the team.
To evaluate the overall process, we used the Equity Compass, a practice-focused evaluation tool developed by Informal Science Learning experts to facilitate reflection on engagement practice [YESTEM Project UK Team, 2020]. We chose this evaluation tool because it supports the creation of socially just community engagement, which is a core value of SignON’s co-creation approach. Initially, two members of the research team used the prompts provided in the Equity Compass guide for STEM ambassadors and the Equity Compass worksheet to reflect on, discuss, and analyse the project across the four areas of the compass (challenging the status quo, working with and valuing minoritised communities, embedding equity, and extending equity) and eight dimensions within those areas. This process was audio and video recorded, summarised as text, shared with all members of the research team, and revised based on discussion and consensus.
2.7 Legacy
All the World’s a Screen was performed a second time, without the audience discussion, in February 2023 at the Royal Irish Academy as part of the Sign Languages on the Island of Ireland conference. Additional performances, however, are limited by funding, resources, and availability. In order to capture the performance so that it can hopefully spark further conversations about sign language technologies, we filmed a version for the camera.
That is the Question was filmed at Deaf Village Ireland (a large campus in west Dublin housing schools and services for the Deaf community), with the same performers (authors LQ and AJ), and included three scenes: the adapted version of Hamlet, Act 3, Scene 1; As You Like It, Act 2, Scene 7; and Romeo and Juliet, Act 4, Scene 3 (Figure 4A). Intermittently throughout the video, a pose recognition overlay is generated with MediaPipe (Figure 4B). This video piece was launched in August 2023 at an event in Dublin City University, and shown again in September 2023 at EU Researcher Night in Trinity College Dublin. We intend to show this video in arts and science spaces internationally, with a focus on deaf arts festivals.
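For readers who wish to experiment with a similar effect, the sketch below renders a pose overlay onto a recorded video using MediaPipe’s Python API. The file names and parameter choices are assumptions for illustration; this is not the pipeline used to produce the film.

```python
# A minimal sketch of rendering a pose "skeleton" overlay onto a recorded video
# with MediaPipe, similar to the intermittent overlay in That is the Question.
# The file names and the use of MediaPipe's legacy "solutions" Python API are
# assumptions for illustration, not the pipeline used to produce the film.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture("performance.mp4")  # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("performance_overlay.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Draw lines and dots for the detected body landmarks on the frame.
            mp_draw.draw_landmarks(frame, results.pose_landmarks,
                                   mp_pose.POSE_CONNECTIONS)
        out.write(frame)

cap.release()
out.release()
```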
3 Results and discussion
3.1 Evaluation of the audience discussion using thematic analysis
The key themes and topics that emerged from the post-show audience discussion are: the centering of Irish Sign Language; the limitations of machine translation; and the future of sign language technologies. They are summarised and discussed in the following paragraphs.
Centering Irish Sign Language. When asked how they felt about the performance, most audience members focused on what it was like to experience a Shakespeare performance in ISL, rather than focusing on the AI or technology aspects of the performance. A representative selection of audience feedback on the centering of ISL follows.
A deaf audience member described their feelings about the performance: “Wow! The complexity of translating that script into ISL. I’ve never seen Shakespeare in ISL. Incredible experience. Live performance. My own language. Goosebumps.”
Another deaf audience member shared their perspective on Shakespeare texts: “Usually very inaccessible. I couldn’t follow the written text, but I could easily access the ISL translation.”
A deaf audience member reflected on what it might be like for hearing people in the audience: “I feel lucky as a deaf person because I could understand the ISL. I feel sorry for the hearing people who could only access the English.”
A hearing audience member described their experience of the performance: “I wasn’t sure what to expect. I don’t know ISL, I know some ASL. I have learned so much tonight, for example about women’s signs. It was beautiful, and hypnotic watching the expressiveness of the actors.”
Much of the audience discussion focused on the beauty and importance of seeing Shakespeare performed in ISL. In Ireland, most science engagement events are presented in English, sometimes with ISL interpreting. This was an event that centred ISL, was presented through ISL, and still engaged hearing audiences who were accessing it through English interpretation. This demonstrated that we could engage deaf, hard of hearing, and hearing audiences — all of whom might be impacted by SLMT — through ISL. We would therefore like to see more opportunities for deaf-led science engagement practice that centres sign language.
The limitations of machine translation. The audience discussed the limitations of current machine translation technologies, often referring to the examples of autocaptioning and image recognition we used during the performance. This demonstrated the usefulness of incorporating elements of an emerging or future technology into an event. They pointed out that Google Lens suggested very expensive clothes for one performer, and very cheap clothes for the other performer. This generated a lot of laughter, which was similar to the engagement observed by Association TRACES in Hamlet in the Gym with MTV [Merzagora et al., 2022].
There were other examples of machine vision mistakes during the performance. Google Lens appeared to be informed by the performers’ gestures. When performer AJ was portraying a soldier, with her arm pointed forwards, Google Lens identified her clothing as curtains, and included links to purchase curtains online. When the same performer marched around the venue, it identified her black outfit as a ninja costume, and offered options to buy ‘similar costumes’ online. These mistakes helped spark conversations on the inaccuracy and current limitations of machine vision (a component of SLMT).
Audience members expressed their frustrations with, and lack of trust in, related technologies in their day-to-day lives. The autocaptions generated during the performance were filled with mistakes. We predicted this would happen, and it was planned as part of the performance by authors AJ and LQ to highlight the inaccuracies of autocaptions to hearing audience members, who may not be as familiar with this issue as deaf audience members. Including this more familiar, present-day experience was intended to spark further conversation on the limitations of SLMT, and it was effective. As one deaf audience member pointed out:
“It will be interesting/fascinating to see if it will be able to translate from ISL to written English at some stage. There are lots of examples of this technology not working well.”
By integrating AI technologies into the performance, we sparked a conversation about their limitations. The feedback from deaf and hard of hearing audience members was in line with the feedback received through SignON surveys and roundtables [O’Boyle, Rijckaert & Mathews, 2024].
The future of sign language technologies. One of our aims was to facilitate a discussion on imagined futures for sign language technologies. We felt that this was well-suited to an immersive cultural experience intended to take audiences out of their everyday lives, and beyond the here and now. Deaf audience members raised points about the complexity of ISL, and whether or not SLMT will ever be able to process it. This included references to variations in ISL that may not be readable by technologies — especially if the variations are not well-documented. For example, there is gender variation in ISL, which resulted from gender-segregated schools for deaf students [LeMaster, 2006; Leeson & Grehan, 2004].
“Alvean was using everyday ISL, Lianne was using variant of women’s sign. Challenge for technology to capture this.”
Many audience members referenced the performers by name (authors AJ and LQ), because they are well-known in Dublin’s deaf community. This is relevant because the hearing project coordinator (author SO) could not have developed an event with this depth of engagement without the performers. Their expertise and knowledge on deaf culture, the deaf community, and ISL informed every creative and linguistic choice. This led to an audience discussion that cut straight to complex topics such as how AI might perceive the community, and the security of interpreters’ jobs.
On how sign language technologies might ‘perceive’ signers: “If AI was observing us, would it see us as rotors? To pick up on ISL you would need sensors on every joint of the body. Interpreters’ jobs are safe for a long time yet.”
“Will we get to a time where we have loads and loads of data for this? I know this will take a long long time. We need the interpreters for now.”
While this was an exploratory study, the depth of engagement and the complexity of themes that emerged during the audience discussion indicate that an art-science approach to deaf community engagement — when led by deaf artists — is valuable within a SLMT co-creation process.
3.2 Evaluation of the project using the Equity Compass
We used the Equity Compass to reflect on our project across eight dimensions, each providing a different lens on the process (Figure 5A). We used the guiding questions and prompts to identify our location on the compass for each dimension. A result located on the inside of the compass indicates weak practice in terms of equity, and a result on the outside of the compass indicates strong equitable practice. The eight dimensions are grouped into four overarching areas: challenging the status quo; working with and valuing minoritised communities; embedding equity; and extending equity. We present a visual summary of our results (Figure 5B) and a detailed analysis across the four overarching areas.
Challenging the status quo. We used this area of the compass to explore how well our practice prioritised deaf and hard of hearing communities, improved power relations, and redistributed resources to the deaf community.
STEM engagement events usually centre hearing audiences. All the World’s a Screen transformed the usual accessibility dynamic by prioritising deaf and hard of hearing audiences. Deaf audience members experienced this project in their own language — “My own language. Goosebumps.” — and hearing audience members learned how complex and sophisticated sign languages are — “I have learned so much tonight”. We promoted the event in deaf community spaces before sharing more broadly, to ensure deaf audiences could register first. Registration was free, but the event could have been made more financially accessible by providing food and transport. Deaf people who are isolated or without access to the internet may not have been reached by the promotion for this event.
Shakespeare is celebrated for his creative and complex approach to language [Hussey, 2016]. The creativity and complexity of the ISL translation transformed power relations by challenging the idea that spoken languages are ‘better’ than signed languages. Often, at accessible events, interpreters are at the front of the stage (for practical reasons). In this performance, however, interpreters sat in the front row and provided an English voiceover by microphone. They were not visible to the audience until they provided interpreting for the audience discussion.
While this project was part of SignON, it was mostly funded by an external grant. We aimed to put funding into the deaf community, and it was mainly used to pay performers, contributors, interpreters, and other staff. However, the grant was ultimately held by a hearing academic, which may reflect systemic barriers to deaf people accessing public engagement funding. For example, the grant required a lengthy written application through English, with no information on the grant or application process available in ISL, and no option to submit a video application. More accessible application processes could lead to community groups rather than academics holding such grants, or to partnership grants that require academics and community organisations to work collaboratively.
Deaf talent was prioritised in the budget for this project. Deaf performers were hired to develop, translate, and perform the script; and deaf interpreters were hired to produce promotional videos. Deaf community expertise was prioritised over academic expertise, and hearing collaborators were brought in where a particular expertise or hearing perspective was required (for example, hearing artist Maurice Joseph Kelliher provided creative guidance and produced sound design for All the World’s a Screen and That is the Question). Our ability to work with more deaf service suppliers was limited by the timelines and accessibility of procurement processes within the university, and so we worked with a hearing stage manager for All the World’s a Screen, and a hearing videographer for That is the Question. While a city centre location was suggested by performers AJ and LQ (to encourage more hearing people to attend), holding a performance in a deaf community space (which, in Dublin, is outside the city centre) may have resulted in more funding going to the deaf community.
To summarise our reflections on challenging the status quo, we were strong but had some room for improvement in prioritising deaf and hard of hearing communities; we were strong on transforming power relations in favour of the deaf community; and we were strong with some room for improvement on redistributing resources to the deaf community.
Working with and valuing minoritised communities. We used this area of the compass to explore how participatory and asset-based our approach was. This project was a collaboration between deaf, hard of hearing, and hearing contributors before we even applied for funding. While the project was coordinated by a hearing researcher, it was part of SignON’s co-creation work package, led by the EUD. Our methodological starting point — the experiment by Association TRACES — was developed through a co-creation process, but with hearing participants. As a small team of deaf, hard of hearing, and hearing colleagues, we deconstructed and adapted this method to work within a project centred on sign language. Future iterations would ideally have a longer development time and more funding, and therefore more scope for co-creation.
Deaf and hard of hearing expertise was essential to all aspects of this project, including: the historical relationship between the deaf community and hearing tech developers; engaging deaf and hard of hearing audiences with science; deaf arts and culture; ISL linguistics and culture; translation and interpreting; accessibility; and marketing and communications. For the scientific content, we prioritised deaf community perspectives on sign language technologies over the perspectives of hearing scientists. To summarise our reflections on working with and valuing minoritised communities, we were very strong on participation, and strong on taking an asset-based approach.
Embedding equity. We used this area of the compass to explore whether equity was mainstreamed, rather than tokenistic, in our project. The event led to an audience discussion on equity issues relating to SLMT. The audience’s insights and concerns were shared with the consortium and publicly, to improve hearing scientists’ awareness of deaf perspectives on sign language technologies. While we agreed that our project was very strong on embedding equity, more focus must be placed on equity issues relating to class, age, gender, ethnic background, sexuality, and disability within the deaf community.
Extending equity. We used this area of the compass to explore if the project was community oriented and long-term. This was a 3-month project developed with external funding in the second year of a 3-year research project, which limits some of its potential for long-term engagement. We have addressed this in part by creating That is the Question so that it might live on as a video installation after SignON. We also prepared a plain language summary of the project in the form of an illustrated zine, designed by Hana Ayoob, to facilitate deeper engagement with community groups. While we were strongly community oriented, we were weak on long-term engagement and impact.
We set out to create an event that would give audiences an immersive experience of important elements of SLMT, and to capture the audience discussion that followed. Future iterations of this project could explore performances that incorporate audience interaction, which may support audiences to build their own narratives around SLMT and similar technologies [Griffiths & Keith, 2022]. The themes and feedback that arose during the audience discussion complemented existing feedback that had been collected by SignON through surveys, focus groups, and roundtable discussions. By integrating this feedback into SignON, we are supporting an engaged research methodology.
We documented and learned several things throughout the process that might benefit similar community engagement and co-creation initiatives:
- It is important for deaf audiences to see events designed to be performed directly in sign language, rather than live-interpreted into sign language.
- Events in sign language can engage deaf, hard of hearing, and hearing audiences; signers and non-signers.
- Deaf leadership and collaboration are required from the beginning of a project, to successfully reach and engage the deaf community.
- Deaf contributors should be paid and credited for their time and work.
- Incorporating elements of an emerging technology such as SLMT into a performance can spark valuable discussions on its future directions.
- Audience discussions can offer tangible benefits to a research project when they are documented and shared as part of a co-creation process.
4 Conclusion
Science engagement through sign language is an effective way to engage deaf, hard of hearing, and hearing audiences with sign language technologies. An art-science approach can support discussion on the future of these technologies, and the directions people would like them to take. When this feedback is documented and shared within a research project, it can be used to direct current research and inform future research. Within SignON, for example, these discussions helped define usage domains and use-case scenarios for the SignON app. Use cases were regularly redefined throughout the project, based on community input [SignON Consortium, 2023]. This impacted the design of the app, and how we pitched it to potential users. While this study is limited as a short-term exploratory project, we propose that engagement such as this will be essential if we are to navigate sign language technology research ethically, and that it should always be led by deaf experts, artists, and community representatives. We hope that this project will raise questions about accessibility within science communication and the arts, and that it will challenge hearing people’s perceptions of sign language and the deaf community. We hope that it will facilitate more deaf and hard of hearing people engaging with SLMT, in ways that allow them to direct the research to benefit the community.
Acknowledgments
This project received funding from Science Foundation Ireland (grant number 22/SW/10389). SignON has received funding from the European Union’s Horizon 2020 Research and Innovation Programme (grant agreement number 101017255). The authors would like to thank the SignON consortium and artist Maurice Joseph Kelliher for valuable conversations and expertise that informed this work.
References
Amaral, S. V., Montenegro, M., Forte, T., Freitas, F. & Cruz, M. T. G. (2017). Science in theatre — an art project with researchers. Journal of Creative Communications 12 (1), 13–30. doi:10.1177/0973258616688966
Archer, L., Dawson, E., DeWitt, J., Seakins, A. & Wong, B. (2015). “Science capital”: a conceptual, methodological, and empirical argument for extending Bourdieusian notions of capital beyond the arts. Journal of Research in Science Teaching 52 (7), 922–948. doi:10.1002/tea.21227
Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology 3 (2), 77–101. doi:10.1191/1478088706qp063oa
Bucchi, M. & Trench, B. (2021). Rethinking science communication as the social conversation around science. JCOM 20 (03), Y01. doi:10.22323/2.20030401
Cameron, A. (2015). The development of astronomy signs and analysis of impact on deaf and hearing communities. In Projeto Surdos: Simposio Caminhos da Inclusao: Saberes cientificos e tecnologicos. Sua importancia para o desenvolvimento do indivíduo surdo. UFRJ, Rio de Janeiro, Brazil. Retrieved from https://www.researchgate.net/publication/281593745_The_development_of_astronomy_signs_and_analysis_of_impact_on_deaf_and_hearing_communities
De Meulder, M. (2021). Is “good enough” good enough? Ethical and responsible development of sign language technologies. In D. Shterionov (Ed.), Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL) (pp. 12–22). Association for Machine Translation in the Americas. Retrieved from https://aclanthology.org/2021.mtsummit-at4ssl.2
Desai, A., De Meulder, M., Hochgesang, J. A., Kocab, A. & Lu, A. X. (2024). Systemic biases in sign language AI research: a deaf-led call to reevaluate research agendas. arXiv: 2403.02563v1
Donkers, M. & Orthia, L. A. (2016). Popular theatre for science engagement: audience engagement with human cloning following a production of Caryl Churchill’s A number. International Journal of Science Education, Part B 6 (1), 23–45. doi:10.1080/21548455.2014.947349
Dowell, E. & Weitkamp, E. (2012). An exploration of the collaborative processes of making theatre inspired by science. Public Understanding of Science 21 (7), 891–901. doi:10.1177/0963662510394278
Eckhardt, J., Kaletka, C., Krüger, D., Maldonado-Mariscal, K. & Schulz, A. C. (2021). Ecosystems of co-creation. Frontiers in Sociology 6, 642289. doi:10.3389/fsoc.2021.642289
Griffiths, W. & Keith, L. (2022). Actors with agency: immersive science theatre and science identity. In E. Weitkamp & C. Almeida (Eds.), Science & theatre: communicating science and technology with performing arts (pp. 103–112). doi:10.1108/978-1-80043-640-420221008
Haughey, L. & Armstrong, D. (2019). On the theatricality of sign languages on stage. Performance Research 24 (4), 76–79. doi:10.1080/13528165.2019.1641327
Hill, J. (2020). Do deaf communities actually want sign language gloves? Nature Electronics 3 (9), 512–513. doi:10.1038/s41928-020-0451-7
Hussey, S. S. (2016). The literary language of Shakespeare (2nd ed.). doi:10.4324/9781315844862
Jo, B. & Kim, S. (2022). Comparative analysis of OpenPose, PoseNet, and MoveNet models for pose estimation in mobile devices. Traitement du Signal 39 (1), 119–124. doi:10.18280/ts.390111
Keith, L. & Griffiths, W. (2021). SCENE: a novel model for engaging underserved and under-represented audiences in informal science learning activities. Research for All 5 (2), 320–346. doi:10.14324/rfa.05.2.09
Leeson, L. & Grehan, C. (2004). To the lexicon and beyond: the effect of gender on variation in Irish Sign Language. In M. Van Herreweghe & M. Vermeerbergen (Eds.), To the lexicon and beyond: sociolinguistics in European deaf communities (pp. 39–73). doi:10.2307/j.ctv2rh28cx.7
LeMaster, B. (2006). Language contraction, revitalization, and Irish women. Journal of Linguistic Anthropology 16 (2), 211–228. doi:10.1525/jlin.2006.16.2.211
Mathews, E., Cadwell, P., O’Boyle, S. & Dunne, S. (2022). Crisis interpreting and deaf community access in the COVID-19 pandemic. Perspectives 31 (3), 431–449. doi:10.1080/0907676x.2022.2028873
Merzagora, M., Ghilbert, A. & Meunier, A. (2022). TRACES — In 2030, artificial intelligences will visit museums? In A. Deserti, M. Real & F. Schmittinger (Eds.), Co-creation for responsible research and innovation: experimenting with design methods and tools (pp. 129–138). doi:10.1007/978-3-030-78733-2_13
O’Boyle, S., Rijckaert, J. & Mathews, E. (2024). Sign language machine translation communication and engagement. In A. Way, L. Leeson & D. Shterionov (Eds.), Sign language machine translation. Cham, Switzerland: Springer. Retrieved from https://link.springer.com/book/9783031473616
O’Neill, K. P., Cameron, A., Quinn, G., O’Neill, R. & McLean, F. (2015). British Sign Language glossary for mathematics and statistics. IMA International Conference on Barriers and Enablers to Learning Maths: Enhancing Learning and Teaching for All Learners. Glasgow, U.K.
Reis, J., Solovey, E. T., Henner, J., Johnson, K. & Hoffmeister, R. (2015). ASL CLeaR: STEM education tools for deaf students. In ASSETS ’15: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (pp. 441–442). doi:10.1145/2700648.2811343
Roberson, T. M. (2020). Can hype be a force for good?: Inviting unexpected engagement with science and technology futures. Public Understanding of Science 29 (5), 544–552. doi:10.1177/0963662520923109
Shterionov, D., De Sisto, M., Vandeghinste, V., Brady, A., De Coster, M., Leeson, L., … Rijckaert, J. (2022). Sign language translation: ongoing development, challenges and innovations in the SignON project. In L. Macken, A. Rufener, J. Van den Bogaert, J. Daems, A. Tezcan, B. Vanroy, … H. Moniz (Eds.), Proceedings of the 23rd Annual Conference of the European Association for Machine Translation (pp. 325–326). Ghent, Belgium: European Association for Machine Translation. Retrieved from http://hdl.handle.net/1854/LU-8754795
SignON Consortium (2021a). Sign language translation mobile application and open communications framework. Deliverable 1.1: Case studies and evidence analysis. Retrieved from https://signon-project.eu/publications/public-deliverables/
SignON Consortium (2021b). Sign language translation mobile application and open communications framework. Deliverable 6.1: SignON communication and dissemination plan. Retrieved from https://signon-project.eu/publications/public-deliverables/
SignON Consortium (2023). Sign language translation mobile application and open communications framework. Deliverable 1.2: Use cases & usage domains and stakeholders’ acceptance. Retrieved from https://signon-project.eu/publications/public-deliverables/
World Federation of the Deaf (2014). Working document on adoption and adaptation of technologies and accessibility. Prepared by the WFD Expert Group on Accessibility and Technology. Retrieved from http://wfdeaf.org/news/resources/working-document-on-adoption-adaptation-of-technologies-accessibility/
YESTEM Project UK Team (2020). The Equity Compass: a tool for supporting socially just practice. Retrieved from https://yestem.org/
Authors
Shaun O’Boyle is a research fellow in the School of Inclusive & Special Education at Dublin City University.
E-mail: shaun.oboyle@dcu.ie
Elizabeth Mathews is an associate professor with the School of Inclusive & Special Education at Dublin City University.
E-mail: elizabeth.mathews@dcu.ie
Caro Brosens is a linguist at Vlaams GebarentaalCentrum.
E-mail: caro.brosens@vgtc.be
Rehana Omardeen is a project officer at the European Union of the Deaf.
E-mail: rehana.omardeen@eud.eu
Davy Van Landuyt is a project officer at the European Union of the Deaf.
E-mail: davy.van.landuyt@eud.eu
Alvean Jones is a Deaf artist and historian at Dublin Theatre of the Deaf.
Lianne Quigley is a Deaf artist and co-ordinator of the Dublin Theatre of the Deaf.