How to enhance the inclusivity and accessibility of your online calls

Posted by j.laird on 2 March 2021 - 9:30am

Image: welcome sign. Photo by Tim Mossholder on Unsplash

By Yo Yehudi, Kaitlin Stack Whitney and Malvika Sharan.


This blog post is part of the Research Software Camp: research accessibility web content series. 

Even before the pandemic-driven age of remote meetings, many open science and open source communities were using participatory online meeting formats to involve their members across multiple countries. Local communities have also been successful at designing inclusive formats that take languages, cultures, and identities into account. Many international initiatives host online training and community meetings using traditional information-delivery methods, such as training workshops and guest presentations, as well as participatory methods, such as group discussions, collaborative document writing, and online co-working. On one hand, these meetings aim to bring members from different time zones, languages, and identities together in a common place. On the other hand, the formats used in these calls presume that participants actively use spoken language, have a shared vocabulary, and learn in similar ways.

Such online meeting formats can be great for some people, but with intentional planning they can become broadly inclusive and accessible. Research across educational, business, and collaboration settings has shown that inclusion benefits everyone - and that diverse teams solve problems better. With this in mind, we have been experimenting with ways to increase the participation of our members in Open Life Science (OLS) training calls. In the first cohort, we facilitated this through small breakout discussions, silent note-taking, and sharing recordings of these calls with transcriptions for self-paced learning. All these aspects of our cohort calls were successful, but the calls were not live captioned, visually described, or inclusive of sign language users. In the second cohort, we integrated live captioning and introduced text-chat-based breakout rooms to provide participants with better access to our cohort calls.

In this post, we will describe how to structure online group calls for successful, multimodal collaboration among people who communicate in different ways. We’ve learned these practices along the way whilst we continue to design for better inclusion and accessibility in Open Life Science. (Please note that there are many other elements to accessibility and inclusion that are not discussed here).

If you have someone’s specific access needs in mind, always ask them

Seek their recommendations about the best way they can participate - they are the expert on their own needs and experience. You may still need to work together to find solutions, specific to the calls and the group, that meet their needs. To do this, call organisers need to know what resources are available. If your online calls are part of working with a mid-size or large community, consider creating an accessibility working group for the long term. Working group members can add perspectives from their own experience (for example with neurodiversity, dyslexia, blindness and low vision, mobility, or other apparent and non-apparent disabilities). Even if no participants on the online calls to date have been part of those communities, this input will help ensure that future participants will be welcome and that the calls will be inclusive.

If you are able, provide real-time captioning

Research shows that the majority of caption users are English language learners and hearing people - captions benefit everyone! Automated captioning is sometimes glitchy, but it still provides better access to your calls than none at all. If you can afford it, real-time human captioning is often better. For example, at the Bioinformatics Community Conference 2020, an online bioinformatics conference, 25% of all participants reported that they used the captions. From their report:

“Thanks to generous sponsorship from eLife, we added closed captioning (CC) to both pre-recorded and live talks. Over 25% of attendees (as reported in the post-conference survey) utilised these, and many - including those who did not have hearing issues - shared their appreciation. “Fantastic, very helpful,” one respondent commented about the CC. “I have good hearing, but it helped me digest more effectively anyway. (I also turn on CC on Netflix for the same reason!)” Another noted, “It was very useful. My audio was not working for a few moments, and the CC allow[ed] me to still follow the conference.” The appreciation and high uptake we saw with the CC is an example of how increasing accessibility can improve the conference experience for a wide range of participants.”

Google Meet: Google Meet (formerly Hangouts Meet) doesn’t have breakout room functionality, but it does have built-in automatic captioning, making it an affordable way to access captions when breakout rooms are not required.

Google Slides & screen sharing: Another affordable option is to have the speaker always screen-share, using Google Slides in Chrome with automatic captioning turned on so that their words are automatically transcribed - but this is best for scenarios with only a single speaker. If there are many speakers, only the person presenting will be captioned and most of the audio won’t be accessible. Before presenting, test that this functionality works. If the captions stop for any reason during the presentation, don’t be afraid to take a break to fix the issue before going forward.

Otter.ai + Zoom: For Open Life Science cohort calls, we have been using Otter.ai, the costs of which have been supported by the Software Sustainability Institute. Otter has an API integration with Zoom, which allows real-time captioning of our group calls. Although it doesn’t support captioning breakout rooms (see the next section for ways we’ve experimented with handling this), it has so far transcribed our cohort calls well, even when presentations are delivered by people with different accents, struggling only with some people’s names.

Written breakout rooms with Zoom

A breakout room is where a small group, usually 2-6 people, discusses a set task for a period of time in a mini video call with private audio, video, and chat. Since Otter doesn’t support breakout room captioning, we looked for a way to make breakout rooms inclusive without captioning, and eventually decided to split our group into two kinds of rooms: breakout rooms in spoken English and breakout rooms in written English. InterMine has used typing-only breakout rooms in the past to be inclusive of participants with low bandwidth, but this was the first time we’d experimented with them in OLS, which has a much larger participant base. Early feedback indicated that some people really enjoyed the writing-based breakout rooms, whilst others were less sure of their purpose and worried that using the Zoom chat box would send their messages to all members of the call. From this early experiment we learned a few things - and we are extremely grateful to our OLS-2 cohort call participants for bearing with us while we refine our methods.

  1. Start with a written, full-cohort collaborative documentation activity shortly before your first written breakout rooms - this helps to set the scene, and shows people how silent interactions can work and still be highly interactive and stimulating. Here is an example from one of the cohort calls in Open Life Science. 
  2. Explain why the two types of room are offered and that anyone is welcome in either - some people find it easier to communicate with speech, but others might prefer a written medium, which can provide time to reflect or be easier to deal with if the call language isn’t their first or primary language. Written/text-based collaboration can be the most accessible option for the most participants; for example, it does not rely on or require speech or hearing. (Further reading: “How to Teach with Text: Platforming Down as Disability Pedagogy”).
  3. Allow people to indicate their preferences for spoken, written, or either. This could be done via an emoji in their display name, “spoken/written/either” added to the roll call, or separate sign-up sheets for each of the three options. Adding an option for “either” allows your call group to mix more effectively.
  4. Make sure you give very explicit instructions on how the room interaction should work, e.g. where to type! A shared document allows bullet-point-style conversation threading, which is convenient, but you could also use the Zoom chat, so long as you let participants know it is private to their room and not interrupting everyone.
  5. Finally, something that is true for both spoken and written English breakout rooms: tell people what to do if they need help! This might be a Slack back channel or using the “Ask for help” feature built into Zoom. 

If you are unable to provide live captioning, you can caption the calls afterwards

If you can, record your community or training call. Make everyone aware that you will be recording, and ask them to turn their cameras off if they don’t want to appear in the videos. Communicate clearly when captions will be available after the call and how they can be accessed. This can be done in a few ways. If you upload to YouTube, AI-generated captions will be added automatically after a few hours. Please note that these auto-generated captions are known by some deaf YouTubers as "craptions" and may not provide sufficient access. The industry standard for captions to provide sufficient accessibility is 99% accuracy or higher, and accuracy rates for auto-captions vary widely based on the speaker's voice and accent - some research has indicated potentially higher error rates in auto-captions for women speakers and for non-white, non-American dialects. It is thus critical to check the quality of these captions. They can be edited easily by the person who posts (YouTube has removed its community-contributed captioning feature). 

Another cost-free option is, again, Otter.ai. Whilst real-time Otter.ai captioning requires a subscription, non-real-time captions are free for up to 4,000 minutes per month - enough for most cash-strapped projects. When sharing these with a group of people, ask them to help you improve the transcription if they spot any errors. Read more about different types of captions at The Mind Hears, a blog by a group of deaf and hard-of-hearing scientists who write about their experiences with captioning programs.

Always seek and be open to feedback

Don’t take feedback personally. It’s easy to take an “everything felt right” or “this didn’t go right at all” feeling to heart, but remember: without the honest feedback of your attendees you won’t be able to improve. Use different forms of engagement and feedback pathways to ensure that the target audience is asked for their recommendations before anything is done. Lack of access or inclusion may also be a reason that people did not participate, so try to survey people who are, or may be, interested but chose not to join the online calls. Reflect on their feedback and use it to improve.

As you start integrating different accessibility aspects into your community participation, you might not get it right the first time or for every participant - we certainly have found the need to iterate on our practices ourselves! 

The Open Life Science Otter.ai subscription is supported by the SSI Fellowships awarded to two of the co-founders. You can read their project and community reports on the OLS website: https://openlifesci.org/posts.


Want to discuss this post with us? Send us an email or contact us on Twitter @SoftwareSaved.  
