An online training platform for people with disabilities, specifically visual and hearing impairments. The platform's mission is to teach web integration (front-end development) so that learners can become financially independent and join the workforce.
Date: October 2020 - April 2021
Duration: 7 months
Role: Lead UX Designer
Team: 4 UX Designers
Mission: User research, user testing, wireframing, UI design
Sponsor: Tanaguru and O'Clock
Problem Statement
Design a digital platform for online classes that is accessible to visually impaired and hearing-impaired learners.
In France, there are 6 million people with auditory disabilities and 2 million people who are visually impaired.
These populations are often isolated and suffer from chronic unemployment.
Integra11y is a project by Tanaguru and O'Clock to find a solution that provides hands-on digital training to people with disabilities.
Project Goal
Define user needs and create problem statements.
Identify the pain points of existing e-learning platforms.
Design an accessible virtual training platform for visually impaired and hearing-impaired target audiences.
UX Methods Used
Team Canvas
A workshop to create a shared understanding of the project, aligning the vision and objectives of all team members and stakeholders.
Competitive Analyses
Analyses of e-learning schools (Udacity, OpenClassrooms...) and virtual platforms (Zoom, Google Meet...) against the accessibility criteria recommended by the RGAA (Référentiel Général d'Accessibilité pour les Administrations) and the five "Easy Checks" from the WCAG (Web Content Accessibility Guidelines).
Exploratory Interviews
Conducted to familiarize ourselves with what it means to live with a visual or hearing disability. Participant profile: two users were blind, one was deaf from birth, and one used a hearing aid.
Semi-Structured Interviews
Based on a predefined protocol, the exploratory axes were past experiences of school and professional life, habits, preferences and pain points with online learning, and familiarity with digital tools. Participant profile: thirteen participants; eight were blind/visually impaired, seven were deaf/hearing impaired.
Affinity Map
Affinity mapping allowed us to identify variables such as level of familiarity with digital tools, motivation to undertake web training, and current employment status. These variables helped us define personas.
Persona
Personas were based on grouping target users into archetypes along variables such as practice of LSF (French Sign Language), use of the NVDA screen reader, level of Braille use, motivation for distance learning, personality traits, and strength of support system.
Hybrid Workshop
Interview & Focus Group
To create an Experience Map for the blind persona, we needed more information on the precise pain points they encounter during training (in person or virtual) and on the opportunities to improve their experience with our solution. Participant profile: three blind participants.
Photo-Elicitation Workshop
We showed the hearing-impaired target audience a photo montage of nine photos to take them on an imaginary, chronological journey (past, present, and future), and then generate a series of comments inspired by the tone of the photos or clues they noticed. Participant profile: three participants, deaf from birth and using hearing aids.
Design Sprint
3 days + 1 day of user testing
A design sprint was organised because we needed to prototype fast and test it in order to obtain objective insights into our hypotheses.
Five participants: one decision-maker (the CEO of Tanaguru) and four designers.
Tools used: Miro, Google Meet
Workshop framework: AJ&Smart
User testing
We tested a quick-and-dirty Wizard of Oz prototype to get initial feedback on our solution. Participant profile: five hearing-impaired participants.
Tools used: Google Meet and Figma
Due to the pandemic, we could not run user testing with blind users.
Co-sketching Workshop
A workshop bringing together stakeholders, the development team, and end users. Participant profile: Product Owner, UI designer, lead front-end developer, and four end users (deaf from birth, using hearing aids).
The pandemic severely restricted team meetings and contact with users, so all sessions were run virtually on Google Meet and Miro.
Key Focus Areas
What technologies do blind/visually impaired and deaf/hearing-impaired target users use to navigate the web?
What are the constraints (visual, auditory, cognitive) and the workarounds they have incorporated in order to be comfortable digitally?
Is the technological accessibility of the platform the only criterion for the success of the solution, or do subjective factors also matter: social support from colleagues, the cognitive load of the programme, extra help after class, special sessions with the professor?
Identify the elements of a successful offline class experience and translate them to a digital platform.
Key Findings
Technologies used by the target audience
Visually impaired users use the JAWS and NVDA screen readers. Hearing-impaired users who are deaf with no hearing aid/implants use sign language; users with a hearing aid/implants use LPC (Langue française Parlée Complétée, cued speech), captions, and RogerVoice.
Our solution needs real-time subtitles and options for sign language and LPC. Transcripts of every class should be downloadable. The platform should be compatible with NVDA and JAWS.
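As a minimal sketch of the captions requirement (assuming a browser-based client; the element id and function names below are hypothetical, not the project's actual code), real-time subtitles can be exposed to NVDA and JAWS through an ARIA live region, so each new caption line is announced without stealing keyboard focus:

```typescript
// Sketch: a screen-reader-friendly live caption region.
// Ids and names are illustrative assumptions.

function createCaptionRegion(parent: HTMLElement): HTMLElement {
  const region = document.createElement("div");
  region.id = "live-captions";                 // hypothetical id
  region.setAttribute("role", "log");          // new entries are announced in order
  region.setAttribute("aria-live", "polite");  // wait for the screen reader to finish speaking
  region.setAttribute("aria-atomic", "false"); // announce only the added line, not the whole log
  parent.appendChild(region);
  return region;
}

function appendCaption(region: HTMLElement, line: string): void {
  const entry = document.createElement("p");
  entry.textContent = line;
  region.appendChild(entry);
  // Trim old lines so the DOM and the screen-reader buffer stay small.
  while (region.childElementCount > 50) {
    region.removeChild(region.firstElementChild!);
  }
}
```

The role="log" plus aria-live="polite" combination is a standard pattern for streaming text such as captions or chat, and it costs nothing for sighted users.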
The majority of users are new to programming and have limited knowledge of their own technical devices.
Onboarding sessions, audio/subtitled video tutorials, and technical help during class sessions are necessary.
Characteristics of hearing disability to take into account to make the platform accessible
A hearing aid transmits all sounds at once, which can be quite disturbing, so a class with more than five students can become incomprehensible for hearing-impaired users.
On the future platform, a session should not comprise more than five students.
Each disability is unique and its requirements are different; each user has their own workarounds.
Users should be able to customize the platform to their needs.
On current platforms, the LSF/LPC interpreter window is too small, not resizable, and of poor quality.
The windows on the platform need to be resizable.
Too much graphic content on a platform makes it inaccessible for the visually impaired.
We need to make minimal use of icons, and graphic items need to be labelled with alt text, as sketched below.
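For illustration only (a sketch; the asset path and label strings are assumptions), labelling graphic items means giving informative images an alt text and icon-only controls an accessible name, while hiding purely decorative glyphs from the accessibility tree:

```typescript
// Sketch: labelling graphic items for screen readers.
// Asset paths and label strings are illustrative assumptions.

// An informative image gets alt text that NVDA/JAWS will read out.
const logo = document.createElement("img");
logo.src = "logo.svg"; // hypothetical asset
logo.alt = "Integra11y logo";

// An icon-only button gets its accessible name from aria-label,
// and the decorative glyph is hidden from assistive technology.
const helpButton = document.createElement("button");
helpButton.setAttribute("aria-label", "Request help from the assistant");

const glyph = document.createElement("span");
glyph.setAttribute("aria-hidden", "true"); // decorative only
glyph.textContent = "✋";
helpButton.appendChild(glyph);
```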
Heavy cognitive load
Because of the assistive devices involved (hearing aids for hearing-impaired users, screen readers for visually impaired users), a five-hour intensive class would be impossible to keep up with.
Sessions should be shorter (half a day), while the whole course can be stretched over a longer period.
Ideation
Design Sprint - 4 Days, 5 participants
Objectives of the Sprint
Find ways of making a virtual classroom platform accessible for deaf/hearing-impaired users.
Some "How Might We" (HMW) questions we tackled during the sprint:
How might we personalize interface windows so that they are suitable for a variety of hearing impairments?
How might we save user preferences?
The solutions proposed by the participants
The proposed solutions were the hypotheses that formed the basis for prototyping and for creating user-testing scenarios:
Which window arrangement allows for a better understanding of the captions, the LSF interpreter and the lip reading?
Are horizontal captions easier to understand than vertical ones?
What is the minimum size for the captions to be easily read?
Although users said during our workshop that they preferred a customizable interface, we wanted to test this hypothesis by offering them two screens arranged differently.
Prototype (Wizard of Oz)
Prototype Interface Layout A
Prototype Interface Layout B
Method used for creating the mockup
Showing static screens did not allow us to test the cognitive dimension. We needed a "real class scenario" with a talking teacher, scrolling subtitles, and a dynamic screen-sharing window.
We wrote two different two-minute scripts, one for each layout.
One of us played the part of the teacher, and we recorded the sequence with QuickTime.
Using Zoom, we filmed the LSF interpreter, who had the course audio to interpret.
Google Meet provided the subtitles.
With Figma, we assembled all these elements to give the impression of a real classroom.
User Testing
Five participants
User profile: users with hearing implants, verbal, and deaf from birth.
Each session lasted 30 minutes; tools used: Google Meet and Figma.
User testing objectives:
Check the usability as well as the accessibility of the virtual classroom.
Learn users' accessibility preferences between the two versions of the same screen.
Graph of layout preference (A or B): 3/5 participants preferred layout A.
Graph of participant performance: almost all participants (4/5) understood the subtitles, but none (0/5) could lip-read or understand the sign language interpreter.
Graph of the post-test satisfaction questionnaire for layout A captions: 5/5 participants found the captions easy to read and preferred the horizontal format; however, only 2/5 declared that the captions were easy to follow.
Graph of the post-test satisfaction questionnaire for layout B subtitles: 4/5 participants found the captions easy to read; 2/5 declared they were easy to follow and 2/5 declared they were not. Only 1/5 preferred this format, 2/5 opposed it, and 1/5 found it functional without a strong preference.
Graph of the post-test satisfaction questionnaire: 3/5 participants found the instructions easy to understand during the test. 2/5 found the arrangement of elements in layout A fairly good and 2/5 found it moderately good. 3/5 found the interface moderately easy to use, and 3/5 found navigation of the interface intuitive.
Co-creation workshop for an 'Ideal Cockpit'
Participants: six end users and stakeholders (decision-maker, UI designer, and the development team).
The 30-minute user testing sessions were not sufficient for detailed feedback; we needed to know more about users' constraints and their vision.
The stakeholders and development team were involved in the design process:
The workshop gave us the opportunity to share our user research results with the whole team in a hands-on way.
Our aim was to bring the development team into direct contact with users to build empathy, and to ensure the solution was the fruit of collaboration rather than the output of a UX team working in a silo.
Wireframing the most promising ideas from the design thinking workshop
Interrupt the teacher in case of urgent need:
The student can interrupt the teacher by clicking the "accessibility" button. The teacher's assistant (helper) is notified; if they accept the request, they solve the problem directly or ask the teacher for help. A sketch of this escalation flow follows.
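As an illustration (all names and states below are hypothetical; the project did not specify an implementation), the accessibility request can be modelled as a tiny state machine: raised by the student, then either resolved by the helper or escalated to the teacher:

```typescript
// Sketch: the "interrupt the teacher" escalation flow.
// Names and states are illustrative assumptions.

type RequestState = "raised" | "resolved" | "escalated-to-teacher";

interface AccessibilityRequest {
  studentId: string;
  description: string;
  state: RequestState;
}

function raiseRequest(studentId: string, description: string): AccessibilityRequest {
  // Clicking the "accessibility" button creates a request and notifies the helper.
  return { studentId, description, state: "raised" };
}

function helperAccepts(req: AccessibilityRequest, canSolveDirectly: boolean): AccessibilityRequest {
  // The helper either solves the problem directly or asks the teacher for help.
  return { ...req, state: canSolveDirectly ? "resolved" : "escalated-to-teacher" };
}
```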
Reduce cognitive load:
With the notes tool, the student can take notes at any time with a built-in timestamp, which helps them navigate the transcript downloaded after the course. They can also download the entire transcript. A sketch of the timestamping idea follows below.
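As an illustration of the timestamping idea (a sketch only; note storage and naming were not specified in the project, so everything here is an assumption), each note records its offset from the session start so it can later be matched against the transcript's timestamps:

```typescript
// Sketch: notes with built-in timestamps for navigating the class transcript.
// All names and formats are illustrative assumptions.

interface Note {
  offsetSeconds: number; // time elapsed since the session started
  text: string;
}

class NotesTool {
  private notes: Note[] = [];
  constructor(private sessionStart: Date = new Date()) {}

  add(text: string): Note {
    const offsetSeconds = Math.round(
      (Date.now() - this.sessionStart.getTime()) / 1000,
    );
    const note: Note = { offsetSeconds, text };
    this.notes.push(note);
    return note;
  }

  // Export as "[hh:mm:ss] text" lines that line up with transcript timestamps.
  export(): string {
    const fmt = (s: number) => new Date(s * 1000).toISOString().substring(11, 19);
    return this.notes.map((n) => `[${fmt(n.offsetSeconds)}] ${n.text}`).join("\n");
  }
}
```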
Integra11y Cockpit Wireframe
Presets button: to view, save, or delete the saved presets (a sketch of preset persistence follows this list)
Windows: all windows are resizable and movable
Caption window: customisable for better legibility (text and background colour)
Facilitate lip reading: spatial analysis is enabled by default. The system uses AI to detect the teacher's presence and analyse their movements within a given area, which makes it possible to keep the camera cropped on the teacher's lips. A manual zoom lets the student configure the view to their needs.
Optimisation of screen space: features such as chat and subtitles are only shown once activated via a button.
Communication with the professor on demand.
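As a minimal sketch of the presets idea (assuming browser localStorage as the store; the storage key and the shape of a layout are assumptions, not the project's actual data model), each preset captures window positions, sizes, visibility, and caption styling so a user's customisation survives between sessions:

```typescript
// Sketch: saving, loading, and deleting interface presets via localStorage.
// The key name and data shapes are illustrative assumptions.

interface WindowLayout {
  id: string;       // e.g. "captions", "interpreter", "screen-share"
  x: number;
  y: number;
  width: number;
  height: number;
  visible: boolean; // hidden until activated via its button
}

interface Preset {
  name: string;
  windows: WindowLayout[];
  captionStyle: { fontSizePx: number; color: string; background: string };
}

const STORAGE_KEY = "integra11y.presets"; // hypothetical key

function loadPresets(): Preset[] {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
}

function savePreset(preset: Preset): void {
  // Replace any existing preset with the same name, then persist the list.
  const presets = loadPresets().filter((p) => p.name !== preset.name);
  presets.push(preset);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(presets));
}

function deletePreset(name: string): void {
  const remaining = loadPresets().filter((p) => p.name !== name);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(remaining));
}
```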
Solution: one-screen scenario, for a persona with a hearing aid/implants using captions and lip reading for comprehension
Solution: two-screen scenario, for a persona with a hearing aid/implants using captions and lip reading for comprehension
Learning and next steps
The overall recruitment process was very difficult, and we could not get a representative sample of our users due to the pandemic; the available users were already heavily solicited.
The solutions need to be tested through unmoderated user testing tools with a larger sample size of users to ensure we have addressed the majority of accessibility issues.
There was a convenience bias in the initial exploratory interviews: recruitment was done mainly through LinkedIn, so the majority of early participants were employed and not particularly in need of an online web design course.
There is high variability in our user testing results due to the small number of users; we need at least 15 users (three sessions with five users each) for satisfactory feedback on the efficiency and usability of our prototype.
We need to address the lack of a prototype for blind/visually impaired users, who were left out of the later study due to COVID constraints.
Our recruits were mostly hearing-impaired users rather than deaf users who rely solely on sign language. Whether showing the sign language interpreter's webcam window and the other neighbouring windows all at once has a negative effect on accessibility and cognition for such profiles needs to be verified through user testing.