Signal

An application that integrates smart glasses and mobile platforms to help Deaf or Hard-of-Hearing (DHH) and hearing people communicate with ease by connecting an online interpreter to the conversation.

User-Centered Design Methods

Smart-glasses and Mobile App Design

Accessibility

Concept Design

 

Tool: Figma

Problem Statement

The modes of communication used by Deaf or Hard-of-Hearing (DHH) people and hearing people are different, which leads to a major communication gap.

Currently, DHH people rely on methods like typing text messages, speech-to-text software, and the old-fashioned pen and paper. The problem with all of these methods is that they are time-consuming and demand a great deal of effort from DHH people.

Overview

To come up with a user-centric design, we first had to understand our users

Our team, being a group of hearing individuals, could only imagine the issues that arose on a daily basis for our deaf and hard-of-hearing peers. But for designing a user-centric solution, imagination was not going to get us far enough!

 

Hence, at every step of the project, from start to finish, our team consulted expert users, who in our case were deaf and hard-of-hearing individuals. Every design and feature decision was first verified with the expert users; only after a certain level of approval was a decision included in the proposed concept.

Expert User

To understand the pain points of our users, we interviewed expert users

Expert users are people who represent the target audience. For this project, our expert user was a Mechanical Engineering undergraduate student who identified as a Deaf or Hard-of-Hearing individual.

User-Centric Design Methods

Empathize

Test

Ideate

Design

Prototype

Problem Scope

466 million

people worldwide live with disabling hearing loss

Deaf and Hard-of-Hearing smartphone users

Deaf or Hard-of-Hearing individuals rely more on their visual senses to compensate for hearing impairment, hence we provide a minimalist visual design. It was crucial to understand our target audience's problems in order to design a solution that fits their needs, so we conducted thorough research with relevant users to draw out the required information.

Target Audience

RIT has 1,100+ DHH students from various fields


Rochester Institute of Technology is home to the National Technical Institute for the Deaf (NTID), a college focused on creating the most powerful, successful network of deaf and hard-of-hearing professionals in the world. Being part of RIT gave us first-hand experience working with DHH students from various industry domains.

My Role

01

Research

I helped draft scripts for the interviews and facilitated them with participants, which informed our design direction.

02

Ideation

I led the concept generation stage and formulated the design direction used in the final design.

03

Design

I was in charge of creating sketches and low- and high-fidelity prototypes that reflected our concepts and allowed us to test our designs.

Interviews

Total Participants: 4 DHH users

Insights:

100% of users were unhappy with current technology

50% of users expressed a need to customize data

75% of users needed better data visualization

"I want to know where the sound is coming from, like how far and which way."

"It should be able to add sounds if the app doesn't work."

"I want to custom the information, I want to change the icons."

Key Goals

Easy Accessibility

Constant Availability

Information Architecture

Minimalistic Visualization

Feasibility

Reduced Cognitive Load

We refined our workflow based on the user needs:

Home → Detect Sound → Emergency?

Emergency? — YES → Text 911; NO → New Sound?

New Sound? — YES → Add New Sound
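The decision flow above can be sketched as a small routing function. This is an illustrative sketch only; the function and parameter names are assumptions, not part of the actual app:

```python
def next_screen(is_emergency: bool, is_new_sound: bool) -> str:
    """Route a detected sound through the refined workflow:
    Home -> Detect Sound -> Emergency? -> New Sound?
    """
    if is_emergency:
        return "Text 911"       # emergency sounds trigger the Text 911 feature
    if is_new_sound:
        return "Add New Sound"  # unrecognized sounds can be added to the sound list
    return "Home"               # otherwise, return to the home screen
```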

Design Rationale

Home Screen

The main purpose of the application is to detect sound, hence it is set as the first screen the user sees when they open the application.

 

When the user taps the circle shown in the middle of the screen, the system captures and analyzes environmental sound, then shows the relevant information about it.

Various Sound Detection

Data Visualization

After detecting sound, the information about the sound is displayed.

Icons to represent sounds in visual form  
Textual representation of sounds
Direction from which the sound is coming
Severity scale
Additional information about the sound
Feature to text 911 in case of emergency

Reasons behind feature choices

Text 911 

This feature allows DHH users to quickly share their information in case of an emergency. The sound's name, its location relative to the user's device, and its severity, along with the user's name and location, will be shared.

Users can enable or disable this feature for each sound in the list.

Add Sound

Users can add new or custom sounds. For example, a user can add a friend's voice to the list so the app detects when that friend calls them.

Need for Profile

Beyond a social and virtual presence, account creation was needed to store the vital information that would be sent to authorities in case of an emergency.

The user's notification choices, sound list, and other preferences are stored as well.

 

Sound List

The list shows the entire sound database. Users can add new sounds, delete unwanted ones, and customize existing entries according to their needs.

The red dash on the right side of the sound represents that the "Text 911" feature is enabled.

 

Reasons behind design choices

Sounds are represented as icons
So that users can recognize sounds quickly, easily, and intuitively. Research shows that DHH users have a heightened visual sense, hence icons are the primary representation of sound. Icons are also given more space to draw the user's attention.
Textual representation
In case the user cannot relate an icon to its sound, each sound is represented textually as well.
Important data visualization
Our interview participants were most concerned about the sound itself, its direction, and its severity level.
The direction is shown in a circular, compass-like format around the sound icon.
Color scheme
As with a traffic light, people associate green with positive, red with negative, and yellow in between. Hence we chose green to indicate low severity, yellow to indicate mid severity, and red to indicate high severity.
Severity Scale

Sound severity is divided into three levels based on loudness, measured in decibels (dB).

0 to 75 dB is considered low severity.

76 dB to 120 dB is considered mid severity.

Sound above 120 dB is considered high severity.
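The three-level scale can be written as a simple threshold check. The thresholds and color pairings come from the design above; the function name is an illustrative assumption:

```python
def classify_severity(loudness_db: float) -> str:
    """Map a loudness reading in decibels to the app's three severity levels."""
    if loudness_db <= 75:
        return "low"   # shown in green
    if loudness_db <= 120:
        return "mid"   # shown in yellow
    return "high"      # shown in red
```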
 

Scale Sub-division

Each severity level is further divided into five parts depending on the phone's distance from the sound source.

These five divisions work as a scale from 1 to 5, where 1 means close to the sound source and 5 means far

(1: very close, 2: close, 3: neutral, 4: far, 5: very far).
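Combining the severity level with the five-part proximity sub-scale gives a compact label for each detected sound. The thresholds that map a measured distance onto the 1-to-5 scale are not specified in the design, so this sketch takes the proximity step as an input; all names are illustrative assumptions:

```python
# Labels for the 1-5 proximity sub-scale described above (1 = very close, 5 = very far).
PROXIMITY_LABELS = {1: "very close", 2: "close", 3: "neutral", 4: "far", 5: "very far"}

def sound_label(severity: str, proximity: int) -> str:
    """Build a display label for a detected sound, e.g. 'high severity, very close'."""
    return f"{severity} severity, {PROXIMITY_LABELS[proximity]}"
```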

Emoji for
color-blindness

Keeping color-blindness in mind, we chose to indicate the different levels of severity with different emoticons as well.

Also, if the user is out for a walk, the application would detect many sounds, and reading detailed information about each one would be tedious. Emoticons let the user grasp severity at a glance.

 
