Sound Detection App

User Interface and Experience Design | Accessibility | App Design


Tool: Adobe XD 

Problem Statement

Some individuals have limited or no access to sound

Audio input, along with visual input, is important for our survival. 

However, Deaf or Hard of Hearing (DHH) individuals may have limited or no access to auditory information, making it difficult for them to interpret the world around them.

Our Solution

Detecting sound and visualizing only vital information

Deaf or Hard of Hearing individuals rely more heavily on their visual senses to compensate for hearing loss. Hence, we provide a minimalist visualization.


Problem Scope

466 million

people worldwide live with disabling hearing loss

Target Audience 

Deaf and Hard-of-Hearing smartphone users

It was crucial to understand our target audience's problems in order to design a solution that truly fits their needs. Hence, we conducted thorough research with relevant users to gather the required information.

RIT has 1,100+ DHH students from various fields

Rochester Institute of Technology is home to the National Technical Institute for the Deaf (NTID), a college focused on creating the most powerful, successful network of deaf and hard-of-hearing professionals in the world. Being part of RIT gave us first-hand experience working with DHH students from various industry domains.

My Role



I helped draft the interview scripts and facilitated the sessions with participants, which informed our design direction.

I led the concept-generation stage and formulated the design direction used in the final design.



I was in charge of creating the sketches and the low- and high-fidelity prototypes that reflected our concepts and allowed us to test our designs.




Total Participants: 4 DHH users


Users unhappy with current technology



Users expressed their need for customizing data

Users needed better data visualization


I want to know where the sound is coming from, like how far and which way

It should be able to add sounds if the app doesn’t work

I want to custom the information, I want to change the icons

Key Goals

Easy Accessibility

Constant Availability

Minimalistic Visualization

Reduced Cognitive Load

We refined our workflow based on these user needs


Workflow: Detect Sound → New Sound? → Add New Sound, with a Text 911 option in emergencies
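As a sketch of this workflow (hypothetical Python pseudologic; the function name, return values, and data structure are illustrative, not the app's actual implementation):

```python
def on_sound_detected(sound_name, known_sounds):
    """Route a detected sound through the refined workflow.

    known_sounds maps sound names to their settings, e.g.
    {"fire alarm": {"text_911": True}} (illustrative structure).
    """
    if sound_name not in known_sounds:
        # Unknown sound: offer to add it to the database
        return "prompt_add_new_sound"
    if known_sounds[sound_name].get("text_911"):
        # Emergency-enabled sound: surface the Text 911 option
        return "offer_text_911"
    # Otherwise just visualize the detected sound
    return "show_visualization"
```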



Design Rationale

Home Screen

The main purpose of the application is to detect sound; hence, sound detection is the first screen the user sees when they open the application.


When the user taps the circle in the middle of the screen, the system captures and analyzes environmental sound, then shows the required information about it.

Various Sound Detection

Data Visualization

After detecting sound, the information about the sound is displayed.

Icons to represent sounds in visual form  
The textual representation of sounds
Direction from which the sound is coming
Severity Scale
Additional information about sound
Feature to text 911 in case of emergency

Reasons behind feature choices

Text 911 

This feature allows DHH users to quickly share their information in case of an emergency. The sound's name, its location relative to the user's device, and its severity, along with the user's name and location, will be shared. 

Users can modify all the sounds to enable or disable this feature. 
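The emergency message could bundle the fields described above roughly like this (a hedged sketch; the function name, field order, and format are assumptions, not the app's actual message):

```python
def build_911_text(sound_name, direction, severity, user_name, user_location):
    """Compose the emergency text from the detected sound and the user's profile."""
    return (
        f"Emergency reported by {user_name} at {user_location}. "
        f"Detected sound: {sound_name} ({direction}, severity: {severity})."
    )
```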

Add Sound

Users can add new, custom sounds. For example, a user can add a friend's voice to the list so the app detects that friend calling them. 

Need for Profile

Beyond a social and virtual presence, account creation was needed to store the vital information that would be sent to authorities in case of an emergency.

Also, the user's notification choices, sound list, and other preferences can be stored.


Sound List

The list shows the entire sound database. Users can add new sounds, delete unwanted ones, and customize existing entries according to their needs.

The red dash on the right side of a sound indicates that the "Text 911" feature is enabled for it.


Reasons behind design choices

Sounds are represented in icons
Icons let users recognize sounds quickly, easily, and intuitively. Research shows that DHH users have a heightened visual sense, so icons are the primary representation of sound. Icons are also given more screen space to draw the user's attention.
Textual representation
In case a user cannot relate an icon to its sound, each sound is represented textually as well.
Important data visualization
Our interview participants were most concerned about the sound itself, its direction, and its severity level. 
The direction is shown in a circular, compass-like format around the sound icon.
Color Scheme
As with a traffic light, people generally associate green with safety, red with danger, and yellow with caution. Hence, we chose green to indicate low severity, yellow to indicate mid severity, and red to indicate high severity.
Severity Scale

Sound severity is divided into three levels based on loudness, measured in decibels (dB).

0 to 75 dB is considered low severity

76 dB to 120 dB is considered mid severity 

Sound above 120 dB is considered high severity.
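The three bands above map directly to a small classifier. This is an illustrative sketch (including the traffic-light color coding described earlier), not the app's actual code:

```python
def severity(db):
    """Map a loudness reading in decibels to a severity band and its color."""
    if db <= 75:
        return ("low", "green")
    if db <= 120:
        return ("mid", "yellow")
    return ("high", "red")
```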

Scale Sub-division

Each severity level is further divided into 5 parts depending on the phone's distance from the sound source.


These five divisions work as a scale of 1 to 5, where 1 is close to the sound source and 5 is far

(1: very close, 2: close, 3: neutral, 4: far, 5: very far). 
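A sketch of the 1-to-5 sub-scale, assuming illustrative distance cut-offs (the actual thresholds were not specified in the design):

```python
def distance_scale(meters):
    """Map the phone's distance from the sound source to the 1-5 sub-scale
    (1: very close ... 5: very far). Cut-offs are assumed for illustration."""
    cutoffs = [2, 5, 10, 20]  # meters; purely illustrative values
    for scale, limit in enumerate(cutoffs, start=1):
        if meters <= limit:
            return scale
    return 5  # beyond the last cut-off: very far
```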

Emoji for Severity

Keeping color-blindness in mind, we chose to indicate the different levels of severity with different emoticons as well.

Also, if the user is taking a walk, the application would detect many sounds, and reading detailed information about each one would be tedious. Hence, we chose emoji as a quick, glanceable indicator.