About us
We are a research lab within the University of Michigan's Computer Science and Engineering department. Our mission is to deliver rich, meaningful, and interactive sonic experiences for everyone through research in human-computer interaction, audio AI, accessible computing, and sound UX. We have two focus areas: (1) sound accessibility, which includes designing systems and interfaces to deliver sound information accessibly and seamlessly to end-users, and (2) hearing health, which includes developing hardware, algorithms, and apps for next-generation earphones and hearing aids.
We embrace the term 'accessibility' in its broadest sense, encompassing not only tailored experiences for people with disabilities, but also the seamless and effortless delivery of information to all users. Our focus on accessibility offers a window into the future, as people with disabilities have often been early adopters of modern technologies such as telephones, headphones, email, messaging, and smart speakers.
Our lab is primarily composed of HCI and AI experts. We also regularly collaborate with scientists from medicine, psychology, sociology, music, and design backgrounds. Our multi-stakeholder approach has led to substantial community impact. Many of our technologies have been publicly released (e.g., one deployed app has over 100,000 users) and have directly influenced products at companies like Microsoft, Google, and Apple. Our research has also earned multiple paper awards at premier HCI venues, has been featured in leading media outlets (e.g., CNN, Forbes, New Scientist), and is included in academic curricula worldwide.
Our current impact areas include media accessibility (e.g., enhanced captioning for movies, accessible sound augmentations for VR) and healthcare accessibility (e.g., technologies to support communication within mixed-ability physician teams, modeling patients' hearing health to improve hearing aids). Key research focuses and questions are:
Intent-Driven Sound Awareness Systems. How can sound awareness technologies model users' intent and deliver context-aware sound feedback?
Projects: AdaptiveSound | ProtoSound | HACSound | SoundWeaver
Sound Accessibility in Media. How can generative AI improve the accessibility of sounds in mainstream and new media?
Projects: SoundVR | SoundModVR | SoundShift
Next-Generation Hearing Aids & Earphones. How can next-generation earphones extract desired sounds or suppress unwanted noises to provide a seamless hearing experience? How can we dynamically capture audiometric data, model human auditory perception, and diagnose hearing-related medical conditions on the edge?
Projects: MaskSound | SonicMold | SoundShift
In the future, we envision a world where technologies expand human hearing, enabling highly personalized, seamless, and fully accessible soundscapes that dynamically adapt to users' intent, environment, and social context. We call this vision "auditory superintelligence".
If our vision and current focus areas appeal to you, please apply. We are actively recruiting PhD students and postdocs to join us in shaping the future of sound accessibility!
Recent News
Oct 30: Our CARTGPT work received the best poster award at ASSETS!
Oct 11: Soundability lab students are presenting 7 papers, demos, and posters at the upcoming UIST and ASSETS 2024 conferences!
Sep 30: We were awarded the Google Academic Research Award for Leo and Jeremy's project!
Jul 28: Two demos and one poster accepted to ASSETS/UIST 2024!
Jul 02: Two papers, SoundModVR and MaskSound, accepted to ASSETS 2024!
May 22: Our paper SoundShift, which conceptualizes mixed reality audio manipulations, accepted to DIS 2024! Congrats, Rue-Chei and team!
Mar 11: Our undergraduate student, Hriday Chhabria, accepted to the CMU REU program! Hope you have a great time this summer, Hriday.
Feb 21: Our undergraduate student, Wren Wood, accepted to the PhD program at Clemson University! Congrats, Wren!
Jan 23: Our Masters student, Jeremy Huang, has been accepted to the UMich CSE PhD program. That's two pieces of good news for Jeremy this month (the CHI paper being the first). Congrats, Jeremy!
Jan 19: Our paper detailing our brand new human-AI collaborative approach for sound recognition has been accepted to CHI 2024! We can't wait to present our work in Hawaii later this year!
Oct 24: SoundWatch received a best student paper nomination at ASSETS 2023! Congrats, Jeremy and team!
Aug 17: New funding alert! Our NIH funding proposal on "Developing Patient Education Materials to Address the Needs of Patients with Sensory Disabilities" has been accepted!
Mar 16: Professor Dhruv Jain elected as the inaugural ACM SIGCHI VP for Accessibility!