Abstract
Imagine navigating a bustling city street, the cacophony of traffic horns and conversations threatening to drown out your friend’s voice. Or perhaps you’re struggling to follow a lecture in a crowded hall, the professor’s words lost in the background hum. These are just a few of the daily challenges faced by millions of people with hearing loss. While hearing aids help, their static settings often fall short in dynamic environments. This thesis explores how data-driven machine learning methods can personalize hearing aid settings to individual preferences.
However, it is crucial to first understand how users currently manage their hearing in dynamic environments. This project’s initial exploration investigated real-world, anonymized data on hearing aid listening programs, which let users adjust settings for specific situations such as noisy streets or lectures. Analyzing data from over 32,000 users, the study revealed that a significant portion actively leverages these programs. Notably, program selection aligned with the programs’ intended use, suggesting that users personalize their hearing experience based on context. These findings provide a valuable starting point for developing hearing personalization that builds on existing user behavior and preferences.
Building on this foundation, a field study was conducted to gather a high-resolution dataset capturing discrete sound environment features and the corresponding preferred listening programs. Using a custom iOS application, experienced hearing aid users evaluated custom amplification and noise reduction settings in their daily lives. Furthermore, a joint diffusion model for time-series data is proposed that combines generative and discriminative capabilities to learn the complex relationships between sound environments and user preferences.
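To make the joint modeling idea concrete, below is a minimal sketch of such an objective, assuming a DDPM-style noising process and a PyTorch implementation; the backbone, step count, feature dimensions, and loss weighting are illustrative assumptions, not the thesis architecture. A shared network is trained both to predict the injected noise (generative) and to predict the preferred listening program (discriminative).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 100  # number of diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class JointDiffusion(nn.Module):
    """Shared backbone with two heads: one predicts the injected noise
    (generative), one predicts the preferred listening program (discriminative)."""
    def __init__(self, feat_dim, n_programs, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
        )
        self.noise_head = nn.Linear(hidden, feat_dim)    # epsilon prediction
        self.class_head = nn.Linear(hidden, n_programs)  # program prediction

    def forward(self, x_t, t):
        # condition the backbone on the normalized diffusion step
        h = self.backbone(torch.cat([x_t, t[:, None].float() / T], dim=-1))
        return self.noise_head(h), self.class_head(h)

def joint_loss(model, x0, labels, lam=0.5):
    """Denoising (generative) loss plus weighted classification loss."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alphas_cumprod[t][:, None]
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps  # forward noising step
    eps_hat, logits = model(x_t, t)
    return F.mse_loss(eps_hat, eps) + lam * F.cross_entropy(logits, labels)

# Toy batch: 32 windows of 16 sound-environment features, 4 candidate programs
model = JointDiffusion(feat_dim=16, n_programs=4)
x0 = torch.randn(32, 16)
labels = torch.randint(0, 4, (32,))
joint_loss(model, x0, labels).backward()
```

A single weighting coefficient (here `lam`) is the simplest way to trade off generative fidelity against classification accuracy in a joint objective of this kind.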
Real-world audiological data is highly sensitive, capturing intricate details about individual hearing challenges. This knowledge, ranging from speech comprehension difficulties to sound preferences, is deeply personal and raises significant privacy concerns. A federated learning (FL) approach is proposed to address these challenges, where a shared model is trained collaboratively across user smartphones, keeping audiological data local. In this implementation, user devices are treated as clients collecting and processing data locally, while model updates are aggregated on a central server without exposing raw data. To further enhance data privacy, a secret sharing technique is explored, splitting model updates into multiple shares before transmission.
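As a concrete illustration of the secret sharing step, the sketch below splits a fixed-point-encoded model update into additive shares modulo a large prime, so that no single aggregator observes the update in the clear. The share count, modulus, and fixed-point scale are assumptions chosen for illustration, not the deployed protocol.

```python
import numpy as np

PRIME = 2**31 - 1  # all share arithmetic is done modulo a large prime

def quantize(update, scale=1e6):
    """Encode a float update as fixed-point integers in the field."""
    return np.round(update * scale).astype(np.int64) % PRIME

def dequantize(x, scale=1e6):
    """Decode field elements back to signed floats."""
    x = np.asarray(x) % PRIME
    return np.where(x > PRIME // 2, x - PRIME, x) / scale

def split_into_shares(update_int, n_shares=3, rng=None):
    """Split an encoded update into additive shares that sum to it mod PRIME;
    any subset of n_shares - 1 shares reveals nothing about the update."""
    rng = rng or np.random.default_rng()
    shares = [rng.integers(0, PRIME, size=update_int.shape)
              for _ in range(n_shares - 1)]
    shares.append((update_int - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares, e.g. after aggregators sum their share across clients."""
    return sum(shares) % PRIME

# One client's model update, split across three non-colluding aggregators:
update = np.array([0.12, -0.03, 0.45])
shares = split_into_shares(quantize(update))
assert np.allclose(dequantize(reconstruct(shares)), update)
```

Because the sharing is additive, each aggregator can sum its shares across many clients before reconstruction, so the server only ever learns the aggregate update rather than any individual client's contribution.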
Beyond personalized hearing preferences, the final chapter extends these techniques to a different domain. This work, conducted at NASA’s Jet Propulsion Laboratory (JPL), investigates federated learning for robotic exploration of planetary environments, where “personalization” means learning from the map data each rover collects onboard. Multi-agent mapping is explored: each rover learns local representations that contribute to a unified map for downstream tasks such as path planning. This cross-domain application demonstrates the potential of federated learning to support autonomy in dynamic, distributed environments with minimal communication cost.
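As a minimal illustration of this aggregation pattern, the sketch below runs federated averaging over three simulated rovers: each performs local updates on its own map data, and only model parameters are exchanged. The local update rule and the map model here are hypothetical placeholders, not JPL’s mapping stack.

```python
import numpy as np

def local_update(weights, local_maps, lr=0.1, steps=5):
    """Placeholder local training: nudge weights toward the rover's own map
    statistics (stands in for gradient steps on onboard map data)."""
    w = weights.copy()
    target = local_maps.mean(axis=0)
    for _ in range(steps):
        w += lr * (target - w)
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: weighted average of client models, so only
    model parameters (never raw map data) leave each rover."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three rovers, each with locally collected map feature vectors:
rng = np.random.default_rng(0)
global_w = np.zeros(8)
rover_data = [rng.normal(size=(n, 8)) for n in (20, 35, 50)]

for _ in range(10):  # communication rounds
    updates = [local_update(global_w, d) for d in rover_data]
    global_w = fed_avg(updates, [len(d) for d in rover_data])
```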
| Original language | English |
|---|---|
| Publisher | Technical University of Denmark |
| Number of pages | 186 |
| Publication status | Published - 2024 |
Projects
- 1 Finished
- Personalizing audiology by learning behavioral graphs based on user centred AI
Szatmari, T.-I. (PhD Student), Larsen, J. E. (Main Supervisor), Sun, K. (Supervisor), Pontoppidan, N. (Supervisor), Georgakis, G. (Examiner) & Serafin, S. (Examiner)
15/10/2020 → 02/05/2025
Project: PhD