Emotion recognition has a privacy problem – here’s how to fix it

February 26, 2020

With devices listening everywhere you go, privacy concerns are endemic to advancing technology. A virtual assistant that learns to adapt to a user’s mood, for example, can offer more useful, human-like interactions. But what if the audio powering those insights, stored on company servers, also encodes the user’s gender and demographic information – leaving the user open to identification by the company or, worse, any malicious eavesdropper?

Research by CSE PhD student Mimansa Jaiswal and Professor Emily Mower Provost proposes more secure technologies built on machine learning (ML). Using adversarial ML, they’ve demonstrated that these sensitive identifiers can be “unlearned” from audio representations before they’re stored, so that emotion recognition models are trained on stripped-down representations of the speaker rather than the raw recording.
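
To make the idea concrete, here is a minimal sketch of how this kind of adversarial “unlearning” is commonly set up, using a gradient reversal layer in PyTorch. The feature dimensions, the choice of gender as the sensitive attribute, the network sizes, and the weight `lambda_adv` are all illustrative assumptions, not the authors’ exact architecture:

```python
# Sketch of adversarial "unlearning": an encoder is trained so that its
# representation predicts emotion well while carrying little information
# about a sensitive attribute (here, gender). All sizes are assumptions.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient's sign on the
    backward pass, pushing the encoder to *hurt* the adversary."""
    @staticmethod
    def forward(ctx, x, lambda_adv):
        ctx.lambda_adv = lambda_adv
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_adv * grad_output, None

encoder = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 64))
emotion_head = nn.Linear(64, 4)  # e.g., 4 emotion classes (assumed)
gender_head = nn.Linear(64, 2)   # adversary predicting the sensitive attribute

params = (list(encoder.parameters()) + list(emotion_head.parameters())
          + list(gender_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.CrossEntropyLoss()
lambda_adv = 1.0  # strength of the privacy constraint (assumed)

def training_step(features, emotion_labels, gender_labels):
    z = encoder(features)
    emotion_loss = criterion(emotion_head(z), emotion_labels)
    # The adversary sees z through the reversal layer: it learns to
    # predict gender as well as it can, while the reversed gradient
    # trains the encoder to strip gender information out of z.
    adv_loss = criterion(gender_head(GradientReversal.apply(z, lambda_adv)),
                         gender_labels)
    loss = emotion_loss + adv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this setup, only the stripped-down representation `z` needs to be kept for downstream emotion recognition; the raw audio, with its identifying detail, never has to be stored.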

“We find that the performance is either maintained, or there is a slight decrease in performance for some setups,” Jaiswal and Mower Provost write. Jaiswal hopes these findings will help make machine learning research safer and more secure for users in the real world.