Predicting audio quality accurately has long been a challenge in audio technology. Researchers Thomas Biberger and Stephan D. Ewert, along with their team, have introduced the computationally efficient monaural and binaural audio quality model, eMoBi-Q. The model is notable because it integrates both monaural and binaural auditory features, offering a more comprehensive approach to evaluating audio quality.
The initial validation of eMoBi-Q covered six diverse audio datasets. These datasets encompass quality ratings for both music and speech, processed through algorithms commonly used in modern hearing devices, such as acoustic transparency, feedback cancellation, and binaural beamforming. The model's ability to perform well across such a broad range of audio types and processing techniques underscores its versatility and robustness.
Building on this foundation, the researchers have now expanded eMoBi-Q to account for the perceptual effects of sensorineural hearing loss (HL) on audio quality. This extension is crucial because it addresses the needs of a significant portion of the population who experience hearing impairment. The model has been enhanced with a nonlinear auditory filterbank, which is designed to incorporate loudness as a sub-dimension for predicting audio quality. This is particularly important because altered loudness perception is a prevalent issue among listeners with hearing impairment.
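The paper does not publish code, but the general idea of deriving a loudness-based sub-dimension from a compressively processed filterbank output can be sketched in a few lines. Everything below is an illustrative assumption, not eMoBi-Q's actual signal path: the FFT-based band split stands in for the model's nonlinear auditory filterbank, the 0.3 exponent is a common textbook approximation of compressive loudness growth, and the function names are invented for this example.

```python
import numpy as np

def band_loudness(signal, fs, bands, exponent=0.3):
    """Illustrative specific-loudness proxy: per-band spectral energy
    passed through a compressive nonlinearity. A hypothetical stand-in
    for a nonlinear auditory filterbank, not the eMoBi-Q front end."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    loudness = []
    for lo, hi in bands:
        energy = spectrum[(freqs >= lo) & (freqs < hi)].sum()
        # Compressive growth: doubling energy far less than doubles loudness,
        # mimicking the compression found in healthy cochlear processing.
        loudness.append(energy ** exponent)
    return np.array(loudness)

def loudness_difference(reference, processed, fs, bands):
    """Toy quality feature: mean absolute per-band loudness difference
    between a reference and a processed signal. Identical signals give 0."""
    ref_l = band_loudness(reference, fs, bands)
    proc_l = band_loudness(processed, fs, bands)
    return float(np.mean(np.abs(ref_l - proc_l)))
```

A feature like this could flag processing that audibly alters loudness, which matters for listeners with hearing loss precisely because their loudness perception is already altered; how eMoBi-Q actually weights such a sub-dimension is described in the paper itself.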
The integration of loudness into the model is not just a technical upgrade; it's a step towards more inclusive audio technology. By considering loudness as a sub-dimension of audio quality, the model can help select reliable auditory features for hearing-impaired listeners. This could lead to more effective and personalized hearing aid fittings, ultimately improving the listening experience for those with hearing loss.
The parameters of the filterbank and subsequent processing stages were informed by the physiologically-based (binaural) loudness model proposed by Pieper et al. in 2018. This study presents and discusses the initial implementation of the extended binaural quality model, marking an important milestone in the ongoing effort to refine and enhance audio quality prediction.
The implications of this research are far-reaching. For producers and developers, the ability to predict audio quality more accurately can lead to better sound design and audio processing techniques. For enthusiasts, it means a more immersive and satisfying listening experience. Moreover, the focus on hearing impairment highlights the importance of accessibility in technology, ensuring that advancements benefit everyone, regardless of their auditory capabilities.
In summary, the work of Biberger, Ewert, and their team represents a significant advancement in the field of audio technology. By expanding the eMoBi-Q model to include the perceptual effects of hearing loss, they are paving the way for more inclusive and effective audio solutions. This research not only pushes the boundaries of what is possible in audio quality prediction but also underscores the importance of considering the diverse needs of all listeners.