Deep learning algorithms are widespread in audio and speech processing, from state-of-the-art research to the applications on our smartphones. This growing deployment of deep learning-based audio and speech processing algorithms would not have been possible without the lightning-fast progress of computer science in both hardware and software.
Recently, employing deep learning in audio and speech processing has yielded significant improvements in system performance compared to signal processing methods based on conventional machine learning algorithms. The automatic feature learning in deep networks makes them well suited to learning representations of audio and speech signals and to building complex mappings between acoustic features and targets. Deep neural networks now have a wide range of applications in audio and speech processing, including automatic speech recognition, speech enhancement, speech intelligibility improvement, multi-talker localization, noise PSD estimation, attended speech identification, beamforming, hearing aid development, and acoustic echo cancellation.
An even more interesting research trend focuses on new strategies for deep neural network-based speech processing, such as state-of-the-art hybrid approaches (e.g., combinations of deep neural networks and hidden Markov models) and multi-task learning models.
Authors are encouraged to submit original research articles, reviews, theoretical and critical perspectives, and viewpoint articles in the field of advanced deep learning methods for audio and speech processing.