Abstract

Recent work on deep neural networks as acoustic models for automatic speech recognition (ASR) has demonstrated substantial performance improvements. We introduce a model that uses a deep recurrent autoencoder neural network to denoise input features for robust ASR. The model is trained on stereo (noisy and clean) audio features to predict clean features given noisy input. It makes no assumptions about how noise affects the signal, nor about the existence of distinct noise environments; instead, it can learn to model any type of distortion or additive noise given sufficient training data. We demonstrate that the model is competitive with existing feature-denoising approaches on the Aurora2 task, and that it outperforms a tandem approach in which deep networks are used to predict phoneme posteriors directly.
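
The denoising setup the abstract describes (a recurrent network regressing from noisy features to their paired clean features) can be illustrated in a few lines. The following is a minimal PyTorch sketch, not the authors' implementation; the feature dimension (39, e.g. MFCCs with deltas), layer sizes, optimizer, and all names are illustrative assumptions.

import torch
import torch.nn as nn

class RecurrentDenoisingAutoencoder(nn.Module):
    # Hypothetical sketch: a stacked RNN runs over noisy feature frames,
    # and a linear layer predicts the corresponding clean frames.
    def __init__(self, feat_dim=39, hidden_dim=500, num_layers=3):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden_dim, num_layers=num_layers,
                          nonlinearity="tanh", batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, noisy):            # noisy: (batch, time, feat_dim)
        hidden, _ = self.rnn(noisy)
        return self.out(hidden)          # predicted clean features, same shape

model = RecurrentDenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stereo training data: each noisy sequence is paired with its clean version.
# Random tensors stand in for real (noisy, clean) feature pairs here.
noisy = torch.randn(8, 100, 39)
clean = torch.randn(8, 100, 39)

optimizer.zero_grad()
loss = loss_fn(model(noisy), clean)     # reconstruction error vs. clean features
loss.backward()
optimizer.step()

Because the loss compares the output only to the clean channel, the network is pushed to remove whatever distortion the noisy channel contains, with no explicit noise model required.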

Keywords

Noise reduction, Reduction (mathematics), Computer science, Noise (video), Artificial neural network, Speech recognition, Recurrent neural network, Noise measurement, Artificial intelligence, Mathematics

Publication Info

Year: 2012
Type: article
Citations: 337
Access: Closed

Citation Metrics

337 (OpenAlex)

Cite This

Andrew L. Maas, Quoc V. Le, Tyler M. O'Neil et al. (2012). Recurrent neural networks for noise reduction in robust ASR. Interspeech 2012. https://doi.org/10.21437/interspeech.2012-6

Identifiers

DOI: 10.21437/interspeech.2012-6