Conference paper
Assessing the robustness of recurrent neural networks to enhance the spectrum of reverberated speech
Date
2020
Registered in:
978-3-030-41005-6
10.1007/978-3-030-41005-6_19
322-B9-105
Authors
Paniagua Peñaranda, Carolina
Zeledón Córdoba, Marisol
Coto Jiménez, Marvin
Institution
Abstract
Implementing speech recognition and voice analysis systems in real-life contexts presents important challenges, especially when the recording conditions are adverse. One condition that degrades the signal, and that has been widely studied in recent years, is reverberation. Reverberation is produced by sound wave reflections that reach the microphone from multiple directions.
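As an illustrative aside (not part of the original abstract), reverberation is commonly simulated by convolving a clean waveform with a room impulse response (RIR); a minimal Python sketch, assuming NumPy and SciPy are available:

    import numpy as np
    from scipy.signal import fftconvolve

    def reverberate(clean, rir):
        # Convolve the clean waveform with a room impulse response and
        # trim to the original length; normalization avoids clipping.
        rev = fftconvolve(clean, rir)[: len(clean)]
        return rev / (np.max(np.abs(rev)) + 1e-10)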
Several Deep Learning-based methods have been proposed to enhance speech signals degraded by reverberation, and they have proven effective. Recently, recurrent neural networks, especially those with long short-term memory (LSTM) units, have shown remarkable results in these tasks.
In this work, a proposal is presented to evaluate the robustness of these networks in learning several reverberation conditions without any prior information. The results show that fewer LSTM networks need to be trained to enhance speech signals, since a single network can learn several conditions simultaneously, in contrast with the current practice of training one network for each condition or noise level.
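A minimal sketch of the kind of LSTM spectral-mapping network described above, assuming a Keras setup with hypothetical layer sizes and input dimensions (not the authors' actual architecture); the network maps reverberated spectral frames to clean frames and can be trained on data pooled from several reverberation conditions at once:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

    N_BINS = 257    # assumed spectral dimension (e.g., 512-point STFT magnitudes)
    SEQ_LEN = 100   # assumed number of frames per training sequence

    model = Sequential([
        LSTM(256, return_sequences=True, input_shape=(SEQ_LEN, N_BINS)),
        LSTM(256, return_sequences=True),
        TimeDistributed(Dense(N_BINS, activation="linear")),
    ])
    model.compile(optimizer="adam", loss="mse")

    # x_train: reverberated spectra pooled from all conditions,
    # y_train: the corresponding clean spectra (same shape).
    # model.fit(x_train, y_train, epochs=50, batch_size=32)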
The evaluation is based on quality measures of the signal's spectrum (spectral distance and perceptual quality), compared with the reverberated version. The results confirm that a single LSTM network, trained on all five conditions simultaneously, can enhance the signal in any of them, with results equivalent to training a separate network for each reverberation condition.
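As one example of a spectral distance measure of the kind mentioned above (the specific metrics are not detailed in this abstract), a log-spectral distance between the clean and enhanced magnitude spectra could be computed as follows:

    import numpy as np

    def log_spectral_distance(clean_mag, enhanced_mag, eps=1e-10):
        # Mean log-spectral distance (in dB) over frames; inputs are
        # (frames, bins) arrays of magnitude spectra.
        log_diff = 20.0 * (np.log10(clean_mag + eps) - np.log10(enhanced_mag + eps))
        per_frame = np.sqrt(np.mean(log_diff ** 2, axis=1))
        return float(np.mean(per_frame))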