Improved Cepstra Minimum-Mean-Square-Error Noise Reduction Algorithm for Robust Speech Recognition

This study introduces an improved cepstra minimum-mean-square-error noise reduction algorithm for robust speech recognition. It explores the effectiveness of conventional noise-robust front-ends with Gaussian mixture models (GMMs) and deep neural networks (DNNs). The research demonstrates the benefits of single-channel robust front-ends in deep learning models and showcases t-SNE plots for clean and noisy utterances in training and testing sets. Overall, it highlights the importance of well-designed single-channel robust front-ends in enhancing speech recognition accuracy.



Presentation Transcript


  1. Improved Cepstra Minimum-Mean-Square-Error Noise Reduction Algorithm for Robust Speech Recognition. Jinyu Li, Yan Huang, Yifan Gong (Microsoft)

  2. Robust Front-End Conventional noise-robust front-ends work very well with Gaussian mixture models (GMMs). Single-channel robust front-ends were reported to be unhelpful for multi-style deep neural network (DNN) training. The DNN's layer-by-layer structure provides a feature extraction strategy that automatically derives powerful noise-resistant features.

  3. t-SNE Plot for Paired Clean and Noisy Utterances in Training Set (LFB)

  4. t-SNE Plot for Paired Clean and Noisy Utterances in Training Set (Layer 1)

  5. t-SNE Plot for Paired Clean and Noisy Utterances in Training Set (Layer 3)

  6. t-SNE Plot for Paired Clean and Noisy Utterances in Training Set (Layer 5)

  7. t-SNE Plot for Paired Clean and Noisy Utterances in Testing Set (Layer 1)

  8. t-SNE Plot for Paired Clean and Noisy Utterances in Testing Set (Layer 3)

  9. t-SNE Plot for Paired Clean and Noisy Utterances in Testing Set (Layer 5)

  10. Robust Front-End Conventional noise-robust front-ends work very well with Gaussian mixture models (GMMs). Single-channel robust front-ends were reported to be unhelpful for multi-style deep neural network (DNN) training, although multi-channel signal processing still helps. In this study, we show that a single-channel robust front-end is still beneficial to deep learning models as long as it is well designed.

  11. Cepstra Minimum Mean Square Error (CMMSE) Reported very effective in dealing with noise when used with GMM-based acoustic models. The solution to CMMSE for each element of the dimension-wise MFCC:
$\hat{c}(t,d) = E[c(t,d) \mid \mathbf{y}_t] = \sum_b a_{d,b}\, E[\log m(t,b) \mid y(t,b)]$
Given a weak independence assumption between Mel filter-banks, the problem is reduced to finding the log-MMSE estimator of each Mel filter-bank's output:
$\hat{m}(t,b) = \exp\big( E[\log m(t,b) \mid y(t,b)] \big)$
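
As a concrete illustration (not the authors' code): under the weak independence assumption, the MFCC estimate is just the DCT of the per-filterbank log-MMSE estimates. A minimal Python sketch, assuming log_m_hat holds $E[\log m(t,b) \mid y(t,b)]$:

```python
import numpy as np
from scipy.fft import dct

def cepstra_from_log_mmse(log_m_hat: np.ndarray, n_ceps: int = 13) -> np.ndarray:
    """log_m_hat: (T, B) per-frame log-MMSE estimates of the Mel filter-banks.
    Returns (T, n_ceps) MFCCs: c_hat[t, d] = sum_b a[d, b] * log_m_hat[t, b]."""
    # The orthonormal DCT-II rows play the role of the weights a_{d,b}.
    return dct(log_m_hat, type=2, norm='ortho', axis=1)[:, :n_ceps]
```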

  12. 4-Step Processing of CMMSE
1. Voice activity detection (VAD): detects the speech probability at every time-filterbank bin;
2. Noise spectrum estimation: uses the estimated speech probability to update the estimate of the noise spectrum;
3. Gain estimation: uses the noisy speech spectrum and the estimated noise spectrum to calculate the gain of every time-filterbank bin;
4. Noise reduction: applies the estimated gain to the noisy speech spectrum to generate the clean spectrum.
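
To make the data flow concrete, here is a self-contained toy version of the four steps in Python; the threshold VAD and Wiener-style gain are simplified stand-ins for the MCRA and log-MMSE estimators detailed on the following slides:

```python
import numpy as np

def cmmse_toy(y, alpha_d=0.95, snr_thresh=2.0, g_floor=0.1):
    """y: (T, B) Mel filter-bank power spectrogram of noisy speech."""
    T, B = y.shape
    noise = y[:5].mean(axis=0)                # init noise from first few frames
    m_hat = np.empty_like(y)
    for t in range(T):
        # 1. VAD: hard speech/non-speech decision per filterbank bin
        p = (y[t] / np.maximum(noise, 1e-10) > snr_thresh).astype(float)
        # 2. Noise spectrum estimation: recursive update, frozen where speech
        a = alpha_d + (1.0 - alpha_d) * p
        noise = a * noise + (1.0 - a) * y[t]
        # 3. Gain estimation (Wiener-style stand-in for the LSA gain)
        gamma = y[t] / np.maximum(noise, 1e-10)   # posterior SNR
        xi = np.maximum(gamma - 1.0, 0.0)         # crude prior SNR
        gain = np.maximum(xi / (1.0 + xi), g_floor)
        # 4. Noise reduction: apply gain to the noisy spectrum
        m_hat[t] = gain * y[t]
    return m_hat
```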

  13. CMMSE block diagram: input signal → Mel filtering → filter-bank spectrum calculation → VAD (MCRA) → noise spectrum estimation (MCRA) → filter-bank gain estimation → noise reduction → cleaned filter-bank spectrum

  14. VAD CMMSE uses a minima-controlled recursive averaging (MCRA) noise tracker (Cohen and Berdugo, 2002) to detect the speech probability $p(t,b)$ in each filterbank bin $b$ and time $t$.
I. Cohen and B. Berdugo, "Noise estimation by minima controlled recursive averaging for robust speech enhancement," IEEE Signal Processing Letters, 9(1), pp. 12-15, 2002.
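
A minimal single-band sketch of the idea, assuming a simplified minima tracker (the actual MCRA of Cohen and Berdugo uses windowed minimum statistics with more careful smoothing):

```python
import numpy as np

def mcra_speech_prob(y_band, alpha_s=0.8, alpha_p=0.2, delta=5.0, win=100):
    """y_band: (T,) power trajectory of one filterbank band.
    Returns (T,) speech-presence probabilities."""
    T = len(y_band)
    p = np.zeros(T)
    s = y_band[0]
    s_hist = []
    for t in range(T):
        s = alpha_s * s + (1.0 - alpha_s) * y_band[t]   # smoothed power
        s_hist.append(s)
        s_min = min(s_hist[-win:])                      # running minimum over window
        indicator = 1.0 if s / max(s_min, 1e-10) > delta else 0.0
        p[t] = alpha_p * (p[t-1] if t > 0 else 0.0) + (1.0 - alpha_p) * indicator
    return p
```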

  15. Noise Spectrum Estimation The noise power spectrum $\lambda(t,b)$ is estimated using MCRA (Cohen and Berdugo, 2002) as
$\lambda(t,b) = \tilde{\alpha}\,\lambda(t-1,b) + (1.0 - \tilde{\alpha})\,y(t,b)$
with $\tilde{\alpha} = \alpha_d + (1.0 - \alpha_d)\,p(t,b)$
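
In code this update is a two-liner; a sketch for one frame, with alpha_d as the assumed base smoothing constant:

```python
import numpy as np

def update_noise(noise_prev, y_t, p_t, alpha_d=0.95):
    """noise_prev, y_t, p_t: (B,) arrays for B filterbank bands.
    Where speech is likely (p near 1), alpha_tilde nears 1 and the estimate freezes."""
    a = alpha_d + (1.0 - alpha_d) * p_t       # alpha_tilde per band
    return a * noise_prev + (1.0 - a) * y_t
```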

  16. Gain Estimation The gain of time-filterbank bin $(t,b)$:
$G(t,b) = \frac{\xi(t,b)}{1+\xi(t,b)} \exp\left( \frac{1}{2} \int_{v(t,b)}^{\infty} \frac{e^{-x}}{x}\, dx \right)$, with $v(t,b) = \frac{\xi(t,b)}{1+\xi(t,b)}\,\gamma(t,b)$
Posterior SNR: $\gamma(t,b) = \frac{y(t,b)}{\lambda(t,b)}$
Prior SNR, calculated using a decision-directed approach (DDA):
$\xi(t,b) = \beta\, \frac{\hat{m}(t-1,b)}{\lambda(t-1,b)} + (1.0-\beta)\, F(t,b)$, where $F(t,b) = \max(\gamma(t,b) - 1, 0.0)$
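
A hedged Python sketch of this gain, using scipy.special.exp1 for the exponential integral $\int_v^\infty e^{-x}/x\,dx$; m_hat_prev and noise_prev are the previous frame's cleaned spectrum and noise estimate, and the parameter values are illustrative:

```python
import numpy as np
from scipy.special import exp1

def lsa_gain(y_t, noise_t, m_hat_prev, noise_prev, beta=0.98, xi_min=1e-3):
    """All array arguments are (B,) filterbank vectors; returns the (B,) gain."""
    gamma = y_t / np.maximum(noise_t, 1e-10)                  # posterior SNR
    f = np.maximum(gamma - 1.0, 0.0)                          # ML prior SNR term
    xi = beta * m_hat_prev / np.maximum(noise_prev, 1e-10) + (1.0 - beta) * f
    xi = np.maximum(xi, xi_min)                               # decision-directed prior SNR
    v = xi / (1.0 + xi) * gamma
    return xi / (1.0 + xi) * np.exp(0.5 * exp1(np.maximum(v, 1e-10)))
```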

  17. Noise Reduction $\hat{m}(t,b) = G(t,b)\, y(t,b)$

  18. CMMSE block diagram (recap): input signal → Mel filtering → filter-bank spectrum calculation → VAD (MCRA) → noise spectrum estimation (MCRA) → filter-bank gain estimation → noise reduction → cleaned filter-bank spectrum

  19. Speech Probability Estimation Reliable speech probability estimation is critical to the estimation of the noise spectrum: $\lambda(t,b) = \tilde{\alpha}\,\lambda(t-1,b) + (1.0 - \tilde{\alpha})\,y(t,b)$. We use IMCRA (Cohen, 2003) to estimate the speech probability $p(t,b)$ in each time-filterbank bin.
I. Cohen, "Noise spectrum estimation in adverse environments: improved minima controlled recursive averaging," IEEE Trans. Speech and Audio Processing, Vol. 11, No. 5, pp. 466-475, 2003.

  20. Improving CMMSE block diagram: input signal → Mel filtering → filter-bank spectrum calculation → VAD (IMCRA) → noise spectrum estimation (IMCRA) → filter-bank gain estimation → noise reduction → cleaned filter-bank spectrum

  21. Refined Prior SNR Estimation We do a further gain estimation to get a converged gain:
$\xi'(t,b) = G(t,b)\,\gamma(t,b)$
$v'(t,b) = \frac{\xi'(t,b)}{1+\xi'(t,b)}\,\gamma(t,b)$
$G'(t,b) = \frac{\xi'(t,b)}{1+\xi'(t,b)} \exp\left( \frac{1}{2} \int_{v'(t,b)}^{\infty} \frac{e^{-x}}{x}\, dx \right)$
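
In code this is one extra pass reusing the LSA gain formula with the refined prior SNR; a sketch, assuming gain and gamma come from the first pass:

```python
import numpy as np
from scipy.special import exp1

def refine_gain(gain, gamma):
    """gain, gamma: (B,) first-pass gain and posterior SNR; returns refined gain."""
    xi2 = np.maximum(gain * gamma, 1e-3)      # refined prior SNR estimate
    v2 = xi2 / (1.0 + xi2) * gamma
    return xi2 / (1.0 + xi2) * np.exp(0.5 * exp1(np.maximum(v2, 1e-10)))
```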

  22. Improving CMMSE block diagram: input signal → Mel filtering → filter-bank spectrum calculation → VAD (IMCRA) → noise spectrum estimation (IMCRA) → filter-bank gain estimation → noise reduction → cleaned filter-bank spectrum

  23. OMLSA The optimally-modified log-spectral amplitude (OMLSA) speech estimator (Cohen and Berdugo, 2001) is used to modify the time-filterbank gain:
$\tilde{G}(t,b) = G(t,b)^{\,p(t,b)}\; G_{0}^{\,1-p(t,b)}$
I. Cohen and B. Berdugo, "Speech enhancement for non-stationary noise environments," Signal Processing, Vol. 81, No. 11, pp. 2403-2418, 2001.
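
The modification itself is a single geometric interpolation between the gain and a floor $G_0$, weighted by the speech-presence probability; a sketch with an illustrative floor value:

```python
import numpy as np

def omlsa_gain(gain, p, g0=0.1):
    """gain, p: (B,) gain and speech-presence probability; g0 is the gain floor."""
    return gain ** p * g0 ** (1.0 - p)
```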

  24. Gain Smoothing It is better to use OMLSA only when the speech probability estimation is reliable. In this stage we instead do cross-filterbank gain smoothing, which partially addresses the weak independence assumption between Mel filter-banks:
$\bar{G}(t,b) = \big( G(t,b-1) + G(t,b) + G(t,b+1) \big) / 3$
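
A sketch of the three-tap smoothing across neighboring filterbank bands, with edge bands replicated so the output keeps its shape:

```python
import numpy as np

def smooth_gain(gain):
    """gain: (B,) gains for one frame; returns the (B,) smoothed gains."""
    padded = np.pad(gain, 1, mode='edge')
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
```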

  25. Improving CMMSE block diagram: input signal → Mel filtering → filter-bank spectrum calculation → VAD (IMCRA) → noise spectrum estimation (IMCRA) → filter-bank gain estimation → filter-bank gain smoothing → noise reduction → cleaned filter-bank spectrum

  26. Two-stage Processing The noise reduction process is not perfect due to factors such as imperfect noise estimation. A second-stage noise reduction can further reduce the noise. We use OMLSA together with gain smoothing for the gain modification in the second stage because the residual noise has less impact on speech probability estimation after the first-stage noise reduction.
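
A rough composition of the two stages, reusing the helpers sketched above (update_noise, lsa_gain, refine_gain, smooth_gain, omlsa_gain); y, p1, and p2 are assumed inputs (the noisy Mel spectrogram and per-stage IMCRA speech probabilities), and this is an outline of the data flow, not the authors' exact implementation:

```python
import numpy as np

def run_stage(spec, p, modify_gain):
    """One noise-reduction stage: spec (T, B), p (T, B) speech probabilities,
    modify_gain: callable applied to the raw gain (smoothing and/or OMLSA)."""
    noise = spec[:5].mean(axis=0)
    out = np.empty_like(spec)
    m_prev, n_prev = spec[0].copy(), noise.copy()
    for t in range(spec.shape[0]):
        noise = update_noise(noise, spec[t], p[t])
        gamma = spec[t] / np.maximum(noise, 1e-10)
        g = modify_gain(lsa_gain(spec[t], noise, m_prev, n_prev), gamma, p[t])
        m_prev, n_prev = g * spec[t], noise.copy()
        out[t] = m_prev
    return out

# Stage 1: refined gain + smoothing; stage 2: refined gain + OMLSA + smoothing.
stage1 = run_stage(y, p1, lambda g, gamma, p: smooth_gain(refine_gain(g, gamma)))
stage2 = run_stage(stage1, p2,
                   lambda g, gamma, p: smooth_gain(omlsa_gain(refine_gain(g, gamma), p)))
```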

  27. Improved Cepstra Minimum Mean Square Error (ICMMSE) block diagram. Stage 1: input signal → Mel filtering → filter-bank spectrum calculation → VAD (IMCRA) → noise spectrum estimation (IMCRA) → filter-bank gain estimation → filter-bank gain smoothing → noise reduction → 1st-stage cleaned filter-bank spectrum. Stage 2: VAD (IMCRA) → noise spectrum estimation → filter-bank gain estimation → filter-bank gain OMLSA + smoothing → noise reduction → final cleaned filter-bank spectrum.

  28. Experiments
Task | Training Data | Acoustic Model
Aurora 2 | 8440 utterances | GMM

  29. Experiments
Task | Training Data | Acoustic Model
Aurora 2 | 8440 utterances | GMM
Chime 3 | 1600 real utterances + 7138 simulated utterances | feed-forward DNN

  30. Experiments
Task | Training Data | Acoustic Model
Aurora 2 | 8440 utterances | GMM
Chime 3 | 1600 real utterances + 7138 simulated utterances | feed-forward DNN
Cortana | 3400hr live data | LSTM-RNN

  31. Aurora 2 WER (%) by SNR:
Method | Clean | 20dB | 15dB | 10dB | 5dB | 0dB | -5dB | Avg. (0-20dB)
Baseline | 1.39 | 2.69 | 3.60 | 6.04 | 14.38 | 43.41 | 75.93 | 13.67
CMMSE | 1.48 | 2.21 | 3.21 | 5.65 | 13.01 | 38.36 | 72.80 | 12.17
1-stage ICMMSE | 1.20 | 1.99 | 2.91 | 5.30 | 12.59 | 34.02 | 68.47 | 10.87
2-stage ICMMSE | 1.10 | 1.85 | 2.71 | 4.97 | 11.57 | 32.06 | 67.45 | 10.19
[Chart: relative WER reduction from the baseline per condition for CMMSE, 1-stage ICMMSE, and 2-stage ICMMSE]

  32. Improvement Breakdown
Method | Avg. WER
Baseline | 13.67
CMMSE | 12.17

  33. Improvement Breakdown
Method | Avg. WER
Baseline | 13.67
CMMSE | 12.17
+ IMCRA | 11.66

  34. Improvement Breakdown
Method | Avg. WER
Baseline | 13.67
CMMSE | 12.17
+ IMCRA | 11.66
+ OMLSA | 10.99

  35. Improvement Breakdown
Method | Avg. WER
Baseline | 13.67
CMMSE | 12.17
+ IMCRA | 11.66
+ OMLSA | 10.99
+ refined prior SNR | 10.87

  36. Improvement Breakdown
Method | Avg. WER
Baseline | 13.67
CMMSE | 12.17
+ IMCRA | 11.66
+ OMLSA | 10.99
+ refined prior SNR | 10.87
+ gain smoothing, - OMLSA (1-stage ICMMSE) | 10.74

  37. Improvement Breakdown
Method | Avg. WER
Baseline | 13.67
CMMSE | 12.17
+ IMCRA | 11.66
+ OMLSA | 10.99
+ refined prior SNR | 10.87
+ gain smoothing, - OMLSA (1-stage ICMMSE) | 10.74
+ 2nd-stage processing (2-stage ICMMSE) | 10.19

  38. Chime 3 WER (%) by front-end (FE) and test condition:
FE | Test | Real | Simulated
Baseline | Clean | N/A | 7.56
CMMSE | Clean | N/A | 7.64
ICMMSE | Clean | N/A | 7.41
Baseline | Noisy | 18.18 | 18.95
CMMSE | Noisy | 16.76 | 17.32
ICMMSE | Noisy | 15.79 | 16.73
[Chart: relative WER reduction from the baseline on the clean, real noisy, and simulated noisy test sets for CMMSE and ICMMSE]

  39. Chime 3 WER Breakdown with Real and Simu. Noisy Test Set

  40. Cortana WER (%) by SNR range:
Method | 20dB above | 10-20dB | 0-10dB
Baseline | 13.17 | 20.8 | 27.03
CMMSE | 12.64 | 19.83 | 26
ICMMSE | 12.71 | 18.51 | 24.71
[Chart: relative WER reduction from the baseline per SNR range for CMMSE and ICMMSE]

  41. Conclusion A new robust front-end, ICMMSE, is proposed to improve the previous CMMSE front-end with several advanced components:
- The IMCRA algorithm helps to generate more accurate speech probabilities.
- The refined prior SNR estimation helps to obtain a converged gain.
- Either cross-filterbank gain smoothing or OMLSA is helpful to further modify the gain function.
- The two-stage processing helps to reduce the residual noise left after the first-stage processing.
ICMMSE is superior regardless of the underlying models and evaluation tasks.

  42. Result Summary
Task | Training Data | Acoustic Model | Relative WER Reduction
Aurora 2 | 8440 utterances | GMM | 25.46%
Chime 3 | 1600 real utterances + 7138 simulated utterances | feed-forward DNN | 11.98%
Cortana | 3400hr live data | LSTM-RNN | 11.01%

  43. Thank You
