Saturday, 14 April 2018

Comparing human and automatic speech recognition in simple and complex acoustic scenes


Publication date: Available online 14 April 2018
Source: Computer Speech & Language
Author(s): Constantin Spille, Birger Kollmeier, Bernd T. Meyer
Previous comparisons of human speech recognition (HSR) and automatic speech recognition (ASR) have shown that humans outperform ASR systems in nearly all speech recognition tasks. However, recent progress in ASR has led to substantial improvements in recognition accuracy, so it is unclear how large the task-dependent human-machine gap remains. This paper investigates the gap between HSR and ASR based on deep neural networks (DNNs) in different acoustic conditions, with the aim of quantifying differences and identifying processing strategies that should be considered in ASR. We find that DNN-based ASR reaches human performance for single-channel, small-vocabulary tasks in the presence of speech-shaped noise and in multi-talker babble noise, an important difference from previous human-machine comparisons: the speech reception threshold (SRT), i.e., the signal-to-noise ratio yielding a 50% word recognition rate, is approximately -7 to -8 dB for both HSR and ASR. However, in more complex spatial scenes with diffuse noise and moving talkers, the SRT gap amounts to approximately 12 dB. Based on cross comparisons that use oracle knowledge (e.g., the speakers' true positions), incorrect responses are attributed to localization errors or to missing pitch information needed to distinguish speakers of different genders. In terms of the SRT, localization errors and missing spectral information account for 2.1 and 3.2 dB, respectively. The comparison hence identifies specific components of ASR that could benefit from insights from auditory signal processing.
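The speech reception threshold defined in the abstract is the SNR at which the psychometric function (word recognition rate versus SNR) crosses 50%. As a minimal sketch of how such a threshold could be read off measured data, the following hypothetical Python snippet linearly interpolates between bracketing measurement points; the data values are made up for illustration and are not from the paper.

```python
def estimate_srt(snrs, rates, target=0.5):
    """Return the SNR (dB) at which the recognition rate crosses `target`.

    snrs  -- SNR values in dB, sorted ascending
    rates -- corresponding word recognition rates in [0, 1]
    """
    # Walk over adjacent measurement pairs and look for a bracket of `target`.
    for (s0, r0), (s1, r1) in zip(zip(snrs, rates), zip(snrs[1:], rates[1:])):
        if r0 <= target <= r1:
            # Linear interpolation between the two bracketing points.
            return s0 + (target - r0) * (s1 - s0) / (r1 - r0)
    raise ValueError("target rate is not bracketed by the measurements")

# Illustrative (invented) data, chosen so the threshold lands near the
# -7 dB SRT reported for the small-vocabulary condition:
snrs = [-12, -10, -8, -6, -4]
rates = [0.05, 0.20, 0.42, 0.58, 0.80]
print(estimate_srt(snrs, rates))  # → -7.0
```

In practice a smooth psychometric function (e.g., a logistic) would be fitted to the data before reading off the 50% point, but linear interpolation conveys the same idea with no dependencies.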



