Saturday, 9 December 2017

Audio-visual word prominence detection from clean and noisy speech

Publication date: March 2018
Source: Computer Speech & Language, Volume 48
Author(s): Martin Heckmann
In this paper we investigate the audio-visual processing of linguistic prosody, more precisely the detection of word prominence, and examine how the additional visual information can be used to increase robustness when acoustic background noise is present. We evaluate the detection performance for each modality individually and perform experiments using feature and decision fusion. For the latter we also consider adaptive fusion, with fusion weights adjusted to the current acoustic noise level. Our experiments are based on a corpus of 11 English speakers which contains, in addition to the speech signal, videos of the speakers' heads.

From the acoustic signal we extract features well known to capture word prominence, such as loudness, fundamental frequency, and durational features. The analysis of the visual signal is based on features derived from the speaker's rigid head movements and movements of the speaker's mouth. We capture the rigid head movements by tracking the speaker's nose, and we represent the movements of the speaker's mouth via a two-dimensional Discrete Cosine Transform (DCT) calculated from the mouth region.

The results show that the rigid head movements, as well as movements inside the mouth region, can be used to discriminate prominent from non-prominent words. Audio-only detection yields an Equal Error Rate (EER), averaged over all speakers, of 13%; based only on the visual features we obtain an EER of 20%. For clean speech, combining the visual and acoustic features yields only a small improvement over audio-only detection. To simulate background noise we added four different noise types at varying SNR levels to the acoustic stream. The results indicate that word prominence detection is quite robust against additional background noise: even at a severe Signal-to-Noise Ratio (SNR) of −10 dB the EER rises only to 35%. Nevertheless, audio-visual fusion leads to notable improvements for detection from noisy speech, with relative EER reductions of up to 79%.
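As a concrete illustration of the mouth-region descriptor the abstract describes, the following minimal Python sketch computes a compact 2-D DCT representation of a cropped grayscale mouth image. How the ROI is obtained and how many coefficients are retained are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.fft import dctn

def mouth_dct_features(mouth_roi, n_coeffs=6):
    # mouth_roi: 2-D array (H x W) cropped around the speaker's mouth.
    # n_coeffs: number of low-order coefficients kept per axis (assumed).
    coeffs = dctn(mouth_roi.astype(np.float64), norm="ortho")
    # Keep the low-frequency top-left block, which captures coarse mouth
    # shape; frame-to-frame changes in these coefficients reflect the
    # mouth movements used for prominence detection.
    return coeffs[:n_coeffs, :n_coeffs].ravel()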
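The Equal Error Rate figures quoted above refer to the standard detection metric: the operating point at which the false-accept and false-reject rates coincide. A minimal sketch of how such an EER is typically computed from per-word prominence scores (not code from the paper):

import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    # labels: 1 for prominent words, 0 otherwise; scores: detector output.
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    # Take the threshold where false-positive and false-negative
    # rates cross, and report their average as the EER.
    idx = np.nanargmin(np.abs(fpr - fnr))
    return float((fpr[idx] + fnr[idx]) / 2.0)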
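Finally, the noise-adaptive decision fusion mentioned in the abstract can be pictured as a weighted combination of per-modality scores, with the audio weight shrinking as the estimated SNR drops. The linear weighting function below, including its SNR endpoints, is purely an illustrative assumption; the paper does not specify this form.

import numpy as np

def fuse_scores(audio_score, visual_score, snr_db, snr_lo=-10.0, snr_hi=20.0):
    # Map SNR to an audio weight in [0, 1]: full trust in the acoustic
    # stream at snr_hi and above, full trust in the visual stream at
    # snr_lo and below (endpoints are assumed values for illustration).
    w_audio = np.clip((snr_db - snr_lo) / (snr_hi - snr_lo), 0.0, 1.0)
    return w_audio * audio_score + (1.0 - w_audio) * visual_score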



