
Saturday, 9 December 2017

Situated reference resolution using visual saliency and crowdsourcing-based priors for a spoken dialog system within vehicles


Publication date: March 2018
Source: Computer Speech & Language, Volume 48
Author(s): Teruhisa Misu
In this paper, we address issues in situated language understanding in a moving car. More specifically, we propose a reference resolution method to identify user queries about specific target objects in their surroundings. We investigate methods of predicting which target object is likely to be queried given a visual scene, and what kinds of linguistic cues users naturally provide to describe a given target object in a situated environment. We propose methods that incorporate the visual saliency of the scene as a prior. Crowdsourced statistics of how people describe an object are also used as a prior. We have collected situated utterances from drivers using our research system, which was embedded in a real vehicle. We demonstrate that the proposed algorithms improve the target identification rate by 15.1% absolute over a baseline method that does not use the visual-saliency-based prior and relies on a public database with limited category information.
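The core idea described in the abstract, scoring each candidate object by combining a visual-saliency prior and crowdsourced description statistics with the linguistic cues in the user's query, can be illustrated with a minimal sketch. The scoring function, weights, and data structures below are assumptions made for illustration only, not the paper's actual model.

```python
# Minimal illustrative sketch: combine a visual-saliency prior with
# crowdsourced description statistics to resolve a situated reference.
# NOT the paper's actual model; all names, weights, and probabilities
# here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class CandidateObject:
    name: str                              # e.g. "church"
    saliency: float                        # visual-saliency prior for this object
    crowd_cue_probs: dict = field(default_factory=dict)  # P(cue | object) from crowd data

def score(obj: CandidateObject, utterance_cues: list[str],
          alpha: float = 1.0, smoothing: float = 1e-3) -> float:
    """Unnormalized posterior-style score: saliency prior x cue likelihood."""
    likelihood = 1.0
    for cue in utterance_cues:
        likelihood *= obj.crowd_cue_probs.get(cue, smoothing)
    return (obj.saliency ** alpha) * likelihood

def resolve_reference(candidates: list[CandidateObject],
                      utterance_cues: list[str]) -> CandidateObject:
    """Pick the candidate object with the highest combined score."""
    return max(candidates, key=lambda o: score(o, utterance_cues))

# Example query: "What is that tall building on the right?"
candidates = [
    CandidateObject("church", saliency=0.7,
                    crowd_cue_probs={"tall": 0.6, "right": 0.5}),
    CandidateObject("cafe", saliency=0.2,
                    crowd_cue_probs={"tall": 0.05, "right": 0.4}),
]
print(resolve_reference(candidates, ["tall", "right"]).name)  # -> "church"
```

In this sketch the saliency term acts as the prior over which object is likely to be queried, while the crowdsourced cue probabilities model how people tend to describe each object; the exponent alpha is a hypothetical knob for weighting the prior against the linguistic evidence.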



from #ORL-AlexandrosSfakianakis via ola Kala on Inoreader http://ift.tt/2Ao9mRf
