Friday, 23 February 2018

Reward Estimation for Dialogue Policy Optimisation

Publication date: Available online 24 February 2018
Source: Computer Speech & Language
Author(s): Pei-Hao Su, Milica Gašić, Steve Young
Viewing dialogue management as a reinforcement learning task enables a system to learn to act optimally by maximising a reward function. This reward function is designed to induce the system behaviour required for the target application; for goal-oriented applications, this usually means fulfilling the user's goal as efficiently as possible. However, in real-world spoken dialogue system applications, the reward is hard to measure because the user's goal is frequently known only to the user. The system can, of course, ask the user whether the goal has been satisfied, but this can be intrusive, and in practice the accuracy of the user's response has been found to be highly variable. This paper presents two approaches to tackling this problem. Firstly, a recurrent neural network is utilised as a task success predictor, pre-trained on off-line data to estimate task success during subsequent on-line dialogue policy learning. Secondly, an on-line learning framework is described whereby a dialogue policy is jointly trained alongside a reward function modelled as a Gaussian process with active learning. This Gaussian process operates on a fixed-dimension embedding that encodes each variable-length dialogue. The dialogue embedding is generated in both a supervised and an unsupervised fashion using different variants of a recurrent neural network. The experimental results demonstrate the effectiveness of both the off-line and on-line methods, enabling practical on-line training of dialogue policies in real-world applications.
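A minimal sketch may help make the first idea concrete: a recurrent neural network consumes per-turn feature vectors and is pre-trained on logged dialogues with known success labels. This is an illustration in PyTorch, not the authors' code; the feature dimension, network sizes, and the synthetic data are all assumptions.

import torch
import torch.nn as nn

class SuccessPredictor(nn.Module):
    """GRU over per-turn feature vectors; final hidden state -> P(success)."""
    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, turns: torch.Tensor) -> torch.Tensor:
        # turns: (batch, num_turns, feat_dim)
        _, h_last = self.rnn(turns)                   # h_last: (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_last[-1])).squeeze(-1)

# Off-line pre-training on logged dialogues with known success labels.
model = SuccessPredictor(feat_dim=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

dialogues = torch.randn(16, 20, 32)            # 16 toy dialogues, 20 turns each
success = torch.randint(0, 2, (16,)).float()   # 1 = user goal satisfied

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(dialogues), success)
    loss.backward()
    opt.step()

During subsequent on-line policy learning, the predictor's output can stand in for the unobservable user goal when computing the reward, so the user need not be asked directly.

The second idea can be sketched similarly: a Gaussian process models task success over fixed-dimension dialogue embeddings and, via active learning, queries the user for a label only when its prediction is uncertain. To keep the sketch short, a GP regressor stands in for the paper's GP model, the embedding is a toy mean-pooling rather than the paper's RNN encoder, and the uncertainty threshold is an arbitrary assumption.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def embed(dialogue):
    # Toy stand-in for the learned RNN dialogue embedding.
    return dialogue.mean(axis=0)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
X, y = [], []                                     # labelled embeddings and success labels

for _ in range(50):
    dialogue = rng.normal(size=(20, 32))          # one simulated dialogue
    x = embed(dialogue)
    if len(X) < 5:
        uncertain = True                          # seed the model with a few labels
    else:
        _, std = gp.predict(np.array([x]), return_std=True)
        uncertain = std[0] > 0.3                  # assumed active-learning threshold
    if uncertain:
        label = float(rng.integers(0, 2))         # in practice: ask the user
        X.append(x)
        y.append(label)
        gp.fit(np.array(X), np.array(y))

Because labels are requested only for uncertain dialogues, the annotation burden on real users stays low, which is what makes jointly training the policy and the reward model on-line practical.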



from #ORL-AlexandrosSfakianakis via ola Kala on Inoreader http://ift.tt/2GIDHZr
