In this work, we study the problem of user preference learning, using parameter setting for a hearing aid (HA) as an example application. We propose an agent that interacts with an HA user in order to collect the most informative data and learns the user's preferences for HA parameter settings. Bayesian approximate inference is used in the agent to infer the user model (preference function). To assess the quality of the learned preferences, we propose the normalized weighted Kullback-Leibler (KL) divergence between the true and agent-assigned user response distributions. This divergence also characterizes the informativeness of the data collected by probing the user, and the resulting data therefore allows for efficient user model learning. The efficiency of our approach is validated by numerical simulations.
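The paper's exact definition of the normalized weighted KL divergence is not reproduced here; the Python sketch below shows one plausible form, in which per-setting KL terms between the true and agent-assigned user response distributions are averaged with normalized weights. The function name, the weighting scheme, and the example numbers are illustrative assumptions, not the authors' implementation.

import numpy as np

def weighted_kl_divergence(p_true, q_agent, weights, eps=1e-12):
    """Illustrative weighted KL divergence between a true user response
    distribution (p_true) and an agent-assigned distribution (q_agent).

    `weights` reflects how much each probed setting contributes; the exact
    weighting and normalization used in the paper may differ.
    """
    p = np.asarray(p_true, dtype=float) + eps    # avoid log(0)
    q = np.asarray(q_agent, dtype=float) + eps
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize weights to sum to one
    kl_terms = np.sum(p * np.log(p / q), axis=-1)  # per-setting KL(p || q)
    return float(np.sum(w * kl_terms))           # weighted average over settings

# Example (hypothetical): responses to 3 probed HA settings, 2 response options each
p_true  = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])   # true user model
q_agent = np.array([[0.8, 0.2], [0.5, 0.5], [0.3, 0.7]])   # learned model
weights = np.array([1.0, 2.0, 1.0])                        # probing weights
print(weighted_kl_divergence(p_true, q_agent, weights))

A smaller value indicates that the agent's learned response model is closer to the true user model on the probed settings, which is how such a metric would quantify the quality of the learned preferences.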

Author(s) : Tanya Ignatenko, Kirill Kondrashov, Marco Cox, Bert de Vries

Links : PDF - Abstract

Code :

Keywords : hearing aid - user preference learning - agent - Bayesian inference - KL divergence
