The goal of this work is to build conversational Question Answering (QA) interfaces for the large body of domain-specific information available in FAQ sites. We present DoQA, a dataset with 2,437 dialogues and 10,917 QA pairs. The dialogues are collected from three Stack Exchange sites using the Wizard of Oz method with crowdsourcing. The results of an existing, strong system show that, thanks to transfer learning from a Wikipedia QA dataset and fine-tuning on a single FAQ domain, it is possible to build high-quality conversational QA systems for FAQs without in-domain training data. The good results carry over into the more challenging IR scenario. In both cases, there is still ample room for improvement, as indicated by the higher human upper bound.
