For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER). To this end, we present a meta-learning algorithm that finds a good model parameter initialization able to adapt quickly to a given test case, and we propose to construct multiple pseudo-NER tasks for meta-training by computing sentence similarities. To further improve the model's generalization ability across different languages, we introduce a masking scheme and augment the loss function with an additional maximum term during meta-training. The results show that our approach significantly outperforms existing state-of-the-art methods across the board.
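For orientation, here is a minimal sketch of how pseudo-NER tasks might be built from sentence similarities, assuming each sentence is represented by a mean-pooled encoder embedding. The helper names (`cosine_sim_matrix`, `build_pseudo_tasks`) and the top-k support-set construction are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def cosine_sim_matrix(embs: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row-normalized sentence embeddings."""
    normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    return normed @ normed.T

def build_pseudo_tasks(embs: np.ndarray, k: int = 4):
    """For each sentence i, treat its k most similar sentences as a
    pseudo-task support set and sentence i itself as the query."""
    sims = cosine_sim_matrix(embs)
    np.fill_diagonal(sims, -np.inf)        # exclude the sentence itself
    tasks = []
    for i in range(len(embs)):
        support = np.argsort(-sims[i])[:k]  # indices of the top-k neighbours
        tasks.append({"support": support.tolist(), "query": i})
    return tasks
```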
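And a hedged sketch of the meta-training loop the abstract describes: first-order MAML-style inner/outer updates over those pseudo-tasks, with random token masking on the support set and a loss that adds the maximum per-task query loss to the mean. `ner_loss`, the task dictionary layout, and all hyperparameters are assumptions made for illustration; see the repository linked below for the authors' real code.

```python
import copy
import torch

def mask_tokens(input_ids: torch.Tensor, mask_id: int, p: float = 0.15) -> torch.Tensor:
    """Randomly replace token ids with [MASK] to reduce reliance on surface
    forms (an illustrative masking scheme, not the paper's exact one)."""
    masked = input_ids.clone()
    noise = torch.rand(input_ids.shape, device=input_ids.device)
    masked[noise < p] = mask_id
    return masked

def meta_train_step(model, tasks, ner_loss, meta_opt,
                    inner_lr=1e-3, mask_id=103, lam=0.2):
    fast_models, query_losses = [], []
    for support, query in tasks:
        # Inner loop: adapt a task-specific copy on the masked support set.
        fast = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        inner_loss = ner_loss(fast,
                              mask_tokens(support["input_ids"], mask_id),
                              support["labels"])
        inner_opt.zero_grad()
        inner_loss.backward()
        inner_opt.step()
        fast.zero_grad()
        # Outer objective: the adapted copy's loss on the query example.
        query_losses.append(ner_loss(fast, query["input_ids"], query["labels"]))
        fast_models.append(fast)

    losses = torch.stack(query_losses)
    # Average query loss augmented with a maximum term, so the
    # hardest pseudo-task also shapes the shared initialization.
    meta_loss = losses.mean() + lam * losses.max()

    # First-order approximation: treat gradients w.r.t. adapted weights as
    # gradients w.r.t. the initialization and apply them to the original model.
    meta_opt.zero_grad()
    meta_loss.backward()
    for fast in fast_models:
        for p, fp in zip(model.parameters(), fast.parameters()):
            if fp.grad is not None:
                p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    meta_opt.step()
    return meta_loss.item()
```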

Links: PDF - Abstract

Code:

https://github.com/microsoft/vert-papers/tree/master/papers/Meta-Cross

Keywords: meta-learning - languages - named entity recognition - results
