Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer

Fine-tuning (FT) pre-trained sentence embedding models on small datasets has been shown to have limitations. Concatenating the embeddings from the pre-trained model with those from a simple sentence embedding model trained only on the target data can improve over the performance of FT for few-sample tasks. To this end, a linear classifier is trained on the combined embeddings. We evaluate on seven small datasets from NLP tasks and show that our approach with end-to-end training outperforms FT with negligible computational overhead. We also show that sophisticated combination techniques like CCA and KCCA do not work as well in practice as concatenation.
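As a rough illustration of the combination step, the sketch below concatenates the two sets of sentence embeddings feature-wise and fits a linear classifier on the result. The file names, array shapes, and the use of scikit-learn's LogisticRegression are assumptions for the sake of a runnable example, not the authors' exact setup (the paper also considers end-to-end training, which is not shown here).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: embeddings of the same training sentences produced by
# (a) the large pre-trained model and (b) a small model trained only on the
# target data. Shapes: (n_samples, dim_a) and (n_samples, dim_b).
pretrained_emb = np.load("pretrained_embeddings.npy")
target_emb = np.load("target_model_embeddings.npy")
labels = np.load("labels.npy")

# Concatenate the two embedding spaces along the feature dimension.
combined = np.concatenate([pretrained_emb, target_emb], axis=1)

# Train a linear classifier on the combined representation.
clf = LogisticRegression(max_iter=1000)
clf.fit(combined, labels)

# At test time, embed new sentences with both models, concatenate in the same
# order, and call clf.predict on the result.
```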

Links: PDF - Abstract

Code:

None

Keywords: ft - trained - sentence - sample - datasets
