Large Dimensional Analysis and Improvement of Multi Task Learning

Multi Task Learning (MTL) efficiently leverages useful information contained in multiple related tasks to help improve the generalization performance of all tasks. This article conducts a large dimensional analysis of a simple but, as we shall see, extremely powerful when carefully tuned, Least Square Support Vector Machine (LSSVM) version of MTL, in the regime where the dimension $p$ of the data and their number $n$ grow large at the same rate. This fine-tuning is fully based on the theoretical analysis and in particular does not require any cross-validation procedure. Besides, the reported performances on real datasets almost systematically outperform much more elaborate and less intuitive state-of-the-art multi-task and transfer learning methods. The results are robust to broad ranges of data distributions, which our present experiments corroborate. The article reports a systematically close match between theoretical and empirical performances on popular datasets, which is strongly suggestive of the applicability of the proposed carefully tuned MTL-LSSVM method to real data.
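For readers unfamiliar with the base learner, the following is a minimal sketch of a standard single-task LSSVM binary classifier with a linear kernel. This is *not* the paper's multi-task variant or its theoretically tuned hyperparameters; it only illustrates the LSSVM mechanism (an equality-constrained least-squares problem whose dual reduces to a single linear system), with the regularization parameter `gamma` chosen arbitrarily here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=1.0):
    """Fit a linear-kernel LSSVM: solve the (n+1)x(n+1) dual linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    K = X @ X.T                       # linear kernel Gram matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                    # equality constraint sum(alpha) = 0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma # regularized kernel block
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return alpha, b

def lssvm_predict(X_train, alpha, b, X_test):
    """Decision: sign of the kernel expansion over training points plus bias."""
    return np.sign(X_test @ X_train.T @ alpha + b)

# Toy usage on linearly separable 2D data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, b = lssvm_fit(X, y, gamma=10.0)
preds = lssvm_predict(X, alpha, b, np.array([[4.0, 4.0], [-4.0, -4.0]]))
```

Unlike the standard SVM, every training point receives a (possibly negative) weight `alpha[i]`, so the solution is dense; the appeal, exploited in the article, is that the closed-form linear-system solution is amenable to large dimensional analysis.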

Links: PDF - Abstract
