One Deep Music Representation to Rule Them All: A comparative analysis of different representation learning strategies

In order to benefit from deep learning in an effective but also efficient manner, deep transfer learning has become a common approach. In this approach, the output of a pre-trained neural network is reused as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g. music audio), the generated deep representation of the data is also informative for the new task. In this paper, we present the results of our investigation into the most important factors for generating deep representations for the data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study involving multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validate these representations on multiple target datasets. The results of our experiments yield several insights into how to approach the design of methods for learning widely deployable deep data representations in the music domain.
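The core idea described above — freezing a pre-trained network and reusing its output as features for a new target task — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual pipeline: the "pretrained" extractor is a stand-in random projection, and the toy data and ridge-regression head are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "pretrained" network: a fixed projection + ReLU,
# standing in for a model trained on the source learning tasks.
W_pre = rng.standard_normal((128, 32))

def deep_representation(x):
    """Reuse the pretrained network's output as features (no weight updates)."""
    return np.maximum(x @ W_pre, 0.0)

# Toy target task: only a small linear head is trained on the frozen features.
X = rng.standard_normal((200, 128))
y = (X[:, 0] > 0).astype(float)  # invented binary labels
feats = deep_representation(X)

# Closed-form ridge-regression head on top of the transferred representation.
lam = 1e-2
A = feats.T @ feats + lam * np.eye(feats.shape[1])
w_head = np.linalg.solve(A, feats.T @ y)
acc = (((feats @ w_head) > 0.5).astype(float) == y).mean()
```

The key property of transfer learning is visible here: only `w_head` (32 parameters) is fit for the new task, while the representation itself stays fixed.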

Links: PDF - Abstract

Code:

https://github.com/eldrin/MTLMusicRepresentation-PyTorch

Keywords: learning - deep - music - data - representations
