The Domain2Vec model provides vectorial representations of visual domains based on joint learning of feature disentanglement and Gram-matrix computation. We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains. To evaluate the effectiveness of our new model, we create two large-scale cross-domain benchmarks: the first, TinyDA, contains 54 domains and about one million MNIST-style images, and the second, DomainBank, is collected from 56 existing vision datasets. Extensive experiments demonstrate the power of the new datasets in benchmarking state-of-the-art multi-source domain adaptation methods, as well as the advantage of our proposed model.
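
To make the Gram-matrix side of the idea concrete, here is a minimal, illustrative sketch (not the paper's actual implementation) of how a domain could be summarized by the Gram matrix of its image features and how two such domain vectors could be compared. It assumes features have already been extracted by some backbone; the function names and dimensions are hypothetical.

```python
import torch
import torch.nn.functional as F

def gram_domain_embedding(features: torch.Tensor) -> torch.Tensor:
    """Compute a flattened Gram-matrix embedding for one domain.

    features: (N, D) tensor of feature vectors extracted from N images of the
    domain (hypothetical; any backbone could produce them). The Gram matrix
    G = F^T F / N captures second-order feature statistics; its flattened
    upper triangle serves as the domain's vector representation.
    """
    n, d = features.shape
    gram = features.t() @ features / n      # (D, D) second-order statistics
    idx = torch.triu_indices(d, d)          # keep the upper triangle only
    return gram[idx[0], idx[1]]             # (D*(D+1)/2,) domain vector

def domain_similarity(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    """Cosine similarity between two domain embeddings."""
    return F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()

# Toy usage: two synthetic "domains" with 64-dimensional features.
feats_a = torch.randn(500, 64)
feats_b = torch.randn(500, 64) + 0.5        # shifted feature distribution
print(domain_similarity(gram_domain_embedding(feats_a),
                        gram_domain_embedding(feats_b)))
```

In the paper the features are additionally disentangled from class-specific information before the Gram statistics are taken; the sketch above omits that step for brevity.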

Links: PDF - Abstract

Code: None

Keywords: domain - model - domains - embedding
