Learning Visual Representations for Transfer Learning by Suppressing Texture

Recent works have shown that features obtained from supervised training of CNNs may over-emphasize texture rather than encoding high-level information. In self-supervised learning, texture as a low-level cue may provide shortcuts that prevent the network from learning higher-level representations. To address these problems, we propose to use classic methods based on anisotropic diffusion to augment training with images in which texture is suppressed. This simple method retains important edge information while suppressing texture. Our method is particularly effective for transfer learning tasks, and we observed improved performance on five standard transfer learning datasets. The large improvements on the Sketch-ImageNet dataset and the DTD dataset, together with additional visual analyses of saliency maps, suggest that our approach helps in learning better representations that transfer well.
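As a rough illustration of the kind of texture-suppressing augmentation described above, the sketch below implements classic Perona-Malik anisotropic diffusion in NumPy. This is not the authors' implementation; the function name and the parameter choices (kappa, gamma, n_iter) are illustrative assumptions, but the update rule is the standard one: a conduction coefficient close to 1 in flat regions smooths texture, while a coefficient near 0 at strong gradients preserves edges.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.1):
    """Perona-Malik anisotropic diffusion on a 2-D grayscale image.

    Smooths fine texture in flat regions while preserving strong edges,
    because the conduction coefficient shrinks where gradients are large.
    kappa controls edge sensitivity, gamma is the diffusion step size.
    """
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        d_n = np.roll(out, -1, axis=0) - out
        d_s = np.roll(out, 1, axis=0) - out
        d_e = np.roll(out, -1, axis=1) - out
        d_w = np.roll(out, 1, axis=1) - out

        # Exponential conduction function: ~1 in flat areas, ~0 at edges.
        c_n = np.exp(-(d_n / kappa) ** 2)
        c_s = np.exp(-(d_s / kappa) ** 2)
        c_e = np.exp(-(d_e / kappa) ** 2)
        c_w = np.exp(-(d_w / kappa) ** 2)

        # Explicit diffusion update.
        out += gamma * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
    return out
```

In an augmentation pipeline such a filter would typically be applied per channel to an RGB image, producing a texture-suppressed view that is fed to the network alongside (or instead of) the original image.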

Links: PDF - Abstract

Code: None

Keywords: learning - texture - transfer - level - representations
