Traditional and deep learning-based fusion methods generate an intermediate decision map and obtain the fusion image through a series of post-processing procedures. However, the fusion results generated by these methods tend to lose some source image details or produce artifacts. Inspired by deep learning-based image reconstruction techniques, we propose a multi-focus image fusion network framework without any post-processing to solve these problems in an end-to-end and supervised learning way. To sufficiently train the fusion model, we have generated a large-scale multi-focus image dataset with ground-truth fusion images. What's more, we further designed a novel fusion strategy based on unity fusion attention, which is composed of a channel attention module and a spatial attention module. We first utilize seven convolutional blocks to extract the image features from the source images. Then, the extracted convolutional features are fused by the proposed fusion strategy. Finally, the fused image features are reconstructed by convolutional blocks to obtain the final fusion image.
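The abstract does not give implementation details, but a minimal PyTorch sketch of a fusion layer that combines a channel attention module and a spatial attention module, in the spirit of the unity fusion attention described above, might look like the following. The class name `UnityFusionAttention`, the reduction factor, and the element-wise sum used to merge the two attended feature maps are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a channel + spatial attention fusion layer.
import torch
import torch.nn as nn


class UnityFusionAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: global average pooling followed by a small bottleneck MLP.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 convolution over pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def attend(self, feat: torch.Tensor) -> torch.Tensor:
        # Re-weight the feature map along the channel axis, then the spatial axes.
        feat = feat * self.channel_gate(feat)
        avg_map = feat.mean(dim=1, keepdim=True)
        max_map = feat.amax(dim=1, keepdim=True)
        return feat * self.spatial_gate(torch.cat([avg_map, max_map], dim=1))

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Fuse the attended features extracted from the two source images.
        return self.attend(feat_a) + self.attend(feat_b)


if __name__ == "__main__":
    fuse = UnityFusionAttention(channels=64)
    a = torch.rand(1, 64, 128, 128)
    b = torch.rand(1, 64, 128, 128)
    print(fuse(a, b).shape)  # torch.Size([1, 64, 128, 128])
```

The fused feature map would then be passed to a reconstruction sub-network to produce the final all-in-focus image.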

Author(s) : Yongsheng Zang, Dongming Zhou, Changcheng Wang, Rencan Nie, Yanbu Guo

Links : PDF - Abstract

Code :

https://github.com/oktantod/RoboND-DeepLearning-Project



Keywords : fusion - image - deep - learning - based
