Global and Local-Contrast Guides Content-Aware Fusion for RGB-D Saliency Prediction
Abstract:

Many RGB-D visual attention models with diverse fusion schemes have been proposed; the main challenge lies in the inconsistent results produced by these different fusion strategies. To address this challenge, we propose a local-global fusion model for fixation prediction on RGB-D images; the method combines global and local information through a content-aware fusion module (CAFM). First, the model comprises a channel-based upsampling block that exploits global contextual information and scales it up to the same resolution as the input. Second, our Deconv block contains a contrast feature module that exploits multilevel local features stage by stage for a superior local feature representation. Experimental results demonstrate that the proposed model achieves competitive performance on two databases.
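The global/local fusion idea described above can be sketched in NumPy. This is a minimal illustrative sketch, not the paper's implementation: the function names, the choice of global average pooling with broadcasting as a stand-in for the channel-based upsampling block, the center-surround subtraction as a stand-in for the contrast feature module, and the softmax weighting scheme are all assumptions made for illustration.

```python
import numpy as np

def global_context_branch(feat):
    """Hypothetical stand-in for the channel-based upsampling block:
    channel-wise global average pooling, then broadcast ('upsample')
    the global descriptor back to the input resolution."""
    # feat: (C, H, W)
    g = feat.mean(axis=(1, 2), keepdims=True)   # (C, 1, 1) global descriptor
    return np.broadcast_to(g, feat.shape)       # same resolution as the input

def local_contrast_branch(feat, k=3):
    """Hypothetical stand-in for the contrast feature module:
    center-surround contrast, i.e. each location minus the mean
    of its k x k neighborhood."""
    C, H, W = feat.shape
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    surround = np.zeros_like(feat)
    for dy in range(k):                          # accumulate the k x k window
        for dx in range(k):
            surround += padded[:, dy:dy + H, dx:dx + W]
    surround /= k * k                            # neighborhood mean
    return feat - surround                       # local contrast

def content_aware_fusion(feat):
    """Fuse the two branches with per-location softmax weights derived
    from the features themselves (a simple 'content-aware' weighting)."""
    g = global_context_branch(feat)
    l = local_contrast_branch(feat)
    # per-location score for each branch: mean activation over channels
    scores = np.stack([g.mean(axis=0), l.mean(axis=0)])        # (2, H, W)
    w = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    return w[0] * g + w[1] * l                                 # (C, H, W)
```

The sketch preserves the two properties stated in the abstract: the global branch is scaled up to the input resolution, and the fused map blends global context with local contrast at every spatial location.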