Towards Robust Referring Image Segmentation
Jianzong Wu
Xiangtai Li
Xia Li
Henghui Ding
Yunhai Tong
Dacheng Tao
[Paper]
[GitHub]

Abstract

Referring Image Segmentation (RIS) is a fundamental vision-language task that connects image and language by outputting the object mask corresponding to a given text description. Although many works have made considerable progress on RIS, in this work we explore an essential question: what if the text description is wrong or misleading? We call such a sentence a negative sentence and find that existing methods cannot handle this setting. To this end, we propose a novel formulation of RIS, named Robust Referring Image Segmentation (R-RIS), which considers negative sentence inputs in addition to the regular text inputs. We present three datasets built by augmenting the inputs with negative sentences, along with a new metric that unifies both input types. Furthermore, we design a new transformer-based model named RefSegformer, in which we introduce a token-based vision-and-language fusion module. This module can be easily extended to the R-RIS setting by adding extra blank tokens. Our proposed RefSegformer achieves new state-of-the-art results on three regular RIS datasets and three R-RIS datasets, serving as a solid baseline for further research.


[Slides]

Negative Sentence Generation Process

We propose five negative sentence generation methods. More details can be found in the paper.
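Below is a minimal illustrative sketch of one plausible way to build a negative sentence: replacing the head noun of a referring expression with an object category that does not appear in the image, so the resulting sentence no longer refers to anything present. The category list, the `make_negative_sentence` helper, and the swap rule are assumptions for illustration, not the paper's exact generation procedure.

```python
import random

# Hypothetical example: build a negative sentence by replacing the head noun
# of a referring expression with an object category absent from the image.
# This is an illustrative sketch, not the paper's exact pipeline.

CATEGORIES = ["dog", "cat", "car", "bicycle", "pizza", "umbrella"]

def make_negative_sentence(expression: str, head_noun: str, image_categories: set) -> str:
    """Swap `head_noun` in `expression` for a category not present in the image."""
    candidates = [c for c in CATEGORIES if c not in image_categories]
    replacement = random.choice(candidates)
    return expression.replace(head_noun, replacement, 1)

# The original sentence refers to a dog; the swapped sentence describes an
# object that is not in the image, so no mask should be predicted for it.
print(make_negative_sentence("the brown dog on the left", "dog", {"dog", "person"}))
```

The actual R-RIS datasets combine five such generation methods; see the paper for their definitions.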

Method: RefSegformer

We present RefSegformer, a transformer-based approach consisting of a language encoder, a vision encoder, and an encoder-fusion meta-architecture. In particular, we introduce a new multi-modal fusion module named Vision-Language Token Fusion (VLTF). Rather than directly fusing vision and language features, VLTF uses memory tokens to dynamically select the most relevant language information; the selected language context is then fused into the vision features with Multi-Head Cross Attention (MHCA). The final segmentation is produced by an FPN-like decoder that takes the VLTF outputs from each stage. For the R-RIS setting, we add blank tokens that indicate whether the described object is present in the image. The blank tokens are not fused into the vision features via MHCA but are directly supervised with a binary cross-entropy loss. RefSegformer achieves new state-of-the-art results on both the regular RIS datasets and our R-RIS datasets, and extensive experiments and analyses demonstrate its effectiveness.
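The sketch below illustrates, in PyTorch, the general shape of a token-based fusion block as described above: memory tokens gather language context, that context is fused into the vision features with multi-head cross attention, and blank tokens bypass the vision fusion and feed a binary classifier. The layer sizes, token counts, and attention ordering here are assumptions for illustration; the actual RefSegformer implementation is in the GitHub repository.

```python
import torch
import torch.nn as nn

class TokenFusionBlock(nn.Module):
    """Minimal sketch of a token-based vision-language fusion block (not the official code)."""

    def __init__(self, dim=256, num_heads=8, num_memory=20, num_blank=1):
        super().__init__()
        self.memory_tokens = nn.Parameter(torch.randn(num_memory, dim))
        self.blank_tokens = nn.Parameter(torch.randn(num_blank, dim))
        # tokens attend to language features to gather relevant context
        self.token_to_lang = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # vision features attend to the memory tokens (cross attention)
        self.vision_to_token = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # blank tokens are scored to predict whether the target is absent
        self.blank_head = nn.Linear(dim, 1)

    def forward(self, vision_feat, lang_feat):
        # vision_feat: (B, HW, C); lang_feat: (B, L, C)
        B = vision_feat.size(0)
        mem = self.memory_tokens.unsqueeze(0).expand(B, -1, -1)
        blank = self.blank_tokens.unsqueeze(0).expand(B, -1, -1)
        tokens = torch.cat([mem, blank], dim=1)
        # 1) tokens select the most relevant language information
        tokens, _ = self.token_to_lang(tokens, lang_feat, lang_feat)
        mem, blank = tokens[:, : mem.size(1)], tokens[:, mem.size(1) :]
        # 2) only the memory tokens are fused into the vision features
        fused, _ = self.vision_to_token(vision_feat, mem, mem)
        # 3) blank tokens predict target absence, trained with a BCE loss
        blank_logit = self.blank_head(blank).mean(dim=1)  # (B, 1)
        return vision_feat + fused, blank_logit
```

At inference, a sigmoid over the blank logit can be thresholded to decide whether the sentence is negative, in which case an empty mask is returned.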

Visual Results: RefSegformer

Our method handles both positive and negative sentences well: it segments the referred object for positive sentences and predicts no mask for negative ones.
[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.