MDPENet: Multimodal-driven prototype evolving network for few-shot semantic segmentation


Abstract

Few-shot Semantic Segmentation (FSS) aims to predict the masks of unseen targets from only a few labeled samples. Prototype learning is a commonly used approach in FSS: it transfers prototype vectors from known categories (support images) to novel categories (query images) to predict the masks of unseen objects. Although such methods have achieved success, prototype-based FSS methods still suffer from prototype bias and insufficient utilization of the limited available information. In this work, we propose a Multimodal-Driven Prototype Evolving Network (MDPENet) to alleviate these problems. Our method consists of three main components: the Support Feature Enhancement Module (SFEM), the Query Feature Disentanglement Module (QFDM), and the Prototype Evolution Module (PEM). Concretely, the SFEM first establishes multimodal feature interaction between the text label features encoded by CLIP and the separated support foreground features, improving the reliability of the support foreground features. Then, the QFDM combines the CLIP-encoded text label features with the support foreground features to disentangle the whole query features, which reduces the mutual interference between different semantics within the query features. Finally, the PEM generates a fine-grained prototype set from the enhanced support foreground features and the disentangled query foreground features. Extensive experiments on the benchmark datasets PASCAL-5^i and COCO-20^i demonstrate the superiority of our MDPENet over classical FSS methods.
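For readers who prefer a concrete view of the pipeline, the following is a minimal PyTorch-style sketch of how the three modules could be wired together: CLIP text features enhance the support foreground (SFEM), text and support cues disentangle the query features (QFDM), and a prototype set is derived from both foreground streams (PEM). The module internals, tensor shapes, and attention/gating designs shown here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the MDPENet pipeline described in the abstract.
# All module internals are illustrative assumptions; the actual SFEM/QFDM/PEM
# designs are defined in the paper, not here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SFEM(nn.Module):
    """Support Feature Enhancement: fuse CLIP text features into the
    masked support foreground tokens via cross-attention (assumed design)."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, support_fg, text_feat):
        # support_fg: (B, N, C) foreground tokens; text_feat: (B, 1, C)
        enhanced, _ = self.attn(support_fg, text_feat, text_feat)
        return support_fg + enhanced


class QFDM(nn.Module):
    """Query Feature Disentanglement: split query features into foreground /
    background parts using text and support-foreground cues (assumed design)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, query_feat, support_fg, text_feat):
        # query_feat: (B, HW, C); score each query token by its similarity
        # to the pooled support-foreground cue, gated by the text cue.
        cue = torch.cat([support_fg.mean(1), text_feat.mean(1)], dim=-1)   # (B, 2C)
        gate = torch.sigmoid(self.gate(cue))                               # (B, 1)
        sim = F.cosine_similarity(query_feat,
                                  support_fg.mean(1, keepdim=True), dim=-1)  # (B, HW)
        fg_mask = (gate * sim.sigmoid()).unsqueeze(-1)                     # (B, HW, 1)
        return query_feat * fg_mask, query_feat * (1 - fg_mask)


class PEM(nn.Module):
    """Prototype Evolution: derive a small prototype set from the enhanced
    support foreground and the disentangled query foreground (assumed design)."""
    def __init__(self, dim, num_prototypes=5):
        super().__init__()
        self.proto_queries = nn.Parameter(torch.randn(num_prototypes, dim))

    def forward(self, support_fg, query_fg):
        tokens = torch.cat([support_fg, query_fg], dim=1)                  # (B, N+HW, C)
        attn = torch.softmax(self.proto_queries @ tokens.transpose(1, 2), dim=-1)
        return attn @ tokens                                               # (B, P, C)


if __name__ == "__main__":
    B, N, HW, C = 2, 16, 64, 256
    support_fg = torch.randn(B, N, C)    # masked support foreground tokens
    query_feat = torch.randn(B, HW, C)   # flattened query feature map
    text_feat = torch.randn(B, 1, C)     # CLIP text embedding of the class label

    sfem, qfdm, pem = SFEM(C), QFDM(C), PEM(C)
    support_fg = sfem(support_fg, text_feat)
    query_fg, _ = qfdm(query_feat, support_fg, text_feat)
    prototypes = pem(support_fg, query_fg)
    print(prototypes.shape)              # torch.Size([2, 5, 256])
```

In this sketch the prototype set would then be matched against the query feature map to produce the final mask prediction; that matching step is omitted here.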