Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on December 21st, 2022 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on April 12th, 2023.
  • The first revision was submitted on May 11th, 2023 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on July 18th, 2023 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on August 20th, 2023.

Version 0.3 (accepted)

· Aug 20, 2023 · Academic Editor

Accept

From the submitted revision and the accompanying rebuttal letter, I am satisfied with the authors' efforts to address the reviewer comments.

[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Section Editor covering this Section #]

Version 0.2

· Jul 7, 2023 · Academic Editor

Minor Revisions

It would be beneficial if the authors could take a clear stance regarding the CSD method proposed in Feng (2021) and compare their proposed approach with this similar CSD-based method. See the comment from Reviewer 2.

Reviewer 1 ·

Basic reporting

The manuscript has been greatly improved. I have no more comments.

Experimental design

The manuscript has been greatly improved. I have no more comments.

Validity of the findings

The manuscript has been greatly improved. I have no more comments.

Additional comments

The manuscript has been greatly improved. I have no more comments.

Reviewer 2 ·

Basic reporting

The authors have provided responses to all of my inquiries.

There are still some English writing errors.

Experimental design

no comment

Validity of the findings

Not very original, as CSD has been investigated in many papers.

Additional comments

Category-wise Similarity Distillation (CSD) was also proposed by Feng (2021) in "Double Similarity Distillation for Semantic Image Segmentation". The authors should compare with and cite this paper.

There are still some English writing errors (e.g. "given a object detector" should be "given an object detector").

Version 0.1 (original submission)

· Apr 12, 2023 · Academic Editor

Major Revisions

Please make sure that the revised manuscript addresses the reviewer comments.

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

The paper proposes an incremental learning method for remote sensing image target recognition, which is of great practical value. However, the English writing in the paper is very poor, which significantly hinders understanding. In addition, the paper contains many non-standard elements, such as improper referencing of the literature, unnumbered equations, and unexplained symbols in the equations.

I suggest that the authors should thoroughly rewrite the entire paper, or even use ChatGPT to perform language polishing for the paper.

Experimental design

The comparative experiments are not sufficient, and it is recommended to compare with the latest methods.

In addition, the dataset used should be properly cited with the corresponding references.

Validity of the findings

The proposed method is somewhat innovative.

Reviewer 2 ·

Basic reporting

The equations are not referenced.
Some terms need to be defined.
There are ambiguities in the notation (e.g. i, C, ...).
(See the Additional comments section below.)

Experimental design

Not very original: distillation of class information using cosine similarity from exemplars was already proposed in
Hou et al., "Learning a Unified Classifier Incrementally via Rebalancing", CVPR 2019.

Validity of the findings

no comment

Additional comments

This paper, entitled "Class Incremental Learning of Remote Sensing Images Based on Class Similarity Distillation", presents a method for incremental object detection. The algorithm, which introduces similarity information, uses RPN as a baseline.
The document has several serious shortcomings, which I identify in the following points:
Q1: In line 200, Z is defined as R^{HWC}, whereas in line 204 M_t is denoted as R^{HxWC}. What is the difference between the two notations HxWC and HWC?

Q2: Why does Equation 1 not refer to the number of channels C?

Q3: The denominator [i:y_i:c] should be clearly defined; the notation is not explicit.

Q4: "i" is used everywhere, both as a pixel index and as a batch index, which makes the process more difficult to follow.

Q5: Each equation referred to in the text should be identified by its corresponding number. Again, this complicates understanding for readers.

Q6: In line 215, "new" is duplicated?

Q7: The approach is based on the RPN, which means that joint training of the FPN-il and the CSD should be similar in terms of AA. However, in Table 2, the AA is identical for BC, ST, and TR, yet always better for SBF, Harbour, SP, and HC. A clear argument should be given to clarify these cases.

Q8: In line 31, Fig. 1 should be Fig. 5!

Q9: In line 301, what do the SBF and BD classes stand for?

Q10: In line 360, the citation guidelines should be removed!

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.