Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on September 26th, 2024 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on January 17th, 2025.
  • The first revision was submitted on February 21st, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on April 8th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on April 29th, 2025.

Version 0.3 (accepted)

· Apr 29, 2025 · Academic Editor

Accept

The authors have adequately addressed the points raised by the reviewers, and I can therefore recommend this article for acceptance.

[# PeerJ Staff Note - this decision was reviewed and approved by Mehmet Cunkas, a PeerJ Section Editor covering this Section #]

·

Basic reporting

The authors have diligently addressed all previous review comments, resulting in a significantly improved manuscript. I think the revised paper meets the journal's standards. The authors' thorough efforts in revision and detailed response to feedback are appreciated. I am satisfied with the current version and recommend acceptance.

Experimental design

N/A

Validity of the findings

N/A

Additional comments

N/A

Version 0.2

· Mar 28, 2025 · Academic Editor

Minor Revisions

Please address the remaining reviewer comments.

·

Basic reporting

Overall, I am generally satisfied with the authors' revisions, and the paper has shown significant improvement. However, there are still a few minor issues that need to be addressed. I hope these revisions will further strengthen the paper.

1. The brief description of social disputes is necessary, but the current section is too lengthy and presented in a glossary-like format. I suggest condensing it into a short paragraph.

2. The "related works" now includes a more focused discussion on meta-learning-based few-shot text classification. However, the cited works are all from papers before 2018. I recommend adding recent studies from 2020 onwards, preferably within the last three years.

Experimental design

3. I am curious why the authors chose the WordNet-based method for computing $w_{ij}^*$ on page 7, as this approach is somewhat old-fashioned. For simple word-similarity calculations, most researchers now prefer word embedding-based methods. Have the authors compared the performance and impact of these two approaches on the results?
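To make the suggested comparison concrete, here is a minimal sketch (not the authors' code) contrasting the two approaches on a single word pair. It assumes NLTK with the WordNet corpus downloaded; the embedding table `emb` is a toy stand-in for real pre-trained vectors such as fastText.

```python
# Minimal sketch contrasting WordNet-based and embedding-based word similarity.
# Assumes the NLTK WordNet corpus is available; `emb` is a hypothetical lookup.
import numpy as np
from nltk.corpus import wordnet as wn

def wordnet_similarity(w1, w2):
    """Highest path similarity over all synset pairs (0 if none found)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def embedding_similarity(w1, w2, emb):
    """Cosine similarity between pre-trained word vectors."""
    v1, v2 = emb[w1], emb[w2]
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Toy usage: replace the random vectors with real fastText embeddings.
emb = {"dispute": np.random.rand(300), "conflict": np.random.rand(300)}
print(wordnet_similarity("dispute", "conflict"))
print(embedding_similarity("dispute", "conflict", emb))
```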

Validity of the findings

4. It would be helpful to bold the best results in the tables for clarity.

5. Regarding the optimal number of Attention Heads, I suggest using a graph to illustrate the relationship between the number of heads and accuracy, rather than just describing it in text.

·

Basic reporting

The paper is clear and unambiguous throughout. The introduction clearly indicates the purpose of the work. The literature survey is well referenced and relevant to the proposed work. The structure of the paper conforms to standards. The definitions, equations, and formulas are clear and well modelled.

Experimental design

The article is within the aims and scope of the journal. The methods described provide sufficient detail. All the changes proposed by the evaluators have been clearly incorporated in the revised article.
The updated version of the dataset used includes additional details about the private dataset, such as its source, scale, and example categories.
The revised manuscript now includes a detailed explanation of why four attention heads were chosen, supported by systematic experiments and k-fold cross-validation.
The revised version addresses why the model underperforms on this dataset, citing noisy text and feature extraction limitations. Future improvements, such as optimizing feature extraction and enhancing the attention mechanism, have been proposed.
Initially, only accuracy was reported. The revision now incorporates F1-score, recall, and precision to provide a more comprehensive performance assessment.

Validity of the findings

The findings proposed by the authors are genuine. The work done by the authors is properly justified by the obtained results.
As all forms of performance evaluation are considered, the experiments and evaluations are satisfactory. The conclusions, however, do not propose future directions.

Additional comments

The changes proposed by the evaluators have been incorporated in the revised article.

Version 0.1 (original submission)

· Jan 17, 2025 · Academic Editor

Major Revisions

Please respond to the comments from the reviewers in an appropriate revision.

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services if you wish - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Your revision deadline is always extended while you undergo language editing. #]

·

Basic reporting

This paper proposes a Meta-Learning Siamese Network based on multi-head attention. From the paper, it is clear that this work builds on Meta-SN (Han, 2023), primarily by adding multi-head attention and synonym replacement. The results of the proposed model demonstrate significant improvements across several public datasets. However, there are substantial issues in writing that need to be addressed.

1. The paper contains numerous grammatical and formatting errors. The authors need to proofread the manuscript to address these issues. Below are a few errors identified during the review process:
1) Punctuation marks should be followed by a space.
2) Please ensure the proper appearance of citations in the paper. Currently, references in the text lack parentheses and cannot be distinguished from the main text.
3) The last word in line 52 extends beyond the margin.
4) Figures 1 and 2 have insufficient resolution, and Figure 2 is oversized. The sizes of Figures 3 and 4 could also be reduced, perhaps aligned side by side.
5) Mathematical formulas should be center-aligned.
6) In line 12, the first word should be "has." Also, "when" following "1)" should be capitalized to be consistent with "2)" and "3)."
7) The long sentence starting with "However" in line 27 is not smooth.
8) The subject in line 169 should not be "It."
9) Line 172 contains garbled characters.
10) In the paragraph under line 191, "in this thesis" should be replaced with "in this paper."
11) In the paragraph under line 222, what does "repertoire" mean in this context? Additionally, the phrase "mapping mapping" is incorrect.
12) The sentence starting with "The model..." in line 297 has grammatical issues.
13) In the references, there are two identical entries for Han (2021). Additionally, many references are early versions from arXiv. Have these papers been officially published?

2. The authors claim their contribution is, "This paper applies the few-shot classification technology to the field of social disputes for the first time." However, the paper does not provide a brief introduction to the domain of social disputes, nor does it give related references. Although the authors briefly describe the features of social dispute texts, readers remain confused about the nature of these texts. It is recommended to include examples or add references. Incidentally, the title uses the term "Conflict Disputes." Is this the same as "social disputes"?

3. The authors should reorganize and enrich the "Related Work" section. The current discussion on meta-learning is overly broad, while the description of few-shot text classification is fragmented and lacks sufficient references. The review section would be improved if the authors focused on few-shot text classification methods based on meta-learning.

Experimental design

4. The authors have shared partial code snippets, which appear to be modifications of the open-source Meta-SN code. Based on the methodological descriptions in the paper, it seems that the functionality of the unpublished code aligns closely with Meta-SN. The authors are encouraged to consider releasing the full source code to enhance the credibility and reproducibility of their work.

5. While the authors do not publicly release their private social dispute dataset, they could provide a more detailed description of the dataset, such as the number of samples and categories. Including example fragments, even anonymized ones, would also be helpful.

6. One of the claimed contributions of the paper is synonym replacement. However, the descriptions of synonym replacement, which appear in several sections, remain unclear. Is synonym replacement applied to class labels or words in the text? How is $w_{ij}^*$ in Equation 5 obtained?

7. Meta-SN uses fastText and BERT as pre-trained models. Why did the authors not use BERT in their experiments? From a computational resource perspective, using a frozen BERT does not appear to be a significant challenge.
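For context on the computational point, a frozen BERT encoder can be used as a fixed feature extractor at modest cost. The following is a minimal sketch using the Hugging Face transformers library; the checkpoint name and mean-pooling choice are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of using a frozen BERT encoder as a fixed feature extractor.
# The checkpoint name and mean-pooling choice are illustrative assumptions.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():
    p.requires_grad = False          # freeze: no gradients, no fine-tuning cost
encoder.eval()

@torch.no_grad()
def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state      # (batch, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)      # mean-pooled sentence vectors

vectors = encode(["a neighbourhood noise complaint", "a property-line dispute"])
print(vectors.shape)   # torch.Size([2, 768])
```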

Validity of the findings

8. In the evaluation results presented in Han (2023), the datasets RCV1 and FewRel were included. Given that the baseline models used by the authors are identical to those in Han (2023), why did the authors choose not to include results for these two datasets?

9. The evaluation on the private dataset is overly simplistic. The authors should at least compare the results of Meta-SN and MASM on this dataset to strengthen the paper's credibility.

10. In the ablation study, the authors compare the performance differences with varying numbers of attention heads. What would the results be with more heads, such as 8 or 16, as in transformers? Why is 4 deemed optimal? Furthermore, the title and content of Table 3 are inconsistent, as the table does not include the effects of synonym replacement.
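As an aside on feasibility, ablating larger head counts is straightforward as long as the embedding dimension is divisible by the number of heads. The sketch below uses PyTorch's nn.MultiheadAttention with a hypothetical 256-dimensional embedding; it illustrates the constraint and is not the authors' model.

```python
# Minimal sketch of an attention-head ablation; the dimensions are hypothetical.
import torch
import torch.nn as nn

embed_dim, seq_len, batch = 256, 32, 4   # embed_dim must be divisible by num_heads
x = torch.randn(batch, seq_len, embed_dim)

for num_heads in (1, 2, 4, 8, 16):
    attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
    out, weights = attn(x, x, x)          # self-attention over the sequence
    print(num_heads, out.shape)           # (batch, seq_len, embed_dim) for every head count
```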

·

Basic reporting

Clear and unambiguous, professional English used throughout.
Intro & background to show context. Literature well referenced & relevant.
Structure conforms to PeerJ standards, discipline norms.
The introduction adequately introduces the subject, and the motivations mentioned in the paper are clear.
Formal results are clear and justified accordingly.

Experimental design

The use of synonym substitution and multi-head attention mechanisms addresses key limitations of existing Siamese networks, but the explanation of hyperparameter selection (e.g., the number of attention heads) and its impact on results could be more rigorous.
The model is evaluated only on accuracy. Results for other measures, such as F1-score, recall, and precision, should also be shown (see the sketch below).
The dataset used is not clearly described. The authors should give a brief introduction to the private dataset.
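A minimal sketch of how the additional metrics could be reported alongside accuracy, using scikit-learn; the label arrays are placeholders, not results from the paper.

```python
# Minimal sketch: reporting accuracy, precision, recall, and F1 together.
# y_true / y_pred are placeholder arrays, not results from the paper.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 2, 1, 1, 0, 0]

acc = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={acc:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```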

Validity of the findings

Conclusions are well stated & limited to supporting results.
There is a well-developed and supported argument that meets the goals set out in the Introduction.
However, the slightly lower performance on the 1-shot Amazon dataset compared to Meta-SN warrants further discussion. Is it due to dataset-specific characteristics or inherent limitations of the model?

Additional comments

NIL

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.