Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on March 31st, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 21st, 2025.
  • The first revision was submitted on August 26th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 1st, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

The paper may be accepted.

[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]

Reviewer 1

Basic reporting

The paper is well structured with a logical flow, and the authors have addressed the concerns.
The language has improved, although some long sentences could still be simplified.
The figures and tables are presented well, but the captions should be more descriptive.
The authors have addressed my concerns in this section.

Experimental design

In this revision, the authors have improved the following:
1. The research question is well defined.
2. The methodology is explained in good detail and illustrated with examples.
3. The datasets are discussed clearly.
4. A comparative analysis with baseline studies has been added.

Validity of the findings

In the revised version, the results are statistically sound and consistent across multiple datasets. The t-test validation is properly explained. The limitations and future work are now included.
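
For readers unfamiliar with the check being referred to, a paired t-test over per-dataset scores is the usual form of this validation. A minimal sketch, assuming hypothetical F1 values and SciPy; this is illustrative, not the authors' actual code or numbers:

```python
from scipy import stats

# Hypothetical per-dataset F1 scores; illustrative only, not from the paper.
proposed = [0.84, 0.81, 0.88, 0.79, 0.86]
baseline = [0.80, 0.78, 0.85, 0.77, 0.82]

# Paired t-test: both methods are scored on the same datasets,
# so the samples are paired rather than independent.
t_stat, p_value = stats.ttest_rel(proposed, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 suggests a real difference
```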

Additional comments

The revised version is improved and suitable for publication. The authors have addressed and responded to the concerns raised in the previous version.

Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** PeerJ staff have identified that the English language needs to be improved. When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at [email protected] for pricing (be sure to provide your manuscript number and title). – PeerJ Staff

Reviewer 1

Basic reporting

The paper clearly explains the gap in feature extraction from customer reviews and addresses it in a step-by-step way, with good coverage of existing methods.

However, the paper can be improved by considering the following suggestions:

1. Add comparison with recent deep learning models.
2. Include statistical significance testing.
3. Explain the pruning strategies more clearly with theoretical support.
4. Discuss how the method can handle implicit features or complex sentiments.

Experimental design

The following points are suggested for improving the experimental design:

1. Although the paper reports improved performance metrics, it does not include statistical significance testing. It is recommended to perform statistical tests to validate the improvements.
2. Include confidence intervals or p-values to support the reported performance results (see the sketch after this list for one common way to obtain them).
3. It would strengthen the paper if experimental graphs or case studies were provided to show how the pruning strategies affect the final feature set size and extraction accuracy.
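
One common way to produce the confidence intervals requested above is a bootstrap over paired score differences. A minimal sketch with NumPy, using hypothetical numbers rather than the paper's results:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical paired F1 scores; illustrative only.
proposed = np.array([0.84, 0.81, 0.88, 0.79, 0.86])
baseline = np.array([0.80, 0.78, 0.85, 0.77, 0.82])
diffs = proposed - baseline

# Resample the paired differences with replacement and record each mean.
boot_means = [rng.choice(diffs, size=diffs.size, replace=True).mean()
              for _ in range(10_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the F1 improvement: [{lo:.3f}, {hi:.3f}]")
# An interval that excludes 0 supports a genuine improvement.
```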

Validity of the findings

The paper verifies performance only through standard metrics (precision, recall, F1-score) but does not validate against modern baseline methods such as transformer-based models. Comparisons against recent deep learning models would help establish the validity of the proposed approach in the current research context.
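
For concreteness, in feature-extraction work these standard metrics are typically computed over the sets of gold and extracted features. A minimal sketch with hypothetical feature sets:

```python
# Hypothetical gold-standard and extracted feature sets for one product.
gold = {"battery", "screen", "camera", "price"}
predicted = {"battery", "screen", "price", "warranty"}

tp = len(gold & predicted)              # correctly extracted features
precision = tp / len(predicted)         # fraction of extractions that are correct
recall = tp / len(gold)                 # fraction of gold features recovered
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f}  R={recall:.2f}  F1={f1:.2f}")  # P=0.75 R=0.75 F1=0.75
```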

Reviewer 2

Basic reporting

Formal results should include clear definitions of all terms and theorems, and detailed proofs.

Experimental design

No comment

Validity of the findings

No comments

Additional comments

The paper is well structured, but I have the following observations:

1. The pre-processing phase should be elaborated by explaining the tools used.
2. The working of the "Enhanced Heuristics Pattern-based Algorithm" should be explained in more detail for better understanding.
3. Details of the "Opinion Lexicon" are missing. The authors should state whether it is self-created or downloaded (a minimal illustration of points 1 and 3 follows these comments).
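
To illustrate the level of detail being asked for in points 1 and 3, here is a minimal sketch assuming NLTK as the pre-processing tool and the downloaded Hu & Liu opinion lexicon; the paper's actual tools and lexicon may differ:

```python
import nltk
from nltk.corpus import opinion_lexicon

# One-time downloads; resource names valid for classic NLTK versions.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("opinion_lexicon")

review = "The battery life is great but the screen is dim."
tokens = nltk.word_tokenize(review)   # tokenisation
tagged = nltk.pos_tag(tokens)         # POS tags drive noun/adjective patterns

# Hu & Liu (2004) lexicon shipped with NLTK: ~2,000 positive, ~4,800 negative words.
positive = set(opinion_lexicon.positive())
negative = set(opinion_lexicon.negative())
print([(w, t) for w, t in tagged if w.lower() in positive | negative])
```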

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.