Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on June 2nd, 2023 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on August 24th, 2023.
  • The first revision was submitted on September 21st, 2023 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on December 13th, 2023 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on December 26th, 2023.

Version 0.3 (accepted)

· Dec 26, 2023 · Academic Editor

Accept

The manuscript has been revised to address the concerns raised. Based on the revision and the reviewers' feedback, the manuscript can be accepted for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]

·

Basic reporting

I concur with the current version of the manuscript, as it comprehensively meets all the publication requirements for PeerJ Computer Science.

Experimental design

The authors have effectively elucidated an essential experimental design that aligns well with the publication standards of PeerJ Computer Science.

Validity of the findings

The findings presented in this paper demonstrate robust validity and contribute significantly to our understanding, making them a valuable addition to PeerJ Computer Science.


Version 0.2

· Nov 8, 2023 · Academic Editor

Minor Revisions

- The authors provided rebuttals to the concerns raised by the reviewers, but it seems many of these rebuttals were not incorporated into the manuscript, for example the concerns regarding comparison with deep-learning systems and experimenting on more datasets (among others). Readers will have the same questions, and it is not enough to provide answers in the rebuttal letter alone. The manuscript should include such discussions in the appropriate sections for the readers. Please make sure that these responses are appropriately incorporated throughout the manuscript.

- The article mentions using 5 datasets, but the improvements in the performance measures are reported as single values (e.g., “Testing on five datasets showed that the incorporation of this study's 16 new rules significantly improved aspect extraction precision by 6%, recall by 6% and F-measure value by 5% compared to the contemporary approach”). This is not clear; perhaps there is a range for each measure across all the datasets.

- It might be useful to apply the complete process (including preprocessing, POS tagging, feature extraction and evaluation) to a couple of samples and add this as an illustration in an appropriate section of the manuscript (see the sketch after this list for the kind of walkthrough intended).

- Also, please look at the comments from Reviewer 3; if they are not already addressed (points 1 and 3 appear outstanding), please address them, including updating the manuscript if needed.
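To make the suggested illustration concrete, the following is a minimal, hypothetical sketch (assuming NLTK, which may differ from the authors' actual toolchain) of a per-sample walkthrough covering preprocessing, POS tagging and a simple extraction step. The adjective-noun rule shown is only a placeholder, not one of the manuscript's sequential pattern rules.

```python
# Hypothetical per-sample walkthrough: preprocessing, POS tagging and a
# simple extraction step applied to one example review sentence.
# The adjective-noun rule below is a placeholder, not one of the
# manuscript's sequential pattern rules.
import nltk

# Resource names may vary slightly across NLTK versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

review = "The camera takes great pictures and has a long battery life."

# Preprocessing: lowercase and tokenise the sentence.
tokens = nltk.word_tokenize(review.lower())

# POS tagging with the Penn Treebank tag set (JJ, NN, NNS, ...).
tagged = nltk.pos_tag(tokens)
print(tagged)

# Illustrative rule: an adjective (JJ*) immediately followed by a noun (NN*)
# yields an (opinion word, candidate feature) pair.
pairs = [
    (w1, w2)
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:])
    if t1.startswith("JJ") and t2.startswith("NN")
]
print(pairs)  # typically [('great', 'pictures'), ('long', 'battery')]
```

The evaluation step of such a walkthrough would then compare the extracted pairs against the manually annotated features for the same sentence.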

**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.  It is a common mistake to address reviewer questions in the response letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the response letter.  Directions on how to prepare a response letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/.

·

Basic reporting

The paper introduces a groundbreaking method aimed at enhancing the efficiency of explicit feature extraction from product review documents. The primary goal of the author is to pinpoint and extract features that are accompanied by opinions, and this is achieved by applying sequential pattern rules across five distinct datasets. What sets this study apart is the introduction of an advanced set of pattern rules. These newly proposed rules not only complement the existing ones from prior research but also fill in the gaps that were previously neglected. By taking into account these previously unaddressed nuances, the approach significantly boosts performance. It's also worth noting that the authors have meticulously incorporated and addressed all the suggestions and feedback provided in the initial review, ensuring a comprehensive and improved version of their work.

Experimental design

The experimental endeavors showcased in the paper are indeed laudable, with the potential to significantly influence both academic discourse and practical applications. The ingenuity of the research is evident, and it undeniably paves the way for future advancements in the domain. It's noteworthy that the authors have diligently addressed the pointed suggestions for the paper. For those suggestions that weren't implemented, the authors provided a well-reasoned justification. They highlighted that alternate methodologies, such as deep learning or enhanced optimization techniques, often necessitate larger data points, expansive training datasets, augmented setup costs and time, robust hardware configurations, and intricate algorithms. Drawing a comparison between the extraction performance derived from such an intricate and computationally intensive setup to results yielded by a more straightforward machine-learning framework doesn't seem entirely equitable. While the outcomes produced by deep learning methodologies might be superior upon execution, the overheads associated with establishing an appropriate execution environment are substantially higher. This sentiment is echoed by Tubishat, Idris & Abushariah (2021), who, while achieving improved results by layering optimization over rule-based mining techniques, also encountered increased memory usage and processing time. Such overheads were adeptly circumvented in the approach proposed in this paper. This explanation should be prominently featured in the main body of the paper to provide clarity and context to readers.

Validity of the findings

The results and findings presented in this paper hold significance both from an academic perspective and in terms of practical application.

Additional comments

none


Reviewer 3 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

(1) Please explain why “other sentences that are titled or have implicit, suggestion or comparative features” are removed (line 300).
(2) The Results section compares the approach with many past studies on feature extraction using customer review datasets, but it does not compare the proposed method against deep-learning or neural-network-based methods. It is recommended to include a comparison with such methods to demonstrate the advantages and improvements of the proposed approach.
(3) In the Results and Discussion section, Table 9 shows that the average precision, recall and F-measure obtained in this study are comparable to the current state-of-the-art baselines. However, why does the proposed algorithm achieve results similar to the baselines? Could you please elaborate on the advantages of the proposed approach compared to the state-of-the-art baselines?


Version 0.1 (original submission)

· Aug 24, 2023 · Academic Editor

Minor Revisions

I believe that the paper is making an important contribution to the field. However, there are still a number of things which could be improved and others which are in need of correction.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** PeerJ staff have identified that the English language needs to be improved. When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title). – PeerJ Staff

Reviewer 1 ·

Basic reporting

In Tables 3 and 4, explicitly state the meaning of the abbreviations (AVB, JJ, NN, NNS, etc.), either in the text of the article or as a table note, so that the text is self-contained.
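For reference, JJ, NN and NNS are standard Penn Treebank part-of-speech tags (adjective, singular noun, plural noun), while AVB does not appear in the standard tag set and would need to be defined by the authors. A minimal sketch, assuming NLTK with its tag-set help resource installed, that prints the canonical definitions:

```python
# Print the standard Penn Treebank definitions for the tags referenced in
# Tables 3 and 4 (AVB is not a standard tag, so it is omitted here).
import nltk

nltk.download("tagsets", quiet=True)  # resource with the tag documentation

for tag in ("JJ", "NN", "NNS"):
    nltk.help.upenn_tagset(tag)
```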

Experimental design

'no comment'

Validity of the findings

'no comment'


·

Basic reporting

Based on the issues identified in the paper, I recommend that significant revisions be made before considering a revised version for re-evaluation. The concerns raised, such as the lack of comparisons with other methods, limited explanation of the new set of rules, insufficient dataset explanation, and the absence of practical implications, require substantial revisions to enhance the quality and impact of the research.

By addressing these concerns and incorporating the suggested improvements, the revised version has the potential to address the gaps in the current manuscript and provide a more comprehensive and valuable contribution to the field. The decision of whether to re-invite the authors for a revised version will ultimately be made by the Academic Editor, taking into account the extent of the revisions and the potential impact of the revised paper.

Experimental design

The experimental work conducted in the paper is commendable and has the potential to make a significant contribution to academia and practical analysis. The originality of the research is apparent and holds promise for advancing the field. However, it is crucial for the authors to address the specific issues pointed out in the attached review file. By carefully considering and implementing the suggested improvements, the paper can be enhanced in terms of clarity, rigor, and overall quality.

Taking into account the valuable feedback provided in the review, the authors have an opportunity to strengthen their methodology, refine their analyses, and improve the overall presentation of their findings. By addressing the identified issues, the revised version of the paper can effectively communicate the research outcomes, expand upon their implications, and provide valuable insights for both academic and practical applications.

It is recommended that the authors thoroughly address the concerns raised in the review file while also considering additional suggestions to enhance the paper's quality. By doing so, they can significantly improve the chances of their work being considered for publication and make a valuable contribution to the research community.

Validity of the findings

'no comment'

Additional comments

I strongly encourage the authors to carefully review and consider all the suggestions provided in the attached file to improve the paper's suitability for publication in the PeerJ Computer Science journal. The feedback and recommendations provided in the file are likely to be valuable in addressing the necessary revisions and enhancing the quality of the research.

It is important to carefully evaluate each suggestion, address any identified gaps or limitations, and revise the paper accordingly. By incorporating the suggested improvements and ensuring the paper meets the journal's guidelines and standards, the authors can increase the chances of publication.

Please note that the final decision on publication rests with the journal's editors, who will assess the revised version based on the extent to which the suggested improvements have been addressed.


Reviewer 3 ·

Basic reporting

In the field of opinion mining, some of the features to be extracted are non-noun words such as verbs. The study therefore proposes a novel approach based on sequential pattern rules to address the features and opinion words overlooked by existing rules, validated across multiple datasets. The performance improvement of this approach is significant, demonstrating accuracy and recall comparable to previous baseline studies.

Experimental design

1. Please explain why “other sentences that are titled or have implicit, suggestion or comparative features” are removed (Line 300).

Validity of the findings

2. The Results section compares the approach with many past studies on feature extraction using customer review datasets, but it does not compare the proposed method against deep-learning or neural-network-based methods. It is recommended to include a comparison with such methods to demonstrate the advantages and improvements of the proposed approach.

Additional comments

3. In the Results and Discussion section, Table 9 shows that the average precision, recall and F-measure obtained in this study are comparable to the current state-of-the-art baselines. However, why does the proposed algorithm achieve results similar to the baselines? Could you please elaborate on the advantages of the proposed approach compared to the state-of-the-art baselines?


All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.