Review History

All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on June 29th, 2023 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on September 27th, 2023.
  • The first revision was submitted on November 4th, 2023 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on January 16th, 2024.

Version 0.2 (accepted)

· Jan 16, 2024 · Academic Editor

Accept

The paper has addressed all reviewers' questions.

[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Section Editor covering this Section #]

·

Basic reporting

All ok

Experimental design

Done as per the suggestions

Validity of the findings

Satisfied

Additional comments

None

Version 0.1 (original submission)

· Sep 27, 2023 · Academic Editor

Major Revisions

Based on the reviewers' suggestions and my own assessment, the paper needs major revisions.

**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Reviewer 1 ·

Basic reporting

1. What is the novelty of this paper? Clearly mention it in the introduction section.
2. The proposed algorithm and the flow of instructions are missing; you should add them to your manuscript.
3. The discussion section is missing; you should add it to your manuscript.

Experimental design

1. In Figure 1, what is the use of the output of training from SVM, LR, and RF on the test data?
2. What is the contribution of the ML classifier over the BERT classifier?
3. Clearly explain Figure 1.

Validity of the findings

1. The results section is weak. You should add results from more models, such as DNN, CNN, LSTM, Bi-LSTM, LSTM-A, etc., for comparison with your proposed model.
2. The time complexity of your proposed model should be compared with the existing state-of-the-art models.

Additional comments

1. At line no. 82, what is nn-damage?
2. A lot of work has already been done in this field; without citing it, your work seems incomplete. For example, you may refer to the work done by

Reviewer 2 ·

Basic reporting

The authors need to consider the following points:

1. Refine the paper for better flow and viewpoint.

2. There are some poor English constructions and misused articles; therefore, a professional language editing service is strongly recommended.

3. More description of the technical details will help to improve the quality of the manuscript.

4. It is unclear how the different steps in the proposed model are implemented. The steps would be better explained with an example.

5. Different techniques and algorithms are used. However, there is no clear justification for why these techniques and algorithms should be used rather than others.

6. There are no statistically significant results for the improvement. A statistical test should be conducted to determine whether any improvements are statistically significant (see the sketch after this list).

7. Report clearly the limitations of this work.
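For illustration, one common choice for point 6 is McNemar's test on paired predictions from two classifiers evaluated on the same test set. A minimal sketch, assuming hypothetical per-example prediction arrays `preds_a`, `preds_b` and gold labels `gold` (none of these names come from the manuscript under review):

```python
# Minimal sketch: McNemar's test for comparing two classifiers on the
# same test set. preds_a, preds_b, and gold are hypothetical per-example
# arrays; they are not taken from the manuscript under review.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(preds_a, preds_b, gold):
    a_ok = np.asarray(preds_a) == np.asarray(gold)
    b_ok = np.asarray(preds_b) == np.asarray(gold)
    # 2x2 table of paired outcomes: the off-diagonal cells (one model
    # right, the other wrong) drive the test statistic.
    table = [
        [int(np.sum(a_ok & b_ok)),  int(np.sum(a_ok & ~b_ok))],
        [int(np.sum(~a_ok & b_ok)), int(np.sum(~a_ok & ~b_ok))],
    ]
    return mcnemar(table, exact=True).pvalue
```

A p-value below the chosen threshold (e.g. 0.05) would indicate that the two models' error patterns differ beyond chance.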

Experimental design

No comment

Validity of the findings

No comment

Additional comments

No comment

Reviewer 3 ·

Basic reporting

There are a few typos.
Line 82: "nn-damage" is likely a typo; please fix.
Lines 178–183: please be consistent with syntax. For example, in line 180: replace -> replacement of.
Line 236: reached on the conclusion -> reached the conclusion.
Line 239: Determining the number of epochs in the fine-tuning stage is a common issue in training the weights of deep learning models.
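A common remedy for the epoch-selection issue is early stopping on a validation set rather than fixing the epoch count in advance. A minimal sketch in plain Python; `train_one_epoch`, `eval_loss`, and `save_checkpoint` are hypothetical placeholders, not routines from the manuscript:

```python
# Minimal sketch: pick the number of fine-tuning epochs by early
# stopping on validation loss. All callable arguments are hypothetical
# placeholders, not routines from the manuscript under review.
def fine_tune_with_early_stopping(model, train_one_epoch, eval_loss,
                                  save_checkpoint, max_epochs=10, patience=2):
    best_loss, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model)            # one pass over the training data
        loss = eval_loss(model)           # held-out validation loss
        if loss < best_loss:
            best_loss, bad_epochs = loss, 0
            save_checkpoint(model)        # keep the best weights so far
        else:
            bad_epochs += 1
            if bad_epochs >= patience:    # stop once validation loss stalls
                break
    return best_loss
```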

Experimental design

In the experiment section, the authors compared three models -- pre-trained BERT, word2vec, and TF-IDF -- on a known benchmark dataset, CrisisMMD. The dataset is also well introduced.
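For context, the TF-IDF comparison point mentioned above is typically a few lines of scikit-learn. A minimal sketch, assuming hypothetical `train_texts`, `train_labels`, and `test_texts` arrays rather than actual CrisisMMD loading code:

```python
# Minimal sketch: a TF-IDF + logistic regression text-classification
# baseline of the kind compared in the experiment section.
# train_texts, train_labels, and test_texts are hypothetical arrays.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)
preds = baseline.predict(test_texts)
```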

Validity of the findings

The paper mostly compared existing models and fine-tuned the BERT model on a specific dataset. The results are valid but more novelty would be appreciated.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.