Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on June 20th, 2022 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on August 2nd, 2022.
  • The first revision was submitted on November 27th, 2022 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on December 28th, 2022.

Version 0.2 (accepted)

· Dec 28, 2022 · Academic Editor

Accept

The paper is now acceptable.

[# PeerJ Staff Note - this decision was reviewed and approved by Arkaitz Zubiaga, a PeerJ Computer Science Section Editor covering this Section #]

Reviewer 1 ·

Basic reporting

The authors have shown quality work, and overall they have improved the manuscript.

Experimental design

The research is well defined and meaningful. The gaps were identified, highlighted, and addressed very well.

Validity of the findings

All underlying data have been provided; they are robust, statistically sound, and controlled.

Reviewer 2 ·

Basic reporting

The authors have made the necessary improvements and many choices have been justified.
For me, the paper can be accepted for publication.

Minor: the lines indicated in the rebuttal letter do not correspond to the modified lines in the PDF. This may be because the paper undergoes layout changes before being sent to proofreaders.

Experimental design

Nothing to add.

Validity of the findings

Nothing to add.

Version 0.1 (original submission)

· Aug 2, 2022 · Academic Editor

Major Revisions

English needs to be improved.

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

The English used in the article is poor. In terms of technical content, it reflects a good literature review.

I have the following suggestions:
1. The format of the paper is good, but the text is not justified.
2. Overall, a good effort.

Experimental design

I have the following suggestions:
1. The format of the paper is good, but the text is not justified.
2. I did not see any data from Pakistan, even in the tables. I am concerned that the datasets are not appropriate: if you claim Pakistan in the title, then the tables and plots must include Pakistan.
3. Overall, a good effort.

Validity of the findings

Serious concern: the title refers to Pakistan, whereas I cannot see anything about Pakistan in the dataset or plots.

Additional comments

Please read all of my comments.

Reviewer 2 ·

Basic reporting

The paper describes four approaches to analyzing sentiment from tweets dealing with COVID-19.
The authors also claim that five datasets were built in this context, but the datasets are not clearly introduced in the paper, and the authors do not mention whether they are made available to the community. The document is organized and clear, and the experimental results are more or less convincing. However, many points/sections need to be clarified.

It seems that there is confusion between opinion mining and sentiment analysis. Sentiment is the feeling of the person writing the tweet, while opinion is the person's position toward something: one can have a positive sentiment and a negative opinion about something, and vice versa.
I think that the authors should be precise and consistent.

Experimental design

It is not well justified why the authors remove hashtags during preprocessing. Particularly on Twitter, hashtags are a relevant source of information and can sometimes encode sentiment.
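
To illustrate the point, here is a minimal sketch (not the authors' actual pipeline) of a cleaning step that drops only the '#' marker while keeping the hashtag word, so that tokens such as 'StayHome' remain visible to a sentiment scorer:

    import re

    def preprocess(tweet: str) -> str:
        """Clean a tweet but keep hashtag words instead of dropping them."""
        tweet = re.sub(r"https?://\S+", "", tweet)  # remove URLs
        tweet = re.sub(r"@\w+", "", tweet)          # remove user mentions
        tweet = re.sub(r"#(\w+)", r"\1", tweet)     # keep hashtag text, drop '#'
        return re.sub(r"\s+", " ", tweet).strip()   # collapse leftover whitespace

    print(preprocess("Lockdown extended again #StayHome #COVID19 @user https://t.co/x"))
    # -> 'Lockdown extended again StayHome COVID19'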

I would have appreciated it if the authors had exploited the linguistic phenomena particular to social networks in order to extract sentiment, such as emojis and repeated letters (goood, tooo bad)...
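
As a concrete illustration, here is a short sketch of one possible treatment of these phenomena (an assumption, not the authors' method; it uses the third-party emoji package):

    import re
    import emoji  # third-party: pip install emoji

    ELONGATION = re.compile(r"(\w)\1{2,}")  # a character repeated three or more times

    def normalize(tweet: str) -> str:
        """Collapse elongations ('goood' -> 'good') and spell out emojis."""
        tweet = ELONGATION.sub(r"\1\1", tweet)  # keep a double letter as an emphasis cue
        return emoji.demojize(tweet, delimiters=(" ", " "))

    print(normalize("goood news, tooo bad it took so long 😷"))
    # -> 'good news, too bad it took so long  face_with_medical_mask '

The elongation could also be retained as an intensity feature rather than simply discarded.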

Validity of the findings

The article offers a rich and informative discussion, but I would like the authors to further discuss why sentiment regarding a preventive measure can change over time, and exactly which decisions relating to each measure were appreciated by people.

Additional comments

Minor remark:
Please add footnotes with links to the tools used in this work (snscrape, VADER, TextBlob...).
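
For reference, the named scorers are typically invoked along these lines (a minimal sketch, not the authors' code; package names assume the standalone vaderSentiment and textblob distributions):

    # pip install vaderSentiment textblob
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
    from textblob import TextBlob

    text = "The lockdown extension is such a relief, honestly!"

    # VADER: lexicon- and rule-based scorer tuned for social-media text
    print(SentimentIntensityAnalyzer().polarity_scores(text))  # compound score in [-1, 1]

    # TextBlob: pattern-based polarity and subjectivity
    blob = TextBlob(text)
    print(blob.sentiment.polarity, blob.sentiment.subjectivity)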

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.