All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear authors,
Thank you for your revised version of the paper. It is our pleasure to inform you that the paper can be accepted.
[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Computer Science Section Editor covering this Section #]
Dear authors,
Your revised version of the paper has been reviewed by two reviewers. One of them asked for further revisions. Please revise the paper according to the reviewer's comments, mark all changes in the new version of the paper, and provide a cover letter with point-by-point replies.
The paper has been corrected in accordance with the comments and may be accepted for publication.
Overall, the current version of the manuscript has addressed most of the comments from the previous reviews.
However, please consider the following technical aspects:
Line 54 – Spell out the abbreviation NLP at first use.
Line 90 – Statements describing the paper's contributions are missing.
Line 234 – The conclusions of the reviewed literature do not show the gaps in the research and do not contribute to the problem statement of this work. Please elaborate further on the gaps in the current work in terms of sentiment analysis and multi-criteria recommendations.
Line 244 – Equation (1) and its description need to be checked and revised. Please ensure that the variables in the discussion and in the equation are consistently formatted.
The experimental design is much clearer now as compared to the previous version.
No comment
Dear authors,
Your paper has been reviewed by two reviewers, who suggested revisions. Please make the appropriate changes and write a response with point-by-point replies to the reviewers.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]
• English should be improved
• Scientific papers should be written without "we" and "our".
• Lines 87-88 need to be checked.
• There is too much one-sentence, one-paragraph writing.
• There should be no Section 5.1 if there is no Section 5.2.
• Figures and tables should be better displayed and organized (font size, font style, etc).
• Research questions are missing.
• A separate, strong section on practical and theoretical implications is missing.
• The scientific contributions must be clearer.
• The Conclusion section is not at a satisfactory level; the conclusion in a scientific paper is very important.
o Limitations of your research must be emphasized.
o Future research directions must be stronger.
The revised version of the paper was expected to improve significantly on the previous submission. However, the authors did not take the comments on the previous version seriously. The only additional information I see in the current version is Table 1, which lists a few related studies, but the authors did not discuss them in much depth in the literature review. Other than that, most of the content is similar to the previous version.
The comments from the previous submission have still not been adequately addressed. The following are my previous comments:
1. Comparisons with other approaches are invalid as some of the compared literature used different datasets.
2. The results of the experiments and their related discussions were inconsistent. For example, Tables 2 and 3 show contrasting results: Table 2 indicates that prediction with sentiment (with alpha = 0.3) performs best, but in Table 3 prediction without sentiment performs best. However, the authors simply stated that sentiment with alpha = 0.3 yields a successful outcome (refer to lines 286 & 287).
Point number (1) has not been addressed. If the multi-criteria RS is not the main contribution, then it should be dropped from the scope or contributions of this paper. My suggestion is that the authors focus on the sentiment-based RS, as I did not see much discussion of or contribution to the multi-criteria aspects.
Point number (2) introduces further confusion. The authors now claim that alpha = 0.7 performed best. My suggestion is that the average of the three results be considered in order to draw a conclusion from the findings.
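To make the comparison concrete, the following is a minimal sketch of what I mean by averaging the runs before choosing an alpha. The blending formula and all numbers here are my own assumptions for illustration, not the authors' method or results:

```python
# Hypothetical linear combination of a collaborative-filtering prediction
# with a sentiment-derived score; the paper should state its exact formula.
def blend(cf_prediction, sentiment_score, alpha):
    return alpha * sentiment_score + (1 - alpha) * cf_prediction

# Hypothetical MAE values from three experimental runs per alpha setting.
# Averaging across runs avoids concluding from a single favourable table.
results = {0.3: [0.72, 0.70, 0.74], 0.7: [0.69, 0.75, 0.71]}
averages = {alpha: sum(runs) / len(runs) for alpha, runs in results.items()}
best_alpha = min(averages, key=averages.get)  # lower MAE is better
```

Only a conclusion drawn from such averaged figures would support a claim that one alpha setting "performs best".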
The following are my comments from the previous submission:
(1) Findings with regard to multi-criteria must be conducted. Thus, the proposed model must be compared with non-multi-criteria and other multi-criteria models.
(2) Comparison in terms of sentiment analysis accuracy is not the priority of the paper. The effort the authors made to show the results of the analysis is appreciated. However, when comparing with other methods, the authors must ensure that the datasets used are the same as those used in the experiment.
For point (1), since multi-criteria is not the main contribution of this paper, my suggestion is that the authors scope the work to sentiment-based RS. For example, how the various criteria are aggregated to perform prediction has not been discussed. What was the basis of comparison for the predicted ratings? Was it the overall ratings? Please elaborate if multi-criteria is still within the scope of this paper.
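To illustrate the kind of aggregation detail that should be specified, here is a minimal hypothetical sketch of combining per-criterion predictions into an overall rating. The criterion names, weights, and weighting scheme are all assumptions, not the authors' method:

```python
# Hypothetical per-criterion predicted ratings for one user-item pair (1-5 scale).
criterion_predictions = {"cleanliness": 4.2, "service": 3.8, "location": 4.5}

# Hypothetical aggregation weights. The paper should state how such weights
# are obtained (uniform, learned, or user-specific) and what the aggregated
# prediction is compared against (e.g. the user's overall rating).
weights = {"cleanliness": 0.4, "service": 0.35, "location": 0.25}

overall_prediction = sum(
    weights[criterion] * rating
    for criterion, rating in criterion_predictions.items()
)
```

Stating this aggregation step explicitly (whatever form it actually takes) would make the multi-criteria claim verifiable.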
There are several glaring pieces of incomplete information throughout the paper. Please go through the paper in more detail.
The authors must pay special attention to the fact that in recommender systems, evaluation of the system's suggestions is mainly of two types: prediction accuracy (RMSE, MAE) and user-based evaluation (precision, recall, and F-measure). Accuracy in this paper is applied only to the sentiment analysis and not to the quality of the recommendations/suggestions. Mixed evaluation metrics are presented in the paper.
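To make the distinction between the two metric families explicit, here is a minimal sketch in plain Python; the ratings and item identifiers are invented for illustration only:

```python
import math

# --- Prediction-accuracy metrics: how close are predicted ratings to actual ones? ---
actual = [4.0, 3.0, 5.0, 2.0, 4.0]       # hypothetical true ratings (1-5 scale)
predicted = [3.5, 3.0, 4.5, 2.5, 4.0]    # hypothetical model predictions

mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# --- User-based metrics: did the recommended items match the relevant ones? ---
recommended = {"i1", "i2", "i3"}          # hypothetical top-3 recommendation list
relevant = {"i1", "i3", "i4", "i5"}       # items the user actually liked

hits = len(recommended & relevant)
precision = hits / len(recommended)
recall = hits / len(relevant)
f_measure = 2 * precision * recall / (precision + recall)
```

The first pair of metrics evaluates rating prediction; the second evaluates the recommendation list itself. A paper on recommendation quality needs the second family, not only the classification accuracy of the sentiment analyzer.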
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.