Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on October 11th, 2021 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 10th, 2021.
  • The first revision was submitted on December 29th, 2021 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on January 25th, 2022 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on January 26th, 2022.

Version 0.3 (accepted)

· Jan 26, 2022 · Academic Editor

Accept

Based on the revisions made by the authors, the paper is accepted. Congratulations.

Version 0.2

· Jan 18, 2022 · Academic Editor

Minor Revisions

In view of the comments from the reviewers, the authors are advised to make 'minor revisions'.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

The following suggestions should be considered to improve the paper (they were not taken into account in the first revision):

1- In the conclusions section, the findings should be explained clearly.
2- The conclusions have improved significantly. The authors should elaborate more on the practical implications of their study, as well as its limitations and further research opportunities.

Once these modifications are made, I suggest accepting this paper.

Version 0.1 (original submission)

· Nov 10, 2021 · Academic Editor

Major Revisions

Based on the reviewers' comments, I advise the authors to make "major revisions" and resubmit the revised version.

[# PeerJ Staff Note: Please ensure that all review and editorial comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. #]

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]

Reviewer 1 ·

Basic reporting

In their work, the authors propose a new dataset for crowd analysis and, in addition, a fully convolutional neural network (FCNN)-based method to monitor the crowd. There is merit in the dataset annotation, but the manuscript still needs a lot of work.

1) Literature references are not sufficient. The related work section needs more work. Much of the work done in this field is reviewed in the following papers [1,2].
[1] Sindagi, Vishwanath A., and Vishal M. Patel. "A survey of recent advances in CNN-based single image crowd counting and density estimation." Pattern Recognition Letters 107 (2018): 3-16.
[2] Gao, Guangshuai, et al. "CNN-based density estimation and crowd counting: A survey." arXiv preprint arXiv:2003.12783 (2020).

2) The English language should be improved so that readers can clearly understand the text. Some examples where the language could be improved include lines 80, 93, and 157 (what is L denoting here?). Mathematical notation should also be well defined (for example, in line 228, what are m and n?); the current phrasing makes comprehension difficult.

Experimental design

There are major issues with the experimental setup.

1. Experiment design and experimental results
Both the proposed method and the benchmark methods are stochastic, so results from multiple independent runs are expected. What is currently reported in the paper are the results from a single run, which is not enough to draw concrete conclusions. Furthermore, multiple runs are needed to conduct a statistical significance test; a sketch of such a protocol is given at the end of this section.

2. Performance metrics
Plain accuracy is inappropriate when the dataset is imbalanced (ShanghaiTech and UCSD) and multiclass (the proposed dataset, ShanghaiTech, and UCSD). We know for certain that the benchmark datasets in this study fall into that category, so why is plain accuracy still used to assess the effectiveness of the experimented methods? A class-balanced metric such as macro-averaged F1 would be more appropriate (see the sketch at the end of this section).

3. Benchmark methods and fairness of the comparisons.
3.1- Benchmark methods
I do not think any of the methods in the experiments was specifically proposed for crowd density classification. A number of studies have proposed techniques similar to the proposed method [1,2], e.g., utilising CNNs or other machine learning methods to classify/estimate crowd density. Why was none of these included in the comparisons, despite some such methods being discussed in the related work section?

[1] Gao, Guangshuai, et al. "Cnn-based density estimation and crowd counting: A survey." arXiv preprint arXiv:2003.12783 (2020).

3.2 - Fairness
(a) The proposed method utilises pre-trained models (transfer learning), whereas the benchmark methods are trained from scratch. I do not think this is a fair comparison unless the study is about transfer learning vs conventional learning.
(b) Overall, the dataset annotation looks fair, except that there is a chance of human bias with five classes: it is very difficult to see the difference between low and medium (3rd image), and the same is the case with medium and high. In my opinion, having three classes (low, medium, and high) would be more appropriate than five to reduce human bias or error.
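
For illustration, here is a minimal Python sketch of the kind of multi-run protocol I mean. Note that `train_and_evaluate` is a hypothetical stand-in for the authors' training pipeline, and the seed list, method names, and the choice of macro-F1 are my assumptions, not the authors' setup:

# Sketch of a multi-run evaluation with a paired significance test.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import f1_score

SEEDS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]  # only the seed changes between runs

def macro_f1_runs(method):
    scores = []
    for seed in SEEDS:
        # Hypothetical helper: trains `method` with this seed and returns
        # test-set true and predicted labels; everything else is identical.
        y_true, y_pred = train_and_evaluate(method, seed=seed)
        # Macro-F1 weights all density classes equally, so it is not
        # dominated by the majority class the way plain accuracy is.
        scores.append(f1_score(y_true, y_pred, average="macro"))
    return np.array(scores)

proposed = macro_f1_runs("proposed_fcnn")
baseline = macro_f1_runs("benchmark_cnn")
print(f"proposed: {proposed.mean():.3f} +/- {proposed.std():.3f}")
print(f"baseline: {baseline.mean():.3f} +/- {baseline.std():.3f}")

# Paired non-parametric test over the per-seed scores.
stat, p = wilcoxon(proposed, baseline)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4f}")

Reporting the per-seed mean and spread, together with the test outcome, would be far more convincing than single-run numbers.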

Validity of the findings

Experiments 1 and 2 have fundamental issues that need to be resolved before any valid conclusions can be drawn.

1. The main issue with stochastic methods is that different results are produced depending on the starting point of the search. In neural networks, the random number generator, more specifically its seed, initialises the weights, causing the network to start from a different point in the search space. Therefore, the method must be rerun multiple times using different seed values while keeping everything else identical (see the sketch at the end of the previous section).

2. How were the other benchmark datasets, for which category-wise evaluation (mentioned in Table 1) is not available, converted into a classification problem?

3. What are the hyperparameters for all the benchmarks and the proposed model?

4. What are the train and test sizes for the other benchmark datasets?

5. Why are recall, F-score, etc. not reported?

6. All benchmark models come with pre-trained weights, which means they differ from each other.
For example, one could take a method that was trained for anomaly detection and fine-tune (re-train) it for crowd classification, then compare it to a method that was trained for cancer image segmentation in MRIs after re-training it for crowd classification. Can you see the difference between the base models? A small sketch of this point follows.
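
For illustration, here is a minimal PyTorch/torchvision sketch of this point: two models with an identical architecture but different starting weights are different base models, so any gap between them may reflect the pre-training source rather than the method itself. The ResNet-18 backbone and the five-class head are my assumptions, not the authors' model:

# Two "benchmarks" that differ only in their starting weights.
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

def make_classifier(weights):
    model = resnet18(weights=weights)
    # Replace the head for five crowd-density classes; the backbone keeps
    # whatever representation its pre-training (if any) produced.
    model.fc = nn.Linear(model.fc.in_features, 5)
    return model

model_a = make_classifier(ResNet18_Weights.IMAGENET1K_V1)  # ImageNet start
model_b = make_classifier(None)                            # random start

Fine-tuning model_a and training model_b from scratch are different experimental conditions; they should be reported, and compared, as such.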

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

This article needs important modifications to be suitable for this journal; I suggest a major revision. The main comments are:

1) In this study, the authors propose a deep crowd density classification model for the Hajj pilgrimage using a fully convolutional neural network. It can be noted that the CNN used in this manuscript has already been studied and applied in the previous literature.
2) The novelty of this paper should be further justified to establish its contributions to the body of knowledge.
3) The Abstract section should be improved, following the structure proposed by the journal.
4) In the Introduction section, the authors should improve the research background, the review of significant works in the specific study area, the knowledge gap, the problem statement, and the novelty of the research.
5) The presentation of the results and conclusions is not sufficient; it should be strengthened.
6) In the conclusions section, the findings should be explained clearly.
7) The authors should elaborate more on the practical implications of their study, as well as its limitations and further research opportunities.
8) The English writing is not fluent throughout the paper. There are many grammatical errors, which should be corrected by the authors; the paper needs a professional English revision. The authors should follow the journal's author guide for writing style throughout the paper.

Annotated reviews are not available for download in order to protect the identity of reviewers who chose to remain anonymous.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.