Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on October 13th, 2021 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 24th, 2021.
  • The first revision was submitted on January 27th, 2022 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on February 3rd, 2022.

Version 0.2 (accepted)

· Feb 3, 2022 · Academic Editor

Accept

The paper can be accepted. Congratulations.

Reviewer 2 ·

Basic reporting

The authors provide a systematic review of graph-based methods. They try to identify the performance of graph neural network models on graphlets. The authors performed an evaluation of five Graph Neural Networks. The idea is good and interesting.

Experimental design

My previous comments on the experimental portion have been addressed.

Validity of the findings

The findings of this study are significant.

Additional comments

All my previous comments have been addressed. I have no further comments.

Version 0.1 (original submission)

· Nov 24, 2021 · Academic Editor

Major Revisions

Two reports have been received for this paper. I recommend a revision before further processing. Please note that I do not expect you to cite any recommended references unless they are crucial. I look forward to receiving your revised paper and a response letter.

[# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful #]

Reviewer 1 ·

Basic reporting

This is an experimental analysis paper evaluating different baseline GNNs using graph sampling. The motivation of the work appears to be to provide the community with an experimental benchmark that reports the trends of different sampling methods across several GNN instances. The writing of the paper is clear, and the experiments are fairly compared when the models are evaluated on the 4 datasets considered. The code framework is public and, from my observation, follows a similar layout/setting to Benchmarking GNNs [Dwivedi et al., 2020], following the principles of fair GNN benchmarking. However, the paper has no technical contribution beyond what is described above, and this is obvious from the introduction and setting of the paper.

The following are my concerns, which may limit the usefulness of this paper as a benchmark of sampling methods using different GNNs:

* The datasets used are quite small, and there is well-known agreement in the community to move on to larger and more complex datasets than Cora, Citeseer, etc., which are used in the paper. See Dwivedi et al., 2020 and Open Graph Benchmark (Hu et al., 2020), which describe the need to use datasets other than the aforementioned to evaluate GNN trends fairly. It is also clear that this paper is aware of the previous works on GNN benchmarks. Hence, I do not understand the need for a graph sampling benchmark built on the same small datasets!
* The paper follows the setting of Dwivedi et al., 2020 but does not consider GatedGCN, which was the best-performing model in that work. This may call into question the validity of the paper's insight that GCN/BFS are generally the best performing (L65), since a previously well-performing GNN baseline, GatedGCN, is not considered for evaluation in this work.
* If the paper's intent was to establish an evaluation trend of different GNNs under sampling, it is necessary to consider the robustness of the experiments to make the results reliable and useful for the community. While the experiments in the paper are fair and unquestionable, they are not necessarily robust. For example, the results do not seem reliable unless they are evaluated on more complex and diverse datasets, and also on diverse tasks (such as graph prediction, link prediction, and node prediction). This work only considers node prediction on small datasets.

Experimental design

* The research questions are well defined and meaningful. If the work were more robust, it could fill the research gaps. However, see my concerns above under "Basic Reporting", which may prevent this paper from being robust.
* For the experiments considered in the paper, the settings are well defined for reproducibility. Similarly, the methods are sufficiently detailed to replicate. However, my concerns are with the datasets used and the lack of robustness in such an experimental work. Please refer to my comments in the "Basic Reporting" section on this.

Validity of the findings

* For the experiments performed in the paper, the underlying data is provided, but the findings are questionable given the current state and trends of the graph deep learning field in general. In particular, the community is slowly transitioning to evaluating GNNs on more diverse, complex, and medium-to-large-scale datasets. The datasets used in the paper are small, and I believe the authors are aware of that.

Additional comments

* Several references in the paper do not give a date and have "n.d." instead, e.g., Bruna et al., n.d. This is a minor issue that should have been taken care of before submission.

Reviewer 2 ·

Basic reporting

The authors provide a systematic review of graph-based methods. They try to identify the performance of graph neural network models on graphlets. The authors performed an evaluation of five Graph Neural Networks. The idea is good and interesting. However, I have a few comments on this article.

Experimental design

1. In the experimental setup, the authors did not mention how many hidden layers are in the network.
2. What information/features are you extracting in the layers of the network?
3. Please mention the number of training, testing, and validation samples you are using in the experiments.
4. Can you elaborate on line 138, "there is no missing edges among..."? Why is that?
5. What effects did you notice over 1000 epochs for small and large datasets? Are there any visualization graphs, etc.?
6. Among the five GNNs, you did not consider the Siamese GNN; is there any reason for that? For example, Riba, Pau, et al. "Learning graph distances with message passing neural networks." 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018, and Kajla, N. I., et al. "Graph Neural Networks Using Local Descriptions in Attributed Graphs: An Application to Symbol Recognition and Hand Written Character Recognition."

Validity of the findings

1. In the abstract (line 27), the authors say that completing a sampled subgraph does not improve performance; they add that it depends on the dataset. However, I am not clear about this statement. Please elaborate on it.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.