All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The authors have addressed all the comments and issues raised by the reviewers, so the paper is now in proper shape to be accepted for publication.
Revision well done
Revision well done
Revision well done
No comment
No comment
No comment
No comment
The authors addressed the comments in a very good way.
The authors addressed the comments in a very good way.
The authors addressed the comments in a very good way.
Based on the reviewers' comments, the manuscript needs more work before it is in good shape to be accepted.
Also, two of the reviewers have requested that you cite specific references. You may add them if you believe they are especially relevant. However, I do not expect you to include these citations, and if you do not include them, this will not influence my decision.
[# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful #]
The authors have submitted very interesting work; my suggestions to improve it are:
1. Regarding the statement “The source code for reproducing the experiments will be available upon publication of the manuscript”, I suggest adding the code to GitHub.
2. The contribution is not introduced clearly in terms of theoretical analysis, even though very good results are reported. If feasible, add some more details on the theoretical analysis of your approach.
3. You should review the literature on ensembles of classifiers more thoroughly, e.g.:
https://arxiv.org/abs/1802.03518
https://doi.org/10.1016/j.eswa.2020.114048
https://arxiv.org/pdf/2104.02395.pdf
4. There are some typos, e.g., line 153: “BNN model can be used in a low poer” -> “...power”.
5. You have run many experiments and reported many results, which is appreciated; however, please better stress the novelty of your method with respect to the literature on pruning and quantization approaches:
https://www.sciencedirect.com/science/article/pii/S0031320321000868
https://arxiv.org/pdf/2103.13630.pdf
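To make the quantization comparison concrete, the baseline the reviewer alludes to can be sketched as follows. This is an illustrative sign-function binarization with an XNOR-Net-style scaling factor, not the authors' implementation; the function name is hypothetical.

```python
import numpy as np

def binarize_weights(w):
    """Binarize real-valued weights to alpha * {-1, +1}, where alpha is
    the mean absolute value of the tensor (XNOR-Net-style scaling).
    Zeros are mapped to +1 so every weight is binary."""
    w = np.asarray(w, dtype=float)
    alpha = np.mean(np.abs(w))                     # per-tensor scaling factor
    signs = np.sign(np.where(w == 0, 1.0, w))      # sign, with 0 -> +1
    return alpha * signs

# Example: alpha = (0.7 + 0.3 + 0.0 + 1.2) / 4 = 0.55
print(binarize_weights([0.7, -0.3, 0.0, 1.2]))     # [ 0.55 -0.55  0.55  0.55]
```

Each binarized weight then needs only one bit plus a shared scaling factor, which is the storage saving at stake in the comparison.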
No comment
1. The last word on line 153 is wrong. I suggest examining each word and sentence carefully.
2. I suggest polishing the language of the manuscript.
No comment
Figure 5(b) does not contain data when in the fusion ensemble, due to the limitation of GPU resources. I suggest you supplement the result when , in order to ensure the integrity of the experiment. You can use a smaller batch size or use the CPU for retraining.
In this manuscript, the authors propose a storage-efficient ensemble classification method to overcome the low inference accuracy of binary neural networks (BNNs). The work indicates that the proposed method reduces the storage burden of multiple classifiers in a lightweight system. This is a good idea, which can be used to improve the accuracy and reduce the storage burden of BNNs. In addition, this manuscript also provides a solution for applying neural networks in lightweight systems. There are some suggestions as follows:
1. Fusion, voting, and bagging schemes were applied to evaluate ensemble-based systems. However, there is no comparison between these methods in the paper.
2. Your description at lines 296-310 needs more detail. I suggest adding a figure to illustrate the experimental results.
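The comparison requested in point 1 could be framed around combiners like the following minimal sketches of majority voting and score fusion. These are generic illustrations of the two schemes, assuming hard labels and per-class scores respectively; they are not taken from the manuscript.

```python
import numpy as np

def majority_vote(predictions):
    """Combine hard labels by majority vote.
    predictions: array of shape (n_classifiers, n_samples)."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (column).
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)

def score_fusion(probas):
    """Fuse classifiers by averaging per-class scores.
    probas: array of shape (n_classifiers, n_samples, n_classes)."""
    return np.mean(np.asarray(probas), axis=0).argmax(axis=1)

# Three classifiers, three samples.
print(majority_vote([[0, 1, 1],
                     [0, 1, 0],
                     [1, 1, 0]]))   # [0 1 0]
```

Bagging differs only in how the member classifiers are trained (bootstrap resampling), so the same combiners apply at inference time; a table comparing accuracy and storage for the three schemes would address the comment.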
This is the review report of the paper entitled "A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks".
The paper presents a very important topic and is well presented. However, I have some comments to improve the paper.
1- In the abstract, add the value of classification accuracy to support the theory.
2- The authors show the results of their proposed method with a state-of-the-art model (ResNet); I would suggest also showing the results of ResNet on the same dataset without the proposed method.
3- The training parameters need to be mentioned.
4- It is important to upload the paper's code, with a nice demo, to a public platform.
5- I would suggest citing the following reference when referring to CNNs, so that a recent reference from 2021 is included:
https://link.springer.com/article/10.1186/s40537-021-00444-8
6- A comparison with the state of the art on the same dataset needs to be added.
7- Explain more about the research gap left by previous methods.
8- The contributions of the article have to be clear to readers; I would suggest listing them as bullet points at the end of the introduction.
See the first box, "Basic reporting".
See the first box, "Basic reporting".
See the first box, "Basic reporting".
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.