All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The manuscript is ready for publication.
[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Computer Science Section Editor covering this Section #]
All my previous concerns have been fully addressed in the revision.
n/a
No detectable errors
None
The paper contains publishable content in terms of technical contribution. However, the writing needs further improvement, as the readability of the current version affects the evaluation of the proposed method's performance. Therefore, a major revision is needed to address all the comments from the reviewers, and proofreading is highly recommended.
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]
The manuscript addresses an important practical and theoretical challenge: developing efficient learning algorithms with proven generalisation capabilities. The Authors propose to combine classical SCNs with evolutionary optimisation to increase the efficiency of constructing regularised SCNs. In my opinion these results are important and new and deserve to be published. The manuscript is well written and well presented. I have only a few minor comments, mostly around notation.
1. In or before eq (2), it would be great to define the meaning of <,> (I suppose that this is an inner product, but it was not defined as such). It may be worthwhile to define the domains of the functions f and g too.
2. In eq (20), please define h(x_i). Consider adding that \eta is non-negative.
All experiments are clear
The results are clear and appear to be correct. Conclusions are adequate and all data are provided and sound.
No further comments
A new method, RSCN-INFO, is proposed in this paper for optimizing the parameter settings of the SCN method. The article structure is reasonable, and the literature references and experimental results are sufficient. The definitions of the theorems are clear.
The language of the full text should be refined to improve the logic and causality of expression. For example, shallow neural networks are highlighted in the abstract, while the introduction describes neural networks in general.
1. “Stochastic configuration networks (SCN) have shown tremendous potential in building shallow neural networks under a supervisory mechanism to constrain the random assignment of hidden layer parameters.” Is this a precise description?
2. “the performance of SCN is frequently impacted by the parameter settings of the model” — it would be better to specify which aspects of performance are affected.
3. In Line 41, what is this problem? Please state it clearly.
4. Check equations (1) and (2): are the quantities elements or vectors?
5. Check the pseudo-code on page 8 carefully, for example the code in line 30.
The experiments and comparison tests are convincing and sufficient. However, some improvements can be made:
On page 9, a few extra sentences, for example on their universality, should be added to briefly describe the background of the benchmark function and the KEEL dataset.
The comparison of test results shows that the RSCN model, with parameter settings and structure optimized by INFO, possesses superior performance, such as faster convergence and a more compact network structure.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.