All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Congratulations on the acceptance of your manuscript.
[# PeerJ Staff Note - this decision was reviewed and approved by Sebastian Ventura, a PeerJ Section Editor covering this Section #]
Please enhance the literature review with recent papers. Although Reviewer 1 has requested that you cite some references, I do not require you to include these citations; omitting them will not influence my decision in any way.
Remember that it is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful, not merely because a reviewer suggested them.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
This work presents a novel method for feature extraction based on Mahalanobis distance defined by the covariance matrix between features. Some comments:
- The abstract should be clearer.
- Placing the figures far from the text that references them makes the paper harder to read.
- Highlight all assumptions and limitations of your work.
- Conclusions should provide some lessons learnt.
- The Related Works section does not mention recent research efforts on new approaches to extracting meaningful features. The authors are advised to refer to the following related articles and add some discussion: [1] Supervised contrastive learning over prototype-label embeddings for network intrusion detection, Information Fusion, 2022 [2] Effective Feature Extraction via Stacked Sparse Autoencoder to Improve Intrusion Detection System, IEEE Access, 2018 [3] A predictive hybrid reduced order model based on proper orthogonal decomposition combined with deep learning architectures, Expert Systems with Applications, 2022
The design of experiments and the ablation strategy seem correct.
The selection of different datasets and alternative models is sufficient to validate the results.
no comment
The experimental design of the authors can be accepted.
The findings demonstrated by the authors are very interesting. Especially, distance metric-based methods are more suitable than feature selection-based methods for extracting features with linear separability from high-dimensional data.
To extract features from data in a high-dimensional space, this paper proposes a novel autoencoder approach based on a Mahalanobis distance metric with a rescaling transformation. A rescaling transformation is applied to the Mahalanobis distance metric, and the transformed metric is then introduced into the autoencoder, improving the model's feature-extraction ability.
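For readers of this review, the core metric can be sketched as follows. This is a minimal illustrative implementation of a rescaled Mahalanobis distance defined by the feature covariance matrix; the function name, the `scale` parameter, and the regularization term `eps` are assumptions for illustration, not the authors' actual code or rescaling transformation.

```python
import numpy as np

def rescaled_mahalanobis(X, scale=1.0, eps=1e-6):
    """Pairwise Mahalanobis distances between samples, with a rescaling factor.

    X     : (n_samples, n_features) data matrix.
    scale : hypothetical rescaling factor standing in for the paper's
            rescaling transformation of the metric.
    eps   : small ridge added to the covariance so its inverse exists.
    """
    # Metric is defined by the inverse of the covariance matrix between features.
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    VI = np.linalg.inv(cov)
    # diffs[i, j] = X[i] - X[j]
    diffs = X[:, None, :] - X[None, :, :]
    # Quadratic form (x_i - x_j)^T VI (x_i - x_j) for every pair.
    d2 = np.einsum('ijk,kl,ijl->ij', diffs, VI, diffs)
    return scale * np.sqrt(np.maximum(d2, 0.0))
```

In an autoencoder, such a term could be added to the reconstruction loss (e.g. `loss = mse + lam * metric_term`) so that the learned features respect the covariance-defined geometry; how the paper actually combines the two is described in the manuscript itself.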
The issue is very interesting, and these findings are valuable. Moreover, the experiments and results are acceptable; I verified them by running the source code provided by the authors. Based on the studied problem and these findings, I recommend that this paper be accepted. In addition, to highlight the studied problem and the findings, I suggest the title “A novel autoencoder approach to feature extraction with linear separability for high-dimensional data”.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.