All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Reviewers have no comments.
[# PeerJ Staff Note - this decision was reviewed and approved by Sebastian Ventura, a PeerJ Section Editor covering this Section #]
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
This paper proposes a feature-based framework, CFIRE, to improve classification performance while preserving computational efficiency. CFIRE integrates a wide range of transformations, including derivative, autocorrelation, spectral (Fourier), harmonic (cosine), time-frequency (wavelet), and analytic (Hilbert) domains. The experimental results on 142 datasets from the UCR collection demonstrate that the proposed method outperforms top feature-based classifiers. The paper has a complete structure, sound logic, and sufficient experiments. In summary, this is a meaningful and valuable article.
However, the paper also has some issues, and improvements are recommended in the following respects:
1. The current citation style is inconvenient to read. Using \citep would ensure that all references are cited within parentheses (see the LaTeX snippet after this list).
2. Replacing "Crossfire" with "CFIRE" in the title and abstract would be better.
3. Please check "(p ¡ 0.05)" at Line 660; the "¡" is likely a mis-rendered "<" (a common artifact when "<" is typed in LaTeX text mode rather than math mode).
4. It would be better to provide a statement about the limitations of the research.
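For concreteness regarding point 1, here is a minimal natbib illustration (the citation key is hypothetical, and natbib with an author-year bibliography style is assumed):

```latex
% \citep wraps the whole citation in parentheses; \citet leaves the
% author name in the running text.
\usepackage{natbib}

... as prior work has shown \citep{smith2020}.  % -> (Smith, 2020)
\citet{smith2020} showed that ...               % -> Smith (2020)
```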
1. In Section 2.2, the authors list the features used in CFIRE, and in Section 3.3.1 they conduct a sensitivity analysis. However, one issue appears to have been overlooked: why were some features used in Catch22 or TSFresh not used in CFIRE?
2. In Section 3.2, the Wilcoxon p-values for these comparisons are below 0.05, indicating statistically significant differences. Given this, emphasizing the contrast in feature extraction time may strengthen the results (a minimal test sketch follows this list).
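For reference, the paired test referred to in point 2 can be reproduced with SciPy; the per-dataset accuracy arrays below are hypothetical stand-ins for the manuscript's own results:

```python
# Minimal sketch of a paired Wilcoxon signed-rank test over per-dataset
# accuracies (synthetic data; the manuscript's results would go here).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
acc_cfire = rng.uniform(0.80, 0.95, size=142)                 # 142 UCR datasets
acc_baseline = acc_cfire - rng.uniform(0.0, 0.03, size=142)   # slightly worse baseline

stat, p = wilcoxon(acc_cfire, acc_baseline)
print(f"Wilcoxon statistic={stat:.1f}, p={p:.2e}")  # p < 0.05 => significant
```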
1. In Table 1, the marks for L2 and DN_HistogramMode_10 may be incorrect. If a feature was not used for all representations, please remove its mark.
2. In Figures 7 and 11, different symbols could be used to distinguish the different cases.
This paper proposes CFIRE (Cross-representation Feature Integration for Robust Extraction), a multi-representation feature-fusion framework for time series classification that systematically integrates time-, frequency-, wavelet-, and analytic-domain features. It achieves state-of-the-art accuracy on the UCR archive while maintaining computational efficiency comparable to Quant, demonstrating that comprehensive feature engineering can rival deep learning methods.
Based on the submission, the concerns are as follows:
1. In the introduction, the authors should focus on explaining and detailing the research gaps, e.g., which problems did previous, similar works address? The innovations of the proposed method also need to be explained.
2. Also, the motivation is not well explained.
3. The TSC literature review is not well organized: many methods are covered by only a single example in the background. Also, why are deep learning-based methods not included in the related work?
4. What criteria (e.g., mutual information, variance) are used to reduce redundancy while preserving discriminative power?
5. How do derivative, autocorrelation, and Hilbert transform features complement traditional time/frequency features in classification? (See the sketch after this list.)
6. For a journal publication, the authors need to include some mathematical reasoning.
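To make points 4 and 5 concrete, here is a minimal NumPy/SciPy sketch of the three representations in question, on a toy series; this is illustrative only and not the authors' implementation:

```python
# Illustrative only: derivative, autocorrelation, and Hilbert-envelope
# representations of a toy series (not CFIRE's actual code).
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256)) + 0.1 * rng.standard_normal(256)

deriv = np.diff(x)                                             # local shape/trend changes
x0 = x - x.mean()
acf = np.correlate(x0, x0, "full")[x0.size - 1:] / (x0 @ x0)   # lag structure
envelope = np.abs(hilbert(x))                                  # instantaneous amplitude

# Each representation exposes information (slope, periodicity, amplitude
# modulation) that raw time/frequency features may miss. Point 4's candidate
# redundancy criteria are likewise standard, e.g., sklearn's
# mutual_info_classif or VarianceThreshold over the pooled feature matrix.
```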
1. The performance metrics need to be defined in the manuscript.
2. In the experiments, the authors only report results. Instead, they should also explain the reasons behind these results in the manuscript.
3. For a comprehensive analysis, the authors need to compare the proposed method with recent TSC algorithms, e.g., deep learning algorithms. Also, how does parallel extraction enable CFIRE to match Quant’s speed despite processing more representations? (A parallel-extraction sketch follows this list.)
4. In Fig. 9, the performance of the proposed method is not the best among the compared algorithms. The authors do not explain why and simply record the results.
5. The experimental analysis lacks methodological rigor and analytical depth, which significantly undermines the study's argument.
6. Some errors exist in the manuscript, e.g., "p¡ 0.05": what is "p¡ 0.05"? (Presumably "p < 0.05" with "<" mis-rendered by LaTeX.)
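As a point of reference for item 3, per-series (or per-representation) extraction is embarrassingly parallel; here is a minimal joblib sketch with a hypothetical extract_features stand-in (whether CFIRE works exactly this way is the question being asked):

```python
# Minimal sketch of embarrassingly parallel feature extraction with joblib
# (extract_features is a hypothetical stand-in, not CFIRE's code).
import numpy as np
from joblib import Parallel, delayed

def extract_features(series: np.ndarray) -> np.ndarray:
    # Stand-in for one per-series / per-representation feature computation.
    return np.array([series.mean(), series.std(), np.abs(np.fft.rfft(series)).max()])

X = [np.random.default_rng(i).standard_normal(512) for i in range(1000)]

# Series and representations are independent, so with one worker per core the
# wall-clock time can approach total work divided by the number of cores.
feats = np.vstack(Parallel(n_jobs=-1)(delayed(extract_features)(s) for s in X))
```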
1. Which representations (e.g., wavelets vs. Fourier) contribute most to periodic vs. aperiodic time series? (See the sketch after this list.)
2. How does CFIRE scale to ultra-long sequences (e.g., sensor data with >100K points)?
3. Can CFIRE’s features provide domain insights (e.g., dominant frequencies) that deep models lack? The authors need to enhance CFIRE’s interpretability.
4. Is there an information-theoretic justification for the optimal representation combination?
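To illustrate what item 1 is probing, a minimal sketch contrasting Fourier and wavelet behavior on a periodic versus an aperiodic (transient) signal; PyWavelets is assumed, and the signals are synthetic:

```python
# Illustrative contrast of Fourier vs. wavelet energy for a stationary tone
# and a localized transient (synthetic signals, PyWavelets assumed).
import numpy as np
import pywt

t = np.linspace(0, 1, 1024, endpoint=False)
periodic = np.sin(2 * np.pi * 8 * t)              # stationary tone
aperiodic = np.exp(-((t - 0.3) / 0.01) ** 2)      # localized transient

for name, x in [("periodic", periodic), ("aperiodic", aperiodic)]:
    spec = np.abs(np.fft.rfft(x))
    coeffs = pywt.wavedec(x, "db4", level=4)
    detail_energy = sum(float(c @ c) for c in coeffs[1:])   # energy in detail bands
    print(f"{name}: FFT peak={spec.max():.1f}, wavelet detail energy={detail_energy:.1f}")

# The tone concentrates its energy in one Fourier bin; the transient spreads
# across Fourier bins but stays compact in a few wavelet coefficients.
```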
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.