Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on October 30th, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 21st, 2025.
  • The first revision was submitted on December 8th, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on December 16th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

The authors have addressed all of the reviewers' comments. Based on their recommendations and my own assessment, I recommend accepting this manuscript for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Mehmet Cunkas, a PeerJ Section Editor covering this Section #]

Reviewer 1

Basic reporting

The revision meets the journal's standards in terms of basic reporting.

Experimental design

The experimental design is sound. The revised manuscript includes the extra experiments that were suggested.

Validity of the findings

The authors have provided more results to validate and support their proposed model. A detailed discussion of the study's limitations and future work has been included. No further comments from me.

Additional comments

The revised manuscript can be accepted for publication.

Reviewer 2

Basic reporting

The revised version is well-organized and clearly written. I don't have any other comments.

Experimental design

I have no further comments.

Validity of the findings

I have no further comments.

Additional comments

No further comments.

Version 0.1 (original submission)

Academic Editor

Major Revisions

Please address the comments from the reviewers and revise the manuscript accordingly. Consider performing more experiments and including statistical analysis by running the experiments multiple times, as suggested.

**PeerJ Staff Note**: Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 1

Basic reporting

- The logical flow between sections is clear, and the English writing is generally understandable and technically correct. However, improving the fluency and reducing redundancy in certain sentences would make the text more concise and professional.
- The Approach to Model Selection subsection clearly explains the motivation for using TabNSA. Nonetheless, the authors should emphasize the research gap more explicitly to better justify the novelty of their approach. The Introduction and Related Work sections could further clarify what limitations of existing tabular deep learning models the proposed method aims to overcome.

Experimental design

- In the proposed architecture, the rationale for integrating Native Sparse Attention (NSA) and TabMixer in a parallel rather than sequential configuration should be discussed in more depth.
- In the Feature Embedding subsection, what is the motivation for using a linear projection instead of an embedding lookup table? The authors should include relevant references or empirical justification for this choice.
- The authors may want to include additional comparative experiments, for example, with DANets and DeepGBM, which are strong baselines in tabular deep learning.
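To make the embedding question above concrete, here is a minimal NumPy sketch contrasting the two strategies the comment refers to: a per-feature linear projection of continuous values versus a per-feature embedding lookup table for categorical indices. All dimensions, weights, and inputs are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, d_model = 4, 8

# (a) Linear projection: each continuous feature value scales a learned
#     vector (plus a per-feature bias), producing one token per feature.
W = rng.normal(size=(n_features, d_model))  # one projection vector per feature
b = rng.normal(size=(n_features, d_model))  # one bias vector per feature
x = np.array([0.5, -1.2, 3.0, 0.0])         # one row of continuous features
proj_tokens = x[:, None] * W + b            # shape (n_features, d_model)

# (b) Embedding lookup: each categorical feature indexes a learned table,
#     so only discrete category IDs can be embedded this way.
vocab_sizes = [3, 5, 2, 4]
tables = [rng.normal(size=(v, d_model)) for v in vocab_sizes]
cat = [1, 4, 0, 2]                          # one row of category indices
lookup_tokens = np.stack([tables[j][cat[j]] for j in range(n_features)])

# Both strategies yield the same token shape downstream.
assert proj_tokens.shape == lookup_tokens.shape == (n_features, d_model)
```

The practical difference is that (a) handles continuous inputs directly, while (b) requires discretization or categorical features; a justification in the paper could cite which property motivated the choice.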

Validity of the findings

- The description of the TabMixer module should specify key implementation details such as layer dimensions and the number of main parameters.
- The paper would benefit from providing more insight into how NSA contributes to capturing long-range feature dependencies compared to other attention-based mechanisms.
- The authors should run the experiments multiple times and report the mean and standard deviation of the results.
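The reporting requested in the last point above is straightforward; a minimal sketch with hypothetical accuracies from five repeated runs (e.g., different random seeds):

```python
import statistics

# Hypothetical accuracies from repeating the same experiment with 5 seeds.
runs = [0.874, 0.881, 0.869, 0.878, 0.876]

mean = statistics.mean(runs)
std = statistics.stdev(runs)  # sample standard deviation (n - 1 denominator)
print(f"accuracy: {mean:.3f} ± {std:.3f}")  # → accuracy: 0.876 ± 0.005
```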

Additional comments

It is not clear whether NSA is an existing architecture cited from previous work or a variant newly proposed in this paper.

Reviewer 2

Basic reporting

+ The paper is mostly clear, but the authors should improve the language to make it more fluent and professional.
+ The Introduction has a logical structure and provides enough background, but the paragraphs are too long. It would be better to divide them into shorter parts to make the text easier to read.
+ The related work is relevant and well cited, but the paper should explain more clearly what is new in this study compared to previous research. The contribution of the proposed model needs to be emphasized.
+ Figure 5 only has the short caption "Model Architecture", which is not enough. The figure should include more visual details and explanations of the NSA and TabMixer blocks so readers can better understand the model design.
+ Overall, the paper follows the general structure of PeerJ, but the writing, organization, and figure captions should be improved for clarity and readability.

Experimental design

+ The study fits well with the aims and scope of the journal. The experiments are reasonable, but some parts of the method need more details to make the study reproducible.
+ In the Fusion and Aggregation part, the authors use element-wise summation. Please explain why this method was chosen. Have you tried other strategies, such as concatenation or attention-based fusion?
+ The paper says that grid search was used to tune the ML models. However, the parameters and value ranges used for tuning are not described. Please list the tuned parameters and their selected values.
+ Other tabular deep learning methods, e.g., TabTransformer, could be included in the comparison.
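To clarify what is at stake in the fusion question above, here is a minimal NumPy sketch contrasting element-wise summation (as described in the paper) with concatenation. The branch outputs and dimensions are hypothetical placeholders, not the authors' actual tensors.

```python
import numpy as np

rng = np.random.default_rng(1)
batch, d = 2, 8
h_nsa = rng.normal(size=(batch, d))    # hypothetical NSA branch output
h_mixer = rng.normal(size=(batch, d))  # hypothetical TabMixer branch output

# Element-wise summation: preserves width d, adds no parameters,
# but both branches must produce same-shaped outputs.
fused_sum = h_nsa + h_mixer                            # (batch, d)

# Concatenation: doubles the width, so any subsequent layer must
# accept 2*d inputs (more parameters, but no information is merged away).
fused_cat = np.concatenate([h_nsa, h_mixer], axis=-1)  # (batch, 2*d)

assert fused_sum.shape == (batch, d)
assert fused_cat.shape == (batch, 2 * d)
```

An ablation comparing these (and an attention- or gate-based weighting) would directly answer the reviewer's question about why summation was chosen.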

Validity of the findings

+ The results look good, but more information is needed to check their reliability and real-world applicability. Please include the computation time or training cost of the proposed model, and briefly discuss how it could be applied in real educational systems or adaptive learning environments.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.