Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on April 19th, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 14th, 2025.
  • The first revision was submitted on July 25th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 6th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

Reviewer 1 is satisfied with the changes, and both of us think this version is ready for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Claudio Ardagna, a PeerJ Section Editor covering this Section #]

Reviewer 1

Basic reporting

-

Experimental design

-

Validity of the findings

-

Additional comments

Thanks for the revision; the changes and answers are sufficient.


Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Reviewer 1

Basic reporting

-

Experimental design

-

Validity of the findings

-

Additional comments

Peer Review Report for PeerJ Computer Science
(Optimizing lexical fitness assessment in L2 Chinese reading texts)

1. This study aims to objectively evaluate the grading of Chinese reading texts for second language learners. It demonstrates that lexical fitness-based features enhance classification performance, with the best results achieved by a random forest model, and highlights the importance of integrating lexical form, meaning, and syntax for optimal assessment.

2. The introduction only briefly mentions the importance of the subject and the contributions of the study. It is also strongly recommended that the literature review be made more detailed.

3. The amount, type, and distribution of the dataset used in the study are sufficient for its scope. In addition, the features used for assessing lexical fitness are well suited to the task.

4. Traditional models such as SVM and KNN are used, and a transformer-based model is included alongside them. Since the literature offers many other classification models for this problem, the paper should clearly state how these particular models were selected, whether alternative experiments were conducted, and wherein the originality of the choice lies.

5. Although basic evaluation metrics are reported, the set of metrics should be expanded for a more thorough analysis of the results. To this end, the full range of classification metrics established in the literature should be carefully examined.

In conclusion, the study is interesting and has the potential to contribute to the literature. However, the points above should be given special attention.


Reviewer 2

Basic reporting

-

Experimental design

-

Validity of the findings

-

Additional comments

The paper has the potential to be an interesting proof of concept of how machine learning techniques can be applied to assess text difficulty for L2 Chinese learners. However, upon further reading, the goal of the study becomes unclear. From what I could understand, machine learning was applied to predict whether a particular text falls under the correct HSK band, and six different classification models were trained and compared. However, the paper contains many incoherencies and inconsistencies, the most important of which concerns how the construct of "lexical fitness" was operationalised. For example, it was not clear whether the "73 lexical fitness features" were predefined or extracted through machine learning. While the author provided some explanation of the use of multi-head attention for feature extraction, it was not clear what the machine learning architecture was, how "pretrained word embeddings" fit into the training pipeline, or what the training examples were. Some passages were also incoherent (e.g., it is unclear what lines 47-48 mean, and line 305, "the model predicts HSK1 as HSK6 more correctly than HSK2", appears to be an error).


All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.