Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on March 27th, 2025 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on August 5th, 2025.
  • The first revision was submitted on September 8th, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 7th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

The authors have addressed all comments. The paper may be accepted.

[# PeerJ Staff Note - this decision was reviewed and approved by Shawn Gomez, a PeerJ Section Editor covering this Section #]

Reviewer 1

Basic reporting

The suggestion was incorporated.

Experimental design

As per the suggestion, the authors improved the evaluation.

Validity of the findings

Suggestions incorporated.

Reviewer 2

Basic reporting

-

Experimental design

-

Validity of the findings

-

Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 1

Basic reporting

1. Some key references (e.g., Li et al. 2021; Zhang et al. 2022; Zhao et al. 2024) do not provide enough information, such as a DOI or the proceedings or journal title, which is important for traceability and reproducibility.

2. There is no explicit explanation of how the raw data or MEG-1.0 dataset can be accessed for study replication.

3. Some paragraphs contain only one sentence, which falls short of academic writing style.

4. Related work should stand as its own section, and the material in Section 1.1 would be better introduced there.

5. The structure is nested two subsection levels deep. It should be revised and simplified; we advise using only sections and a single level of subsections.

Experimental design

1. The selection of the GAN and DM models appears to be based solely on visual observation and qualitative description, not on quantitative metrics (such as FID, SSIM, or LPIPS); a sketch of how such metrics can be computed follows this list.

2. There is no mention of inter-annotator validation (i.e., whether humans assessed the images). This is required unless the dataset was obtained from a previously published public dataset.

3. It is not explained whether an ethics review was conducted or whether ethical limitations apply, which matters especially for culturally sensitive minority writing systems. Please explain this.
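
The metrics named in point 1 are straightforward to compute with off-the-shelf tooling. Below is a minimal sketch, assuming torchmetrics (with its torch-fidelity backend for FID) is installed; the tensors real_imgs and fake_imgs are illustrative placeholders, not the authors' data.

```python
# Minimal sketch of the suggested quantitative metrics (FID, SSIM, LPIPS)
# using torchmetrics. Images are float tensors in [0, 1] of shape
# (N, 3, H, W); the two batches below are random placeholders.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image import StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

real_imgs = torch.rand(64, 3, 128, 128)  # placeholder: reference glyphs
fake_imgs = torch.rand(64, 3, 128, 128)  # placeholder: generated glyphs

# FID compares Inception feature statistics of the two sets; in practice
# hundreds of images per set are needed for a stable estimate.
fid = FrechetInceptionDistance(feature=2048, normalize=True)
fid.update(real_imgs, real=True)
fid.update(fake_imgs, real=False)
print("FID:", fid.compute().item())

# SSIM measures structural similarity between aligned image pairs.
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
print("SSIM:", ssim(fake_imgs, real_imgs).item())

# LPIPS expects inputs scaled to [-1, 1]; lower is perceptually closer.
lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex")
print("LPIPS:", lpips(fake_imgs * 2 - 1, real_imgs * 2 - 1).item())
```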

Validity of the findings

1. There are no comparative studies against simple baseline models (e.g., variational autoencoders or traditional skeleton-based methods); a minimal baseline sketch follows this list.

2. Lack of testing with other minority writing systems beyond the Mongolian example.

3. The study is limited to technical aspects, with no assessment of real impact on end-user communities and no field studies. Please discuss this.
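
For point 1, even a very small variational autoencoder would give the comparison context asked for. The sketch below is a minimal PyTorch baseline under assumed 1-channel 64x64 glyph images in [0, 1]; the architecture and sizes are illustrative, not the authors' method.

```python
# Minimal convolutional VAE sketch as a non-GAN/non-DM baseline for
# glyph generation. All layer sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlyphVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        x_hat = self.dec(self.fc_dec(z).view(-1, 64, 16, 16))
        return x_hat, mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = GlyphVAE()
x = torch.rand(8, 1, 64, 64)  # placeholder batch of glyph images
x_hat, mu, logvar = model(x)
loss = vae_loss(x_hat, x, mu, logvar)
```

Samples from such a baseline are typically blurrier than GAN or DM output, which is exactly why it makes a useful lower bound in the comparison.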

Additional comments

1. Add standard quantitative metrics to measure the quality of model image output.

2. Add non-GAN/non-DM baselines to provide greater context for comparison.

3. Add more discussion of the potential bias of AI models when trained on non-minority data to generate minority fonts.

4. The visual evaluation method used is subjective; describe the evaluation protocol in more detail.


Reviewer 2

Basic reporting

Some parts of the methods section, especially Section 2.1.2, are too technical and detailed. Simplifying these parts would make it easier for general readers to follow the process.

Experimental design

The paper does not include common evaluation scores like FID or SSIM, and there is no human feedback. Adding even basic user testing would make the results more convincing.

Validity of the findings

A limitation is the absence of external validation: no user studies or feedback from native script users or font designers to confirm that the generated glyphs are usable or accurate.

Reviewer 3

Basic reporting

The article is well-structured and easy to follow. The lack of digital support for minority languages is undoubtedly a genuine concern. It would help to add more insight into why this is an issue, what its impact is, and on whom. Adding more quantitative information would help demonstrate the criticality of the study.

Experimental design

In the methods section, include and define evaluation metrics. Detail the evaluation criteria for each model type. Compare metrics and discuss the merit of using one over the other.
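
For reference, the standard definition of FID, one metric the authors could adopt, fits Gaussians to Inception embeddings of the real and generated image sets and measures the distance between them (lower is better):

$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)$$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of the embedded real and generated images, respectively.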

Validity of the findings

The evaluation methods need the following improvements:
1. Add quantitative analysis across both GAN and DM models.
2. Quantitatively describe the glyph quality.
3. Analyze more language scripts with different combinations of features, such as structure and directionality.

The conclusion can be more concise. Elaborate on how the study can be applied in related fields; as of this version, there are no details on applicability.

Expand limitations to include other models apart from GANs & DMs.

Additional comments

Wherever possible, shorten long sentences, as they are hard to follow. For instance, the following sentence can be split for ease of understanding:

'Due to the fact that the quality of the glyph images generated by GANs is not up to expectations, the speed of DM generation is slow, and its style characteristics are weak, and the fixed model of GANs combined with DM is not flexible enough, an effective strategy for rapidly expanding high-quality datasets is proposed.'

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.