Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on August 25th, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on September 15th, 2025.
  • The first revision was submitted on October 21st, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 28th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

The authors have addressed all of the reviewers' comments. Based on their recommendations and my own reading, I suggest accepting this manuscript for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Mehmet Cunkas, a PeerJ Section Editor covering this Section #]

The SE commented:
> It would be good to add a paragraph highlighting this study at the end of the Introduction, in terms of article writing style.

Reviewer 1

Basic reporting

The revised manuscript meets the journal's standards in terms of reporting.

Experimental design

The authors have conducted additional experiments to address my concerns, and I am happy with the results.

Validity of the findings

No further comments.

Additional comments

The manuscript has been improved, and it can be accepted for publication.


Reviewer 2

Basic reporting

The revised version is well-organized and clearly written. I don't have any other comments.

Experimental design

I have no further comments.

Validity of the findings

I have no further comments.

Additional comments

No further comments.


Version 0.1 (original submission)

Academic Editor

Major Revisions

Please consider the comments from the reviewers and revise the manuscript accordingly.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 1

Basic reporting

The paper is written in English with clear and coherent language that meets academic standards. The introduction and related work provide good background and context, with relevant and up-to-date references. The overall structure of the paper follows common conventions for an application research article, which makes it easy to follow. Experimental results are presented in a systematic way with tables and visual figures to support the findings. Mathematical formulas are expressed accurately, with consistent notation that helps clarify the process and methodology.

Experimental design

The authors should provide additional information and clarifications by addressing the comments listed below, so that the manuscript can be improved further:
- Although the model is designed to learn from limited data, the paper does not discuss in detail the risks of using only one or a few input images. Could the model simply “memorize” the inputs instead of learning general patterns? This issue needs more explanation.
- The progressive scale-wise training structure is logical, but it requires high computational cost. The authors should provide specific information such as the average training time, so that readers can better understand the practical efficiency of the method.

Validity of the findings

- For evaluation, the paper mainly reports FID and the Diversity Score. Adding more metrics, such as the Inception Score (IS), could provide a more complete picture of the model's performance (see the sketch after this list).
- In the section about results, it would be clearer to show direct visual comparisons between the outputs of the proposed model and those of baseline methods, using the same input images. This would help readers easily observe the differences.
- It would be better if the authors included a direct comparison with other related approaches (for example, SinGAN) to highlight the novelty of their work.
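To make the metric suggestion above concrete, here is a minimal sketch of how FID and the Inception Score might be computed side by side, assuming the torchmetrics package; the library choice and the random placeholder tensors are illustrative assumptions, not part of the manuscript under review:

```python
# Minimal sketch: computing FID and Inception Score with torchmetrics.
# Assumes `pip install torchmetrics torch-fidelity`; the random tensors
# below are placeholders for batches of real/generated watercolor images.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

fid = FrechetInceptionDistance(feature=2048)  # InceptionV3 pooled features
inception = InceptionScore()

# Images must be uint8 tensors of shape (N, 3, H, W) with values in [0, 255].
real_images = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(f"FID: {fid.compute().item():.2f}")  # lower is better

inception.update(fake_images)
is_mean, is_std = inception.compute()  # higher mean is better
print(f"IS: {is_mean.item():.2f} ± {is_std.item():.2f}")
```

Both metrics rely on the same InceptionV3 backbone, so reporting them together would cost the authors little extra effort.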


Reviewer 2

Basic reporting

The paper introduces a hierarchical multi-scale generative framework for producing high-quality watercolor landscape paintings. Overall, the paper has a relatively clear structure, covering all the major sections (Introduction, Model Selection Rationale, Comparison with Related Research, Limitations, etc.).
Some points can be improved:
+ The Introduction/Related Work section provides useful cultural, historical, and aesthetic context for Chinese watercolor painting, while also highlighting the challenges of automating the creative process. Nevertheless, the authors fail to sufficiently address the research gap compared to prior work, such as the use of GANs and Style Transfer for Chinese watercolor painting. A more explicit identification of what makes their work novel in contrast to these prior approaches would strengthen the motivation.
+ The Related Work section is relatively comprehensive, covering various branches of research from GANs, CycleGAN, Pix2Pix to Neural Style Transfer. However, the paper could further improve by including more recent studies on generative modeling, especially diffusion models, to make the literature review more comprehensive and current.
+ The experimental results are presented clearly, with supporting tables and illustrations. However, the current discussion section is more descriptive than analytical. The paper would be more convincing if it included deeper insights into: (1) why the proposed model outperforms the baselines, (2) what practical implications the results hold for cultural preservation or assisting artists, and (3) how the framework could be extended to other artistic styles.

Experimental design

The experimental design is relatively complete, with appropriate baselines and comparisons to related studies.
There are some points that can be improved:
+ The dataset description lacks statistics on the exact number of training and testing samples, as well as their distribution. Without this information, it is difficult to assess the robustness of the model.
+ The paper does not provide sufficient information about the model configuration, such as the architecture details of each layer, training parameters, and optimization settings. Including this would make the work more transparent and replicable.

Validity of the findings

+ The Limitations section mentions some weaknesses, but the discussion could go deeper into practical applications. For instance: How could the model support cultural preservation or digital archiving of ancient paintings? Could the generated outputs be useful for restoration tasks or for creating supplementary training data for art historians? A more concrete discussion here would highlight the real-world value of the approach.
+ The proposed model mainly addresses the challenge of limited training data, but does not provide mechanisms for controlled generation based on input conditions or user requirements. This makes the framework less flexible for real-world use. The authors could suggest future directions such as integrating text-to-image models or conditional GANs. Even a brief mention of these directions would significantly increase the scholarly impact and show awareness of ongoing trends in generative modeling.


All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.