All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Thank you for your valuable contributions.
[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]
The paper is written in clear and correct English. It gives enough background and references to show how the study fits into the field. The structure is logical, with useful figures, tables, and shared data. The work is complete, presents all results for its aims, and is suitable for publication.
The study presents original primary research within the journal’s aims and scope. The research question is clearly defined, relevant, and addresses a meaningful knowledge gap. The investigation was carried out rigorously to high technical and ethical standards, and the methods are described in sufficient detail to allow transferability to other study sites.
The study is evaluated on the strength of its methods and data, not on subjective judgments of impact or novelty. Replication is welcome where a clear rationale and benefit to the literature are shown. All underlying data have been provided; they are robust, statistically sound, and well-controlled. The conclusions are clearly stated, directly linked to the research question, and supported by the results.
Dear Author,
Thank you for your clarifications and revisions. The manuscript is now significantly improved and scientifically sound.
Kindest regards
Good work on refining the manuscript; it is now fully suitable for publication.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.
**Language Note:** When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at [email protected] for pricing (be sure to provide your manuscript number and title). – PeerJ Staff
In my opinion, the subject of leaf chlorophyll content and fractional vegetation cover is highly relevant in remote sensing research. The use of machine learning algorithms and UAV data creates new opportunities for broader applications in environmental studies. Nevertheless, I have several comments, particularly concerning the Abstract, Introduction, Methods, and Discussion sections. The Results section, in contrast, is well written and properly prepared.
I have the impression that the authors focused more on the technical aspects of the analysis rather than on the scientific aspects of the modeling. In my comments, I have pointed out several elements that should be addressed in the discussion, such as the role of specific spectral ranges in detecting and responding to phenological changes, as well as the importance of post-analysis using machine learning algorithms (e.g., gain parameter, Mean Decrease Accuracy [MDA], or the Boruta algorithm) to assess feature importance in the modeling process.
Regarding the Experimental Design, I find the post-analysis of the machine learning (ML) modeling to be lacking in a broader context. Specifically, it would be valuable to go beyond assessing autocorrelation and instead explore the role of feature importance in model development. For example, gain parameters, as illustrated in Figure 6 of the study [https://doi.org/10.1016/j.ecoinf.2024.102603], or Mean Decrease Accuracy (MDA), as shown in Figure 5 of the paper [https://doi.org/10.1038/s41598-024-83699-4], offer insights into the relevance of input variables. I recommend citing these articles in the Discussion section as part of a broader discourse on the perspectives of your work. Although these studies focus on grasslands and satellite data, the methodology is universal and should be equally applicable to maize, particularly when using UAV imagery and ML algorithms. Your Section 5.2, entitled The Impact of VI and TF on Crop Parameter Estimation, should specifically include this type of analysis. I do not require these analyses to be performed, but the topic should be covered as a broader scientific discussion in the Discussion section, with reference to the articles I have mentioned.
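For illustration only, a minimal sketch of such a feature-importance check is given below. It assumes a scikit-learn random-forest model with placeholder feature names and synthetic data, and uses permutation importance as an MDA-style measure; it is not the authors' pipeline.

```python
# Hypothetical sketch: permutation importance (an MDA-style measure) for a
# random-forest model trained on vegetation indices and texture features.
# Feature names and data below are placeholders, not the manuscript's inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["NDVI", "NDRE", "OSAVI", "GNDVI", "MCARI", "GLCM_contrast"]  # assumed inputs
X = rng.random((200, len(feature_names)))                    # stand-in for per-plot UAV features
y = 20 + 30 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 200)  # stand-in for measured LCC

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Permutation importance: drop in test-set score when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
    print(f"{name:15s} {mean_drop:.4f}")
```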
In the Introduction, I found the role and significance of UAVs in this type of research insufficiently addressed. For example, the study [https://doi.org/10.1109/IGARSS47720.2021.9554744] provides an excellent example of using Copernicus satellite products to compare crop growth conditions for maize and winter wheat in Poland and South Africa. You should clearly highlight the advantages of UAVs in comparison to satellite data. Additionally, it is important to discuss the trade-offs, both pros and cons, of using UAVs relative to Copernicus satellite products, as well as the potential for transferring your method across different platforms and scales. Please also consider the reference [https://doi.org/10.2478/mgrsd-2020-0029], which demonstrates that vegetation indices derived from Copernicus satellite data for maize and sugar beet, such as the SIPI index, effectively tracked changes associated with plant aging.
I am not entirely clear on how your validation was conducted. From the perspective of the work, could you obtain ground-truth data and demonstrate whether the variability in estimation differs across the various maize growth stages? This approach has been shown to be important in modeling crop condition for wheat, and it applies regardless of crop type, because variability between phenological phases is critical, as discussed in the article [https://doi.org/10.34867/gi.2017.2]. Please cite this work, as it highlights the importance of validating findings with field data to ensure a comprehensive approach to modeling with machine learning algorithms.
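As a purely illustrative sketch of what such a stage-wise validation might look like (the stage labels, column names, and synthetic values below are placeholders, not the authors' data):

```python
# Hypothetical sketch: stage-wise comparison of predicted and measured LCC.
# Stage labels and values are synthetic stand-ins for ground-truth plots.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "stage": ["V6"] * 30 + ["V12"] * 30 + ["R1"] * 30,          # assumed phenological labels
    "lcc_measured": rng.uniform(30, 60, 90),                     # field measurements
})
df["lcc_predicted"] = df["lcc_measured"] + rng.normal(0, 3, 90)  # stand-in model output

# Per-stage error statistics reveal whether model skill drifts with phenology.
for stage, g in df.groupby("stage"):
    err = g["lcc_predicted"] - g["lcc_measured"]
    rmse = np.sqrt((err ** 2).mean())
    print(f"{stage}: n={len(g)}, RMSE={rmse:.2f}, bias={err.mean():+.2f}")
```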
One final aspect regarding UAVs is the significant limitation caused by soil noise, which was not mentioned in the discussion. There is an opportunity to combine UAV data with radar data to estimate additional biophysical parameters for maize, such as soil moisture and surface roughness, as demonstrated in the article [https://doi.org/10.2478/v10177-012-0013-7]. This important possibility should also be addressed in the Discussion section.
**PeerJ Staff Note:** PeerJ's policy is that any additional references suggested during peer review should only be included if the authors find them relevant and useful.
In the Abstract and Introduction, you should clearly highlight what is new and original in your approach to mapping LCC and FVC using UAV data and machine learning algorithms. There are numerous similar studies that apply machine learning and satellite or aerial imagery to estimate variables related to maize condition and growth. What specific innovations does your study introduce?
In Table 2, titled Vegetation Indices, you should provide the equations along with the specific spectral bands (with their bandwidths) rather than just abbreviations such as NIR, RE, and R. Please refer to Table 2 in the research paper [https://doi.org/10.3390/rs15092392], which lists the Sentinel-2 spectral bands dedicated to vegetation indices used in that study. You should also clearly justify why only five vegetation indices (VIs) were selected for your analysis, while the referenced paper incorporated 40 indices to better capture the phenological stages. Furthermore, the role of spectral bands, particularly red edge, red, and near-infrared, should be clearly emphasized in the Discussion section. You are encouraged to draw on the results from the referenced article, especially in the context of mapping chlorophyll fluorescence (ChF), which is presented as an alternative and more sensitive indicator of phenological stages compared to LCC.
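To make the request concrete, a short sketch of how two indices could be written out against explicit band centres follows. The wavelengths shown are typical narrow-band UAV multispectral values and are assumptions, not the sensor specification from the manuscript.

```python
# Hypothetical sketch: vegetation indices expressed against explicit band centres.
# Assumed band centres: R ~668 nm, RE ~717 nm, NIR ~842 nm (typical UAV camera bands).
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R)"""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """NDRE = (NIR - RE) / (NIR + RE)"""
    return (nir - red_edge) / (nir + red_edge)

red      = np.array([0.05, 0.07])   # example reflectance, band centred at ~668 nm
red_edge = np.array([0.20, 0.25])   # ~717 nm
nir      = np.array([0.45, 0.50])   # ~842 nm
print(ndvi(nir, red), ndre(nir, red_edge))
```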
What I found missing in the Discussion section is a broader perspective on the potential of alternative vegetation indices (VIs) that are more sensitive to stress-inducing factors such as temperature and plant drought. I recommend referring to the article [https://doi.org/10.3390/plants13162319], which demonstrates that indices like the Normalized Difference Drought Index (NDDI) can be applied using both satellite data and UAVs equipped with VIS-SWIR sensors to monitor plant stress. This is particularly important in the context of mapping maize phenological stages, which are highly susceptible to drought, an increasingly frequent condition across Europe. Early detection of stress signals is critical. Moreover, plant stress directly influences phenological development, and therefore, the sensitivity of VIs to physiological responses, such as photosynthetic activity measured by chlorophyll fluorescence (e.g., Fv/Fm), is key in modeling. This is exemplified by early signs of rust development in grasses, as shown in the study [https://doi.org/10.1016/j.ecoinf.2024.102603].
**PeerJ Staff Note:** PeerJ's policy is that any additional references suggested during peer review should only be included if the authors find them relevant and useful.
I believe the list of references is too limited and disproportionately focused on studies conducted in China. Expanding the scope to include research from other regions, such as Europe and North America, would provide valuable context and contribute to a more comprehensive and internationally balanced understanding of the research problem.
I strongly suggest including a clear roadmap that outlines the five main steps undertaken within the framework of your research study. This will help readers better understand the structure and flow of your methodology.
The manuscript is generally well-written, clear, and structured according to standard scientific format. The introduction provides good context, and relevant literature is appropriately cited. The figures and tables are of high quality and effectively support the analysis. Still, some revisions need to be done.
1. While overall readable, some sections, especially the Methods and Discussion, contain awkward phrasing (e.g., “the estimation accuracy...was compared” could be reworded for clarity). A light language review by a native or fluent English speaker is recommended.
2. Figures 3–7 could benefit from more informative captions. Currently, they lack interpretive comments that guide the reader in understanding trends and insights.
3. The stacking ensemble model was shown to outperform others, but the hierarchical architecture and cross-validation strategy used to avoid overfitting are not clearly described. Please provide more details (see the illustrative sketch after this list), including:
• Which models were used as base learners?
• What validation framework (e.g., k-fold) was used?
• How was overfitting checked?
4. The statistical results are compelling, especially for LCC (R² = 0.945), and reasonably good for FVC (R² = 0.645). The comparison of input types (VI, TF, VI+TF) adds robustness to the conclusions. However, the discussion attributes the underestimation in the S2 period to a possible “outlier” without presenting any supporting evidence. Please either remove this speculation or support it with visual or statistical evidence (e.g., a residual plot or leverage analysis).
5. While the ensemble model performs better than others, an R² of 0.645 is still modest. The discussion should explicitly acknowledge this limitation and explore potential improvements (e.g., adding structural features, incorporating meteorological data).
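As a reference point for item 3 above, the sketch below shows the level of methodological detail being requested: named base learners, a meta-learner, and an explicit k-fold scheme, with the gap between training and test R² as one simple overfitting check. The specific learners, hyperparameters, and data here are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch only: a stacking ensemble with explicitly named base learners
# and an explicit cross-validation scheme. Learners, hyperparameters, and the
# synthetic data are assumptions, not the manuscript's configuration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((150, 8))                                   # stand-in for VI + TF features
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 150)        # stand-in for LCC or FVC

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("gbm", GradientBoostingRegressor(random_state=0)),
                ("svr", SVR(C=10))],
    final_estimator=Ridge(),
    cv=5,  # out-of-fold base-learner predictions feed the meta-learner
)

# Outer 5-fold CV; the gap between training R2 and mean test R2 is a simple overfitting check.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
test_r2 = cross_val_score(stack, X, y, cv=outer_cv, scoring="r2")
stack.fit(X, y)
print("mean test R2:", round(float(test_r2.mean()), 3), "| train R2:", round(stack.score(X, y), 3))
```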
To strengthen the discussion section and contextualize the results, the authors might compare their findings with some recent studies. For example:
• https://www.mdpi.com/2072-4292/16/12/2058
• https://www.sciencedirect.com/science/article/pii/S2667010025000344
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.
The LCC and FVC mapping outputs (Figure 7) are visually informative. However, the manuscript would be improved by a brief quantitative evaluation of the spatial prediction accuracy (e.g., RMSE maps or point-wise validation).
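For example, a point-wise check could be as simple as the sketch below, where predictions would be sampled from the Figure 7 rasters at the validation plot coordinates; the values shown are placeholders, not the authors' data.

```python
# Hypothetical sketch: point-wise evaluation of a predicted LCC/FVC map against
# field plots. Values are placeholders; in practice the predictions would be
# sampled from the prediction rasters at the plot coordinates.
import numpy as np

pred_at_points = np.array([41.2, 38.5, 44.1, 50.3, 47.8])   # values read from the map
obs_at_points  = np.array([43.0, 37.9, 45.5, 49.1, 46.2])   # field measurements

residuals = pred_at_points - obs_at_points
rmse = np.sqrt(np.mean(residuals ** 2))
bias = residuals.mean()
print(f"point-wise RMSE = {rmse:.2f}, bias = {bias:+.2f}")
```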
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.