Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on March 31st, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on May 14th, 2025.
  • The first revision was submitted on June 11th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on July 21st, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

Dear Author,

Your paper has been revised and has been accepted for publication in PeerJ Computer Science. Thank you for your fine contribution.

[# PeerJ Staff Note - this decision was reviewed and approved by Claudio Ardagna, a PeerJ Section Editor covering this Section #]

Reviewer 1

Basic reporting

The authors replied to all my previous comments and recommendations. All the responses were convincing.

Experimental design

There are no new issues of this kind in this version.

Validity of the findings

-


Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Reviewer 1

Basic reporting

The goal of the paper was to prepare a method for time series forecasting of data related to renewable energy (i.e., wind speed and solar irradiation). The authors used a Frequency Enhanced Decomposed Transformer and Attention-LSTM combined with Seasonal and Trend Decomposition Using Loess. The proposed solution is sound and the results given are promising.

Experimental design

The proposed method is valid and reflects up-to-date methods used for similar problems. However, I strongly advise improving the description of the method (the proposed framework section) so that the approach is easier to reproduce.
1) I recommend starting the section with an overall description of the method. Currently this part is at the end of the section.
2) I advise describing the LOESS method in the manuscript.
3) The methods used for preprocessing the data etc. (currently in the experimental setup and results) should be included in the proposed framework section.
4) It is stated that the input of the model has a size of 8760x5, which corresponds to 5 years. However, later in the manuscript it is stated that 4 years are used for training (2019-2023). This aspect should be clarified.
5) What is the reason for gaps in the data?

Validity of the findings

The results given are promising, and the proposed method performs better than the compared methods.

Additional comments

Some minor issues:
1) The combined acronym STL-LSTM-FED is used in the abstract without being introduced.
2) lines 102-103 - some bold characters are used.
3) Related Works - since the authors consider long-term predictions, I advise describing the related works with respect to how long the predictions were. I recommend including a table that summarizes this.
4) lines 169-172 and 226-232 - I recommend including an enumeration here.
5) equation 5: does tau represent time? In the next equations time is denoted as t. I advise using one global symbol for time.
6) figures 7 and 8: is there any difference between these architectures and those in fig. 6? If not, I think figs. 7 and 8 can be omitted.
7) line 399 - It looks like the equation is in superscript.
8) If possible, I advise improving the quality of the figures. Especially figs. 2 and 3 are a little bit blurry.
9) Figs. 4, 5 and 9: the figures may be unclear for English-speaking readers. I recommend changing all labels to English.


Reviewer 2

Basic reporting

The clarity and professionalism of the English language are generally acceptable for the technical content presented. The introduction provides context, and the literature review references relevant works [3, 13-20, 27-33 etc.].
However, a significant issue exists with the figures. Several figures within the manuscript, including Figures 1-11, contain text and labels that are written in Chinese. While captions are in English, the content of the figures themselves is not. This hinders understanding for an international audience and needs to be corrected.
Figure 10 and Figure 11, showing the comparison of true vs. predicted values for radiation intensity and wind speed respectively, appear to lack sufficient clarity, potentially due to the long time series displayed. To improve readability and allow reviewers/readers to better assess the model's performance in capturing detailed fluctuations, please consider providing zoomed-in sections for specific time periods in both Figure 10 and Figure 11.
The structure of the manuscript generally conforms to a standard research paper format.

Experimental design

The research question focusing on long-term forecasting of wind-solar energy using deep learning is relevant and meaningful. The proposed STL-LSTM-FED method aims to fill a gap by combining data decomposition with hybrid deep learning models. The methods are described, including the overall framework, LSTM, Attention Mechanism, FEDformer, and the specific STL-LSTM-FED implementation. Data collection from Auckland, New Zealand (2019-2023) is mentioned.
- The manuscript describes using a 24-hour moving average to fill gaps up to 48 hours and excluding segments with missing data beyond that. While this approach is straightforward, it may excessively smooth the inherently volatile wind and solar data, potentially masking important short-term variability. Excluding longer gaps could also discard rare but meaningful patterns. It remains unclear whether the impact of these preprocessing choices was evaluated, or whether alternative, less aggressive methods were considered. Justification for these specific choices (e.g., why 24-hour moving average, why the 48-hour threshold) would strengthen the methodology description.
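For illustration, the gap-filling rule the manuscript describes could be sketched roughly as follows. This is a minimal stand-in, not the authors' actual code; the function name, the use of `None` for missing hours, and the trailing-window reading of "24-hour moving average" are all assumptions.

```python
def fill_gaps(series, window=24, max_gap=48):
    """Fill runs of missing values (None) no longer than max_gap hours
    with the mean of the preceding `window` observed values.
    Return None if any gap exceeds max_gap (segment would be excluded)."""
    filled = list(series)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            j = i
            while j < len(filled) and filled[j] is None:
                j += 1                     # find the end of this gap
            if j - i > max_gap:
                return None                # gap too long: exclude segment
            history = [v for v in filled[max(0, i - window):i] if v is not None]
            fill_value = sum(history) / len(history) if history else 0.0
            for k in range(i, j):
                filled[k] = fill_value
            i = j
        else:
            i += 1
    return filled

# A 3-hour gap is filled with the mean of the preceding observations
data = [5.0, 6.0, 7.0, None, None, None, 8.0]
print(fill_gaps(data))  # → [5.0, 6.0, 7.0, 6.0, 6.0, 6.0, 8.0]
```

As the sketch makes visible, every hour inside a gap receives the same averaged value, which is exactly the smoothing-of-volatility concern raised above.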
- The method applies STL decomposition with a fixed annual period (8760 hours) for both wind speed and solar radiation. However, these variables may exhibit different dominant cyclical patterns - e.g., solar radiation often follows strong daily and seasonal cycles, while wind speed can vary more irregularly. The manuscript does not justify why a single fixed decomposition period is appropriate for both data types. It would be helpful to clarify whether other periods (e.g., weekly or monthly) were considered, or if adaptive decomposition methods were explored to better capture the specific cyclic behavior of each variable. Furthermore, beyond STL, the study also uses the same hybrid model to forecast both solar radiation and wind speed. It is unclear whether the model accounts for the difference between the two variables, or whether any source-specific preprocessing or architecture tuning was applied. Clarifying this point would help assess the model's generalizability and robustness across different types of renewable energy data.
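To make the role of the period parameter concrete, a naive classical additive decomposition (a far simpler stand-in for STL, shown only to illustrate how the chosen period shapes the trend/seasonal split; it is not the method used in the paper) can be written as:

```python
def classical_decompose(x, period):
    """Naive additive decomposition: trend = centered moving average spanning
    one period; seasonal = mean detrended value at each phase of the cycle.
    Choosing a different `period` yields an entirely different split."""
    n = len(x)
    half = period // 2
    trend = [None] * n                      # undefined at the boundaries
    for i in range(half, n - half):
        trend[i] = sum(x[i - half:i + half + 1]) / (2 * half + 1)
    phase_sums = [0.0] * period
    phase_counts = [0] * period
    for i in range(n):
        if trend[i] is not None:
            phase_sums[i % period] += x[i] - trend[i]
            phase_counts[i % period] += 1
    seasonal = [phase_sums[p] / phase_counts[p] if phase_counts[p] else 0.0
                for p in range(period)]
    return trend, seasonal
```

Running this with period=24 versus period=8760 on the same hourly series produces very different seasonal components, which is why the fixed annual period deserves explicit justification for each variable.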
- The paper proposes using the complex FEDformer model to predict the trend component after STL decomposition. Given that trend components, particularly after seasonal decomposition, often exhibit relatively simple, slowly-varying patterns, please provide further justification for selecting a sophisticated model like FEDformer over potentially simpler time series forecasting methods (e.g., exponential smoothing, ARIMA, or polynomial regression applied to the trend). Furthermore, did the authors consider comparing the performance of using FEDformer for the trend component prediction against results obtained using these simpler methods on the trend component? Including such a comparison, even if only as an ablation study or supplementary result, could strengthen the argument for the chosen approach.
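For context, the kind of simple trend baseline suggested here is only a few lines. Below is an illustrative sketch of simple exponential smoothing used as a one-step-ahead forecaster on a slowly varying trend component; it is a generic example, not code or results from the manuscript.

```python
def exp_smooth_forecast(trend, alpha=0.5):
    """Simple exponential smoothing: the one-step-ahead forecast at each
    step is the current smoothed level. Returns forecasts for trend[1:]."""
    level = trend[0]
    forecasts = []
    for value in trend[1:]:
        forecasts.append(level)                 # forecast made before seeing value
        level = alpha * value + (1 - alpha) * level
    return forecasts

# On a slowly varying trend even this trivial baseline tracks closely:
trend = [10.0, 10.1, 10.2, 10.3, 10.4]
print(exp_smooth_forecast(trend))
```

Comparing FEDformer's trend-component error against a baseline of this kind, as the review suggests, would quantify how much the complex model actually contributes on that component.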
- The justification provided is that FEDformer is "particularly suitable for processing short-period characteristic data and demonstrates strong predictive capability for trend components after annual cycle removal". Please elaborate on this justification. It seems counter-intuitive to use a model described as suitable for "short-period characteristic data" for the long-term "trend" component, especially when the Attention-LSTM is described as excelling "in capturing long-period, slowly-varying features" and is used for the periodic component.
- Furthermore, please explain specifically how the internal mechanisms of FEDformer, such as its frequency enhancement mechanism (utilizing Fourier transform and attention in the frequency domain) and its temporal decomposition mechanism (MOEDecomp, which uses average filters to highlight long-term sequence trends), contribute to effectively modeling and forecasting the STL-derived trend component. The paper mentions that the trend component data "undergoes multi-level decomposition to extract finer-grained features" during FEDformer training; please clarify how FEDformer's internal decomposition mechanism interacts with or benefits from the initial STL decomposition of the raw data's trend.
- While Table 2 lists the parameter settings for FEDformer and Attention-LSTM, the manuscript does not explain how these hyperparameters were selected. It is unclear whether systematic tuning methods (e.g., grid search, cross-validation) or manual selection were used. This lack of detail affects reproducibility and raises questions about whether the reported performance reflects optimal configurations. Clarifying the hyperparameter tuning process would improve the transparency and rigor of the experimental design.
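As a point of reference for the tuning question, a systematic search is straightforward to describe and report. The sketch below shows a generic exhaustive grid search over a hyperparameter dictionary; the `train_eval` callback and the toy error function are hypothetical placeholders, not anything from the paper.

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustive grid search: evaluate every combination in `grid`
    (parameter name -> list of candidate values) with the user-supplied
    train_eval(params) -> validation error; return the best setting."""
    names = list(grid)
    best_params, best_err = None, float("inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        err = train_eval(params)               # train + validate once
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

# Toy stand-in: validation error minimized at lr=0.01, hidden=64
err_fn = lambda p: abs(p["lr"] - 0.01) + abs(p["hidden"] - 64) / 100
best, err = grid_search(err_fn, {"lr": [0.1, 0.01], "hidden": [32, 64]})
print(best)  # → {'lr': 0.01, 'hidden': 64}
```

Reporting the searched grid and the selection criterion (even in this simple form) would address the reproducibility concern raised above.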
- Attention-LSTM, ACGAN, and FEDformer are used as baseline models [477–478, Table 3, Table 4, 98, 99], but the rationale for selecting these particular models—given the broader range of methods mentioned in the literature (e.g., probabilistic models, SVM, other deep learning techniques)—is not clearly explained. Moreover, to better contextualize the performance of the proposed approach, comparisons with other recent state-of-the-art models in time series forecasting would be beneficial.

Validity of the findings

The paper presents experimental results using five statistical indicators: MAE, MSE, RMSE, p-value, and R² [475-477, Tables 3 and 4, 98, 99].
While p-values are reported in Tables 3 and 4, the manuscript does not explain the statistical test used to calculate these p-values or how they should be interpreted to support the claim that the performance improvements are statistically significant. Clarification on the statistical validation of the results is necessary.
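By way of illustration, one common way to obtain such a p-value is a paired test on the two models' per-sample errors. The sketch below implements a two-sided sign test from scratch; it is shown purely as an example of the kind of statistical validation being requested, since the manuscript does not state which test was used.

```python
from math import comb

def sign_test(errors_a, errors_b):
    """Two-sided paired sign test: under H0, model A's per-sample error is
    equally likely to be above or below model B's. Ties are dropped."""
    diffs = [a - b for a, b in zip(errors_a, errors_b) if a != b]
    n = len(diffs)
    k = sum(d < 0 for d in diffs)          # samples where A beats B
    tail = min(k, n - k)
    # exact two-sided binomial p-value under p = 0.5, capped at 1
    p = 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(p, 1.0)

# A beats B on 9 of 10 samples: p ≈ 0.021, significant at the 5% level
print(sign_test([0.1] * 9 + [0.3], [0.2] * 10))
```

Stating the test (sign test, Wilcoxon signed-rank, Diebold-Mariano, etc.), the pairing unit, and the null hypothesis is what would make the reported p-values interpretable.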
The conclusions drawn are generally linked to the results presented in the tables and figures.

Annotated reviews are not available for download in order to protect the identity of reviewers who chose to remain anonymous.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.