Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on September 30th, 2024 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on December 3rd, 2024.
  • The first revision was submitted on December 9th, 2024 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on January 3rd, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on January 17th, 2025.

Version 0.3 (accepted)

Jan 17, 2025 · Academic Editor

Accept

Dear Authors,

Thank you for addressing the reviewers' comments. Your manuscript now seems ready for publication.

Best wishes,

Reviewer 2

Basic reporting

The revised manuscript has appropriately incorporated the changes requested by the reviewers.

Experimental design

No comment

Validity of the findings

No comment

Additional comments

No comment

Version 0.2

Dec 23, 2024 · Academic Editor

Minor Revisions

Dear Authors,

Thank you for revising your article. Feedback from the reviewers is now available. We strongly recommend that you address the issues raised by Reviewer 2 regarding the validity of the findings, and resubmit your paper after making the necessary additions.

Best wishes,

Reviewer 1

Basic reporting

Thank you for addressing my comments. I recommend proceeding further.

Experimental design

Thank you for addressing my comments. I recommend proceeding further.

Validity of the findings

Thank you for addressing my comments. I recommend proceeding further.

Additional comments

Thank you for addressing my comments. I recommend proceeding further.

Reviewer 2

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

1. While comparisons with leading RL methods (PPO, DQN) are mentioned, a more detailed analysis of performance metrics—both quantitative and qualitative—is needed to clearly highlight the advantages and differences of the proposed approach.

2. To strengthen the claims about the superiority of the multi-agent approach, providing quantitative results or a direct comparison with single-agent models would be beneficial.
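To make this concrete, a minimal sketch of one possible head-to-head evaluation is given below: every model (e.g. PPO, DQN, a single-agent variant, and the proposed approach) is rolled over the same held-out price series and scored with the same quantitative metrics. The `policy(history)` interface, the metric set, and the daily annualisation constant are illustrative assumptions only, not the authors' implementation.

```python
# Sketch of a like-for-like quantitative comparison across models.
# All names and interfaces here are hypothetical placeholders.
import numpy as np

def evaluate_policy(policy, prices):
    """Roll a trading policy over a held-out price series and collect metrics."""
    returns = []
    for t in range(1, len(prices)):
        position = policy(prices[:t])  # position in [-1, 1] chosen from info up to t-1
        returns.append(position * (prices[t] - prices[t - 1]) / prices[t - 1])
    returns = np.asarray(returns)
    equity = np.cumprod(1.0 + returns)
    drawdown = 1.0 - equity / np.maximum.accumulate(equity)
    return {
        "cumulative_return": float(equity[-1] - 1.0),
        "sharpe": float(np.mean(returns) / (np.std(returns) + 1e-9) * np.sqrt(252)),
        "max_drawdown": float(drawdown.max()),
        "avg_loss": float(-returns[returns < 0].mean()) if (returns < 0).any() else 0.0,
    }

# Same data, same metrics, for every baseline and the proposed model, e.g.:
# results = {name: evaluate_policy(p, test_prices) for name, p in models.items()}
```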

Additional comments

no comment

Version 0.1 (original submission)

Dec 3, 2024 · Academic Editor

Major Revisions

Dear Author,

Your article has not been recommended for publication in its current form. However, we do encourage you to address the concerns and criticisms of the reviewers and resubmit your article once you have updated it accordingly.

Warm regards,

Reviewer 1

Basic reporting

What inspired you to explore reinforcement learning for investment modeling?
How do you define "capital loss" in the context of your study?
Can you explain the key differences between traditional investment models and your proposed RL model?
What specific challenges do you see in the current investment landscape?
How does the actor-critic method work in reinforcement learning?
Why did you choose ReLU neurons for your neural network?
What data sources did you use for training and testing your model?

Experimental design

How did you determine the structure of the adaptable data window?
What criteria did you use to evaluate the performance of your investment agents?
Can you explain the process of fine-tuning your RL model?
How did you address overfitting in your neural network?
What types of features did you include in your model?
How did you handle the variability in the markets you studied?
What reinforcement learning algorithms did you consider, and why did you choose your specific approach?
Why did you focus on crude oil, gold, and the Euro markets?
How do political and economic factors specifically affect your investment model's performance?
Did you observe any unique patterns in market behavior across the three assets?
How does your model adapt to sudden market changes, such as geopolitical events?

Validity of the findings

What does the average loss reduction of your model indicate about its effectiveness?
How did you validate the results from your test phase?
Were there any surprising findings in your results?
How does your model perform under different market conditions (bull vs. bear markets)?
What are the next steps for further developing your investment model?
Are there any plans to expand the model to other asset classes?
How could you integrate more complex market indicators into your model?
What potential improvements do you foresee in reinforcement learning techniques for finance?

Additional comments

What ethical considerations did you take into account while developing your model?
How do you address the risks associated with automated trading?
What advice would you give to new researchers entering the field of financial modeling?
What has been the most challenging aspect of your research process?

Reviewer 2

Basic reporting

The paper addresses a relevant and significant topic in financial trading. It proposes an innovative RL model with an adaptable data window, showing promising results in multiple market tests.
However, the novelty may be moderate relative to state-of-the-art RL approaches if similar methods or concepts have already been discussed extensively in the literature. The analysis also focuses mainly on the selected markets and may need further generalization or cross-validation in different environments.

Experimental design

- The introduction discusses the significance and motivation for the research, but the specific objectives and differentiation from existing studies need to be more explicitly stated. Emphasize the unique benefits of the adaptable data window approach compared to traditional fixed data windows early in the introduction.

- While the paper highlights its differences, it lacks a detailed comparison with recent and leading RL models. Include a more structured comparison of the proposed method with other recent RL-based investment models (e.g., deep Q-learning, PPO).

- While the adaptable data window structure is described, its implementation details and specific advantages are not thoroughly explained. Provide more details on the implementation of the data window structure, using diagrams or pseudo-code to illustrate the process and how it enhances learning.
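As an example of the kind of pseudo-code requested, the following is a minimal sketch of one way an adaptable data window could be realised: the length of the look-back slice fed to the agent is adjusted from recent volatility. The volatility rule, the bounds, and the function name `adaptive_window` are assumptions made purely for illustration; the manuscript's actual mechanism may differ.

```python
# Illustrative adaptable data window: widen the observation window in calm
# regimes, shrink it when recent volatility spikes. Assumed behaviour only.
import numpy as np

def adaptive_window(prices, t, min_len=10, max_len=60, vol_ref=0.01):
    """Return the slice of past prices the agent would observe at step t."""
    prices = np.asarray(prices, dtype=float)
    start = max(0, t - min_len)
    recent = np.diff(prices[start:t + 1]) / prices[start:t]  # recent simple returns
    vol = recent.std() if recent.size else vol_ref
    # Higher recent volatility -> shorter window (react faster); calmer -> longer.
    length = int(np.clip(max_len * vol_ref / (vol + 1e-9), min_len, max_len))
    return prices[max(0, t - length):t + 1]
```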

Validity of the findings

- The training and testing phases are described in general terms, but more specifics are needed for reproducibility. Include detailed information on hyperparameter settings, criteria for training termination, and any learning rate scheduling. This will help readers replicate the training process accurately.

- The rationale behind using a multi-agent approach is noted, but its comparative advantages over single-agent learning are not clearly detailed. Include a comparison or discussion of how the multi-agent structure improves results compared to single-agent learning and explain how agents learn sequentially for specific time windows to achieve better outcomes.
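To illustrate the comparison being asked for, the sketch below contrasts a sequential multi-agent scheme (one agent per consecutive time window, each warm-started from its predecessor) with a single agent trained on the whole series. `agent_factory`, `train_fn`, and the warm-start choice are hypothetical placeholders, not the authors' actual training procedure.

```python
# Schematic of sequential multi-agent training over consecutive time windows.
# Agent construction, the training routine, and the window split are assumed.
from copy import deepcopy

def train_sequential_agents(series, n_agents, agent_factory, train_fn):
    """Split the series into consecutive windows; each agent trains on its own
    window, warm-started from the previous agent's parameters."""
    window = len(series) // n_agents
    agents, previous = [], None
    for i in range(n_agents):
        segment = series[i * window:(i + 1) * window]
        agent = agent_factory() if previous is None else deepcopy(previous)
        train_fn(agent, segment)  # e.g. actor-critic updates on this window only
        agents.append(agent)
        previous = agent
    return agents

# A single-agent baseline for the comparison would simply be:
# baseline = agent_factory(); train_fn(baseline, series)
```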

Additional comments

The practical implications of the model for financial institutions or individual investors are not sufficiently highlighted. Add content that explains how the proposed model can be beneficial in real-world scenarios, particularly in risk reduction and profit maximization for market participants.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.