Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on August 1st, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on September 23rd, 2025.
  • The first revision was submitted on September 28th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on November 13th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

Dear Authors,

Your revised paper has been accepted for publication in PeerJ Computer Science. Thank you for your fine contribution.

[# PeerJ Staff Note - this decision was reviewed and approved by Mehmet Cunkas, a PeerJ Section Editor covering this Section #]

Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**PeerJ Staff Note:** PeerJ's policy is that any additional references suggested during peer review should only be included if the authors find them relevant and useful.

**Language Note:** When preparing your next revision, please ensure that your manuscript is reviewed either by a colleague who is proficient in English and familiar with the subject matter, or by a professional editing service. PeerJ offers language editing services; if you are interested, you may contact us at [email protected] for pricing details. Kindly include your manuscript number and title in your inquiry. – PeerJ Staff

Reviewer 1

Basic reporting

This manuscript applies machine learning to predict fatigue crack growth rate, maximum strain energy release rate, and total energy dissipation in adhesive bonds. The study is based on experimental data from double cantilever beam specimens and evaluates sixteen regression models using cross-validation and multiple error metrics. The work also explores feature importance and compares the performance of different modeling approaches. The manuscript is generally well-written and easy to follow. My comments are as follows:

1. The literature review is limited and relies too heavily on older references. It does not include enough recent studies in which machine learning has been applied to fracture mechanics, adhesive bonds, or composite fatigue. Engaging with newer work in these areas would strengthen the background and show the novelty of this study more clearly. For example:

Zhang, Chenyang, et al. "Machine learning assisted calibration of a fatigue crack growth model considering temperature and stress ratio conditions." Engineering Fracture Mechanics 320 (2025): 111095.

Ye, Jincai, Pengfei Cui, and Wanlin Guo. "A machine learning-based method for fatigue crack growth rate prediction in the near-threshold region." Engineering Fracture Mechanics (2025): 111417.

Su, Miao, et al. "Identification of the interfacial cohesive law parameters of FRP strips externally bonded to concrete using machine learning techniques." Engineering Fracture Mechanics 247 (2021): 107643.

Experimental design

The manuscript applies 16 machine learning models, ranging from linear regressions to ensemble methods and neural networks. However, the rationale for selecting this specific set of 16 models is not clearly stated. It would strengthen the methodology section if the author explained why these models were chosen and whether they represent the most relevant families for the dataset size and problem type.
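The kind of multi-model comparison the manuscript describes can be sketched with scikit-learn. This is a minimal illustration, not the author's actual pipeline: the synthetic dataset stands in for the (unavailable) DCB fatigue data, and only four representative model families are shown rather than all sixteen.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the experimental DCB dataset (assumption: the
# real features and targets are not reproduced in this review).
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# One representative model per family; the manuscript evaluates 16 in total.
models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# 5-fold cross-validation with a single error metric (R^2); the manuscript
# reports multiple metrics, which would be added via the `scoring` argument.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```

Stating which families are represented (linear, regularized linear, bagged trees, boosted trees, neural networks, etc.) and why each is plausible for a dataset of this size would directly address the reviewer's concern.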

Validity of the findings

The manuscript does not indicate whether hyperparameter tuning was performed. For instance, in Random Forest, key parameters such as the number of trees and maximum depth strongly affect performance. Providing details on how such parameters were selected would improve transparency.
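The tuning the reviewer asks about is typically documented with a grid search over the parameters named here. The sketch below is illustrative only, again using synthetic data in place of the real dataset, and the grid values are assumptions, not values from the manuscript.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the experimental data (assumption).
X, y = make_regression(n_samples=150, n_features=4, noise=0.2, random_state=1)

# Grid over the two parameters the review singles out: number of trees
# and maximum depth (None lets trees grow until leaves are pure).
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]}

search = GridSearchCV(
    RandomForestRegressor(random_state=1),
    param_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print("best parameters:", search.best_params_)
```

Reporting the search space, the selection criterion, and the chosen values (or stating that library defaults were used) would resolve this comment.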

Additional comments

The manuscript presents the performance of different ML models but does not explain why certain models perform better than others. For example, tree-based and non-linear methods generally outperform linear models when capturing complex relationships, which could be expected here. However, it is unclear if these trends are specific to this dataset (FM94/2024-T3 system) or if they would generalize to other adhesive materials. Adding more discussion on why certain models work better and whether the results are dataset-dependent would strengthen the paper.

Reviewer 2

Basic reporting

This work presents a machine learning framework for predicting fatigue crack growth. In addition, the author uses an ML-based framework to predict the maximum strain energy release rate and energy dissipation, which are physically fundamental quantities for modeling fracture and fatigue. The work is timely, and the results will be useful to other researchers in the field.

Experimental design

1. The design of experiments is adequate, but it tilts heavily toward the ML aspect without much consideration of the mechanics of failure and fatigue. For example, the literature section could discuss some very recent works on modeling fatigue failure through physics- and entropy-based approaches, which work well even for predicting variable-amplitude loading and multi-axial fatigue failure. Such theoretical models have also been implemented in FEM frameworks, and results from these constitutive models can be used to generate quality data for training ML algorithms. Briefly discussing works such as these would give some coverage to the mechanics and physics aspects of fatigue.

2. Please comment on the possible use of a physics-informed ML approach, and whether such an approach could be adopted in the current work.

Validity of the findings

1. The da/dN curves are well-reproduced.

2. However, the quality of the displayed graphs needs to be vastly improved; it does not match the standard of the journal.

3. The results do support the goals set out in the introduction, but again, there is not much of a physics or mechanics angle to the paper. It cannot be just an ML-based approach with no new insights drawn from these methods. Some care is needed to make the paper more complete from a predictive and physics point of view.

4. The quality of the match between the ML algorithm and the experimental data is good. This is a major strength of the work. However, focus must also be placed on the engineering and physics aspects of the problem. For example, how could the ML framework be modified to predict when cracks will INITIATE under cyclic loading? This is where the ML framework can augment a physics-based approach to predict crack initiation. Such directions could be outlined as future work.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.