Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on May 1st, 2023 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on June 8th, 2023.
  • The first revision was submitted on October 18th, 2023 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on November 3rd, 2023.

Version 0.2 (accepted)

· Nov 3, 2023 · Academic Editor

Accept

Dear authors,

Thank you for the revision. All of the reviewers' comments appear to have been clearly addressed, and your article is accepted for publication following this latest revision.

Best wishes,

[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Section Editor covering this Section #]

Version 0.1 (original submission)

· Jun 8, 2023 · Academic Editor

Minor Revisions

Dear authors,

Your article has a few remaining issues. We encourage you to address the reviewers' concerns and criticisms and resubmit your article once you have updated it accordingly.

Best wishes,

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

Nothing to add.

Experimental design

Nothing to add.

Validity of the findings

Nothing to add.

Additional comments

The authors propose a machine learning pipeline to estimate the ground reaction force (GRF) in order to improve legged locomotion. They validate their system with a 2-DoF legged robot. Their ML system is composed of two cascaded MLPs: the first estimates the GRF from simulation and feeds its output to the second MLP, which estimates the GRF in the real world (a sketch of this cascaded structure is given after my observations below).
I have the following observations:
- In the Abstract, I would suggest clarifying what the authors mean by "transfer"; I had to read the whole paper to understand this, whereas it should be clear from the beginning;
- Although good work has been done in producing the Related Work section, it does not make it easy to weigh your contribution against the current literature; for every section, it should be made clearer how your paper stands out from the literature.
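
For reference, here is a minimal sketch of the cascaded two-MLP structure described above, written in PyTorch. All layer sizes, the input dimension, and the class and variable names are illustrative assumptions, not the authors' actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden, out_dim):
    # Small fully connected block with ReLU activations (sizes are assumptions).
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

class CascadedGRFEstimator(nn.Module):
    """Two cascaded MLPs: the first maps joint states to a simulation-level GRF
    estimate; the second refines that estimate toward the real-world GRF."""
    def __init__(self, state_dim=12, hidden=64):
        super().__init__()
        self.sim_mlp = mlp(state_dim, hidden, 1)        # stage 1: simulated GRF
        self.real_mlp = mlp(state_dim + 1, hidden, 1)   # stage 2: real-world GRF

    def forward(self, joint_state):
        grf_sim = self.sim_mlp(joint_state)
        grf_real = self.real_mlp(torch.cat([joint_state, grf_sim], dim=-1))
        return grf_sim, grf_real

# Example: a batch of 8 joint-state vectors (torque/velocity/position history).
model = CascadedGRFEstimator()
grf_sim, grf_real = model(torch.randn(8, 12))
```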

Reviewer 2 ·

Basic reporting

I reviewed your work titled “Artificial neural network-based ground reaction force estimation and learning for dynamic-legged robot systems” in detail. I have listed the missing points as items below.

Experimental design

I would like to point out that the article is generally well written. The missing points are as follows:
- The literature review section should be expanded.
- The paragraph at the end of the Introduction section should be moved to the Results section.
- A paragraph about the organization of the article should be added at the end of the Introduction section.
- Figure 4 should be described in more detail.
- The ADABATCH algorithm presented in Table 3 should be described in more detail; the same comment applies to Figure 7.
- Figures and tables are generally not interpreted in the text.
- The limitations of the study should be included.

Validity of the findings

The values presented in the Results section should be presented in tabular form wherever possible; this will improve the readability of the article.

Additional comments

Spelling and language errors should be reviewed.

Reviewer 3 ·

Basic reporting

The presented research utilizes a two-stage MLP to estimate the ground reaction force (GRF) induced by a 2-DoF leg during gait-based locomotion. The leg's sensors provide historical torque, velocity, and position data as inputs to the MLP, which returns an estimated GRF. The research demonstrates a relatively high GRF estimation accuracy of 99.5% (RMSE).

All components are present, and the writing is professional.

Experimental design

The research is well described, and the experimental setup is explained well. Data collection experiments appear to be formulated well. Comparisons against various network structures are good, particularly as they are tested against the sim-to-reality transitions.

The research question is fairly well defined; however, the work does not fill a clear knowledge gap, as the problem it addresses is already well understood and more sophisticated tools are being employed to solve more complex problems.

While the number of tests is laudable, the experiments are somewhat lacking, and only one surface is characterized. The current state of the art in proprioceptive terrain identification and proprioceptive energy estimation achieves comparable performance over much more complex scenarios, particularly as only 1-second windows of data are used here, which capture just the touchdown portion of the gait cycle. This work also appears to consider only one terrain/surface, which is held constant?

The sim2real bridge appears to be acting more as a simplifying filter and may be overfitting. More tests on this would be preferable, or another accuracy metric should be used. RMSE on values with low perturbation is not very indicative of usefulness; there are cases where high-accuracy systems fail to work due to brittleness.
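
To illustrate the point about RMSE on a low-perturbation signal, here is a small, purely hypothetical numpy example (all numbers are invented for illustration): a trivial constant predictor on a nearly constant GRF signal achieves a tiny absolute RMSE, while a range-normalized RMSE reveals that it captures essentially none of the variation. Reporting a normalized error (or R²) alongside RMSE would address this.

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def nrmse(y_true, y_pred):
    # Normalize by the target's range so a low-perturbation signal no longer
    # makes the error look trivially small.
    return rmse(y_true, y_pred) / (np.ptp(y_true) + 1e-12)

rng = np.random.default_rng(0)
grf = 100.0 + 0.1 * rng.standard_normal(1000)   # nearly constant "GRF" signal
pred = np.full_like(grf, 100.0)                 # trivial constant predictor

print(rmse(grf, pred))    # ~0.1: looks excellent next to a ~100 N signal
print(nrmse(grf, pred))   # ~0.15: the predictor explains little of the variation
```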

The question also arises as to why a simple MLP is used rather than a recurrent method, which is the state of the art in this space. An LSTM or GRU is a typical base algorithm, and the data described would likely be better suited to one of these architectures.
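
As a point of comparison, a recurrent estimator could consume the raw torque/velocity/position history as a sequence rather than a flattened window. The sketch below (a PyTorch GRU; the feature dimension, window length, and names are assumptions made for illustration) shows one plausible form such a baseline could take:

```python
import torch
import torch.nn as nn

class RecurrentGRFEstimator(nn.Module):
    """GRU over the sensor history (e.g., torque, velocity, position per joint),
    replacing the hand-stacked history window fed to a plain MLP."""
    def __init__(self, feat_dim=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):               # seq: (batch, time, feat_dim)
        _, h_n = self.gru(seq)             # h_n: (1, batch, hidden) final state
        return self.head(h_n.squeeze(0))   # one GRF estimate per sequence

# Example: a batch of 8 one-second windows sampled at 100 Hz, 6 features per step.
model = RecurrentGRFEstimator()
grf = model(torch.randn(8, 100, 6))
```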

Validity of the findings

The findings should be replicable; all data and code (figure-generation scripts in Python) are present.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.