Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on December 11th, 2020 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on January 8th, 2021.
  • The first revision was submitted on May 28th, 2021 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on June 10th, 2021.

Version 0.2 (accepted)

· Jun 10, 2021 · Academic Editor

Accept

Your revisions addressed the reviewer's concerns and the article is now ready for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by James Reimer, a PeerJ Section Editor covering this Section #]

Reviewer 2 ·

Basic reporting

This article meets all the requested standards.

Experimental design

The authors have addressed all these points in the revised manuscript.

Validity of the findings

To the best of my knowledge the method and results are valid.

Additional comments

Thanks for your responses and clarifications. The text reads very well after the revision.

Version 0.1 (original submission)

· Jan 8, 2021 · Academic Editor

Major Revisions

Please address the reviewer comments and resubmit.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]

Reviewer 1 ·

Basic reporting

Page 18, line 418: the link/reference to the Table is missing (twice).

AM is largely a predator/prey fisheries model. The limitations on the role that the AM model is to play in ecosystem-based management are not clearly defined, although it is presented in the context of broader EBM.

Many useful experiments have been run and results are reported. However, the discussion of what this means is weak.

Conclusion missing.

Experimental design

There is circular thinking in the experimental design. While the three models are independent in principle, parameters from AM were used to populate EWE and SS.

If three models are compared, it would be good to include a table that compares the models, beyond the strengths/weaknesses on particular items.

If this is the best contemporary 'what-if' model, then more attention could be given to the actual dynamic structures that are at work. It remains a black-box expert model with regard to its dynamic capacity.

Calibration and testing of extremes are described in a useful manner.

Connectivity and Influence (Section 6.3, p. 22) reports findings, but there is no conclusion drawn from these paragraphs either. What do the authors learn from this?

Ecosystem-based management (EBM) in the introduction includes multiple activities, but the model is fisheries-oriented. Seagrass and algae are only narrowly included. Considerable assumptions and adjustments had to be made to prevent the model from crashing. These considerations should be clearly listed in the Discussion and thereafter in the Conclusions.

Validity of the findings

The link between the findings and the discussion is not strong.

The conclusion is missing altogether.

The recommendation in the abstract doesn't clearly line up with the discussion, given that the conclusion is missing: 'We recommend that scenarios relating to ecosystem dynamics of the TBGB ecosystem incorporate initialisation uncertainty, oceanographic uncertainty, and compare responses across all three models where it is possible to do so.'

Additional comments

The process of testing the Atlantis model for this bay is very useful. The comparison with the other models is less evident. When would the authors recommend using the AM model, and when does its use come with caution?

A summary table of comparison between the models might help.

A conclusion is essential.

Mentioning the national science challenge Sustainable Seas in the abstract doesn't seem relevant; perhaps move this to the introduction. While it is acknowledged that the research is incremental, it seems important to position the contribution and the limitations of this model so as to tie the introduction and the Discussion (Conclusion?) together.

Reviewer 2 ·

Basic reporting

- Figure 3 can be moved to the Appendix (if, of course, it does not provide any critical information for the model beyond assumptions/inputs).
- I am not sure how to interpret Figure 4 without looking at the text. It would be nice to provide some complementary explanation in the figure caption.
- Page 18 has multiple formatting issues. Page 32 has several English issues.

Experimental design

- The abstract is not self-descriptive. What are the aims of this research, and what are its contribution and industrial application?
- Page 4: it is important to add some information about the methods used for developing the Atlantis model; and if it is too difficult to fit the observations, then how can we rely on the results of the scenarios?

Validity of the findings

- The goodness of calibration has not been reported. Looking at the literature and the values of the model outputs in Figure 5, the error looks quite big. Therefore, without providing this type of statistic it is hard to say how well the model is calibrated.
- Pages 20-21: was the sensitivity test global or local? What method was used, and over what range?
- The managerial implications of this study should be well explained according to the aforementioned results. What advantage can industry and practitioners gain from the findings of your study? What are the specific action plans based on the research findings? These should be addressed.
- The limitations need to be highlighted. Furthermore, the suggestions for future studies could be carefully expanded.

Additional comments

This study aims to compare three ecosystem models of the Tasman and Golden Bays. The authors have explained the standard tests of the models and compared the results of certain scenarios across the three. The results are interesting and can contribute to the literature on ecological modelling. Please see a few suggestions and comments.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.