Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on June 6th, 2023 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 28th, 2023.
  • The first revision was submitted on October 3rd, 2023 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on November 24th, 2023 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on November 27th, 2023.

Version 0.3 (accepted)

· Nov 27, 2023 · Academic Editor

Accept

Dear Authors, I am happy to inform you that your paper has been accepted for publication in PeerJ. As Editor, I have seen the improvement in the manuscript and I think that you have fulfilled all the requirements raised by the reviewers.

[# PeerJ Staff Note - this decision was reviewed and approved by Anastazia Banaszak, a PeerJ Section Editor covering this Section #]

Version 0.2

· Nov 2, 2023 · Academic Editor

Minor Revisions

Dear Authors,

The reviewer has re-reviewed the manuscript and agrees that you did a great job of taking all their comments into account; however, some of their advice still needs to be addressed.

·

Basic reporting

The authors have made a commendable effort to address the various reviewers’ comments.

I see that some of my earlier comments stemmed from confusion about the operating model used to generate the simulations. The section “Simulations” should be expanded to make clear what data are generated in the simulations and fed into the control rule. Looking at the code, it appears that annual bycatch data from year −50 onwards are given to the procedure and that these have both a true variance and an observation error. This and other key details should be made clearer.

The claim that the various control rules have been fed the same data is true only in a limited sense, and this should be made more explicit. In this analysis, the time series of bycatches plays a double role: as a time series of historical removals and as an index of abundance. The simulations provided the RLA with the time series of removals without indicating that it is also an index of abundance, whereas the two ART rules are set up to use this abundance index. The RLA can in principle make use of additional indices of abundance if they are identified as such and incorporated into the data stream in a form that the RLA can use.

Experimental design

A quirk of the simulation trial setup is that the pre-management reduction relative to K is fixed, so that past bycatches are higher when r is higher. A management rule that explicitly or implicitly “knows” this could use it to gain information on the true value of r, but that would in a sense be “cheating”. It is unclear whether any of the tested rules does this inadvertently.
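To see why a fixed pre-management depletion ties the catch history to r, consider a toy logistic (Schaefer) model. This is only an illustration of the reviewer's point, not the authors' operating model, and all numbers are hypothetical:

```python
# Toy illustration (hypothetical numbers, not the paper's operating model):
# under a logistic (Schaefer) model, the constant catch that holds a
# population at a fixed depletion level D = N/K is C = r * K * D * (1 - D),
# i.e. directly proportional to the intrinsic growth rate r.
K = 10_000.0  # hypothetical carrying capacity
D = 0.8       # fixed pre-management depletion (N/K)

def equilibrium_catch(r: float, K: float = K, D: float = D) -> float:
    """Constant removals consistent with holding the population at depletion D."""
    return r * K * D * (1 - D)

for r in (0.02, 0.04, 0.08):
    print(f"r = {r:.2f} -> sustained catch = {equilibrium_catch(r):.0f}")
# Doubling r doubles the catch history consistent with the same depletion,
# so a rule that "reads" the catch series this way could back out r.
```

Under these assumptions, the historical catch series alone carries information about r, which is the leakage the reviewer is warning about.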

Validity of the findings

Appendix 2, which purports to show the identifiability of parameters, is misleading. The mere fact that the posteriors show marked narrowing relative to the priors does not in itself indicate that the parameters are identifiable. Identifiability of a parameter means that one can distinguish between different true values of that parameter. To show that a parameter is identifiable, one needs to show results from more than one set of true parameter values, and demonstrate some tendency for the posteriors to concentrate towards the true values, e.g. some correlation between the true values and the modes or medians of the posteriors.

Clearly the parameters r, ρ and σ cannot be identified from just one abundance estimate. A property of the way the SPM is fitted in this analysis is that the posteriors tend to narrow even when the data are uninformative, but the narrowing is not correlated with the true values of the parameters.
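The diagnostic the reviewer describes — simulate several true parameter values, refit, and correlate truth with a posterior summary — can be sketched with a toy conjugate normal model. Nothing here is the paper's SPM; the prior, observation error, and sample sizes are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def posterior_median(y, prior_mean=0.0, prior_sd=1.0, obs_sd=0.5):
    """Conjugate normal-normal update; for a normal posterior, median == mean."""
    precision = 1.0 / prior_sd**2 + len(y) / obs_sd**2
    return (prior_mean / prior_sd**2 + y.sum() / obs_sd**2) / precision

true_values = rng.normal(0.0, 1.0, size=200)   # many replicates of the "true" parameter
medians = [posterior_median(rng.normal(theta, 0.5, size=5)) for theta in true_values]
corr = np.corrcoef(true_values, medians)[0, 1]
print(f"correlation(truth, posterior median) = {corr:.2f}")  # close to 1: identifiable
# For a non-identifiable parameter this correlation would sit near zero,
# even if the posterior were visibly narrower than the prior.
```

It is the correlation across replicates, not the narrowing within one fit, that demonstrates identifiability.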

If the authors do not have time to do the extra work, I would recommend omission of Appendix 2. I have already stated that I don’t think Appendix 1 is necessary.

Additional comments

Line 265 should read “… posterior distribution of the removal limit.”

Version 0.1 (original submission)

· Jul 28, 2023 · Academic Editor

Major Revisions

Dear Dr. Authier,
The referees have completed their reviews, and one of them provided very useful comments that could greatly enhance the quality of the paper. The referee acknowledged the relevance of the work for the journal and found the overall approach to be valid. However, they also pointed out several substantive issues that need to be addressed before the manuscript can be accepted for publication.

Please revise the manuscript according to the reviewer's suggestions; taking their feedback into consideration will strengthen the paper.

Sincerely
Federica Costantini

**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 1 ·

Basic reporting

The writing in the article is generally clear and unambiguous and is easily comprehensible to an English reader. The literature is thoroughly cited, especially as it pertains to the legal framework for harvest of PETS (much appreciated by someone relatively unfamiliar with the legal framework for marine conservation in Europe) and the application of SPMs to simulating populations and removals. The article could do with some additional citations regarding the MSE framework, notably: Punt et al. (2016) and Rademeyer et al. (2007), both of which are commonly cited when referring to MSE. The article is well structured, and tables and figures are both clear and useful. Moving the “Likelihoods” subsections of the methods to an Appendix may make the text more readable to less-technical readers. As far as I can tell, the likelihoods used here are not novel, and thus a full discussion of them is likely not necessary in the main body of the article.

Requests:
1. Cite, minimally, Punt et al. (2016) on Lines 262-263. Rademeyer et al. (2007) is also a general review of the MSE framework and could be cited here.
2. Substantially reduce the text of the “Likelihoods” subsection, or move the current text to an Appendix, and simply state the likelihood distribution of the removals and population estimates.

Experimental design

The article appears to fall within the “Aims and Scope” of PeerJ, though it does seem to be quite technical compared to other published articles. This is merely an observation and should not be held against this article, as its technical nature is warranted given the study question. The research question is well defined and can be generally described as: “What removal limit algorithms best allow EU member states to meet the conservation objectives of the OMMEG and MNPL?” The article clearly outlines the knowledge gap being filled. The methods used in the study are technical, rigorous, and well supported by the scientific literature. They are, generally, sufficiently described to be replicable or extended to other study species or regions as needed, though further description of the “trend” performance metric is warranted. Inclusion of the MCMC sampling routine (number of chains, number of samples, sampler used, etc.) for the Bayesian estimation model is particularly helpful.

Requests:
1. Carefully define how the “trend” performance metric is calculated. Is it the average difference between successive rates? Is it the difference between the harvest rates in the first and last simulation years?

Validity of the findings

The findings of this study appear statistically and methodologically sound. MSE is a well-known method for assessing this type of research question. The simulation framework makes effective use of substantial population modeling and management simulation literature. Results are clearly communicated, though Figures 3 and 5 use y-axis scales that are atypical and could be confusing. Conclusions are clear and directly linked to the research question and make direct use of the study results.

Requests: None

Additional comments

1. Line 21: Should be “harvest control rules” rather than “control ‘harvest’ rules”
2. Line 174/177: I think the first use of the term “carrying capacity” is incorrect and should be something akin to “population size”. Maintaining “carrying capacity at 80% of carrying capacity” doesn’t make sense.
3. Line 194: The use of the scripted font to denote the log-normal and uniform distributions is atypical. I more commonly see LN(mean, sd) and U(lower, upper), than the scripted versions used here. Additionally, parameterizing the log-normal using “location” and “scale” may be confusing to readers who are more used to the “mean” and “sd” parameterization used by R. I believe they are the same functional form, but the parameters are named differently. Not necessarily a problem, but something to consider.
4. Equation 4: What’s the utility in calculating removals as a function of past removals instead of directly using Equation 3? I would make this abundantly clear to the reader.
5. Line 249: There should be a citation for the statement: “In practice, K is deduced from D0 and the first observed abundance estimate that is available.” An example would be sufficient.
6. Line 295-296: The article states that the PBR and RLA control rules rely on an estimate of r, which is unknown, and that this can be used to argue against their practical use. However, r is simply an estimated parameter within the model, akin to ρ in the RLA2 and RLA3 control rules. The article seems to imply that the RLA2 and RLA3 rules are “better” because they don’t rely on an estimate of r, but they instead rely on the estimate of a different parameter. Is the point that the authors want to make that r is often hard to estimate compared to ρ? If not, I would consider carefully rewording the text about the different removal algorithms to avoid making this implication.
7. Equation 15: Is β (the “trend in abundance”) simply the slope of a regression line through the abundance estimates? Please clarify what exactly β is and how it is calculated.
8. Lines 352-357: Writing the full equations for each rule would be helpful as a reference.
9. Line 395: I am not certain how to interpret the “trend” performance metric. Further discussion of its calculation and interpretation is warranted.
10. Line 475-476: I can’t determine where this conclusion came from. Based on Figure 5 and Table 5, didn’t the RLA2 rule also decrease removal rate over time?
11. Figure 3: I would not recommend using a logarithmic y-axis here, as the data are not separated by enough orders of magnitude to warrant it. Rather, the logarithmic axis disguises the true relative differences between the performance of the control rules and makes interpretation difficult.
12. Figure 5: I would also not use a square root scale here.
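Regarding requests 7 and 9 above, one plausible reading of the “trend” metric β is the ordinary least-squares slope of the series against time. Whether this matches Equation 15 of the manuscript is precisely what the reviewer is asking the authors to state; the function below is a hypothetical candidate definition, not the paper's:

```python
import numpy as np

def trend(series) -> float:
    """OLS slope of the series against time -- one candidate definition of beta."""
    years = np.arange(len(series), dtype=float)
    slope, _intercept = np.polyfit(years, np.asarray(series, dtype=float), deg=1)
    return slope

print(trend([100, 98, 97, 95, 94]))  # negative slope: a declining series
```

Alternatives the reviewer mentions, such as the average difference between successive rates or the first-minus-last difference, would give different values on the same series, so the manuscript needs to name one.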

Overall, this article is a technically rigorous application of the MSE framework to PETS removals. It makes thorough use of the technical and legal literature and clearly fills a gap in the available understanding of how to manage the removals of PETS for which there is often little data.

·

Basic reporting

See attached file.

Experimental design

See attached file.

Validity of the findings

See attached file.

Additional comments

See attached file.

Reviewer 3 ·

Basic reporting

.

Experimental design

.

Validity of the findings

.

Additional comments

See PDF.

Annotated reviews are not available for download in order to protect the identity of reviewers who chose to remain anonymous.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.