Review History

All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.



  • The initial submission of this article was received on November 26th, 2020 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on December 29th, 2020.
  • The first revision was submitted on March 3rd, 2021 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on March 9th, 2021.

Version 0.2 (accepted)

· Mar 9, 2021 · Academic Editor


Despite saying that I would send the paper to another reviewer, I have decided to accept the paper after reading it myself. You did a good job of responding to Reviewer 1, who requested "minor revisions," and I am happy with your responses to my questions.

[# PeerJ Staff Note - this decision was reviewed and approved by James Reimer, a PeerJ Section Editor covering this Section #]

Version 0.1 (original submission)

· Dec 29, 2020 · Academic Editor

Major Revisions

One reviewer suggested minor revisions and the other recommended rejection. I disagree with the second reviewer's main argument that a statistical analysis is inappropriate: we are clearly dealing with a stochastic process that will be ongoing into the future. However, you need to address his criticisms. I will probably send the revised paper to a new reviewer as well as the first reviewer.

My main concern is the trend regression. Are the standard errors robust to serial correlation? There is a literature on testing for trends in the presence of serial correlation of unknown form (see papers by Vogelsang and others). Is this analysis robust? Also, the methods text mentions year fixed effects, while the table of results reports a coefficient on year, which instead suggests a linear time trend. Explicit equations would help explain exactly what you did. Please write a detailed response to both my comments and those of the two referees.
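To make the serial-correlation concern concrete, here is a minimal pure-Python sketch (simulated data, not the authors' actual model or dataset) comparing the naive OLS standard error for a linear time trend against a Newey-West (Bartlett-kernel) HAC standard error:

```python
import math
import random

def ols_trend(y):
    """Fit y_t = a + b*t by OLS; return slope, residuals, S_xx, mean of t."""
    n = len(y)
    t = list(range(n))
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    a = ybar - b * tbar
    resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    return b, resid, sxx, tbar

def trend_standard_errors(y, lags=8):
    """Naive OLS SE vs Newey-West (Bartlett kernel) HAC SE for the slope."""
    n = len(y)
    b, e, sxx, tbar = ols_trend(y)
    # Naive SE assumes i.i.d. errors.
    s2 = sum(ei ** 2 for ei in e) / (n - 2)
    naive_se = math.sqrt(s2 / sxx)
    # HAC long-run variance of the scores x_t = (t - tbar) * e_t,
    # with Bartlett weights guaranteeing a nonnegative estimate.
    x = [(ti - tbar) * ei for ti, ei in enumerate(e)]
    v = sum(xi ** 2 for xi in x)
    for lag in range(1, lags + 1):
        w = 1 - lag / (lags + 1)
        v += 2 * w * sum(x[i] * x[i + lag] for i in range(n - lag))
    hac_se = math.sqrt(v) / sxx
    return b, naive_se, hac_se

# Simulate a trend with strongly autocorrelated AR(1) errors.
random.seed(1)
rho, n = 0.8, 200
e, y = 0.0, []
for t in range(n):
    e = rho * e + random.gauss(0, 1)
    y.append(0.05 * t + e)

slope, naive_se, hac_se = trend_standard_errors(y)
```

With positively autocorrelated errors the HAC standard error is typically several times the naive one, so a trend that appears significant under i.i.d. assumptions may not survive a robust test; this is the kind of check the Vogelsang literature formalizes.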


Basic reporting

1. The manuscript is well-written and thorough.

2. A figure showing a timeline of the number of disaster declarations per year, with an overlaid timeline of specific environmental events (e.g. marine heatwaves, HABs, hurricanes), would be a useful illustration to demonstrate some of the specific events discussed in the paper.

3. Table 1 caption: In describing the ‘combination’ category, clarify that these events may specifically involve a combination of anthropogenic and environmental causes, and perhaps give an example or two of how those might interact (for example, overfishing and marine heatwaves). As written, it reads as a repetitive list of items from the two previous categories.

4. Table 2: It would be helpful to have some sort of timeline (see #2). This table conveys the message that anthropogenic events have been more important because they have caused greater economic impacts. Also interesting is that this seems to vary by region, with anthropogenic causes dominating in the Alaska and Greater Atlantic regions, and environmental causes dominating in the Southeast and West Coast regions.

5. Figure 1: This figure requires a key for the symbols and agency logos. These are difficult to read and would not be understandable to a non-American audience.

6. Figure 2: Each region needs to be labeled on the map, particularly for readers not familiar with American geography and the NOAA regions.

Experimental design

1. This paper is not laid out as addressing a clear and specific research question; the question is implicit rather than stated explicitly. It seems to be to understand how the frequency and type of federal fisheries disasters have changed since the inception of the program, and this should be spelled out explicitly in the Introduction. Additionally, the manuscript does not provide sufficient rationale for why this was studied: it makes clear that the topic is important, but it does not identify the specific knowledge gap being filled by this research. I suggest modifying the Introduction to be explicit about exactly why this research was conducted and what purpose it serves beyond providing a basis for advocating changes to the disaster-declaration process.

Validity of the findings

1. Were there any disasters declared because of marine heatwaves prior to 2012/13? Is this a new phenomenon, or a new framing of something that has led to disaster declarations in the past? How do the data on environmental causes compare when looking at the trend in just hurricanes and HABs? Do marine heatwaves explain that difference? I think it might be useful to talk about marine heatwaves specifically and explain their relationship to fishery disasters because a) they make up the greatest percentage of the disasters, and b) they have the strongest link to climate change.

2. Lines 401-406: I read your methods to mean that you used NOAA data to extrapolate from the disaster declaration to estimate losses beyond what is provided in the disaster declaration itself, but this was not entirely clear. Can you clarify that this is what you did, and that this is required because this data is not provided by NOAA?

3. Lines 349-351: It seems that this is at least in part more specifically due to the implementation of quotas in most federal fisheries, which make overfishing less likely. Please include a citation here that says more explicitly that fisheries management is improving, rather than just one demonstrating that fewer stocks are experiencing overfishing.

4. Lines 479-480: It is understood that you did not include COVID disasters in your analysis because those have not yet been determined and allocated. How would they fit into your analysis, and what type of disaster would you consider COVID? If it is a purely anthropogenic disaster (which I would argue), would including it skew the results that suggest a shift toward environmentally driven disasters in recent years? It may be worth discussing how this may or may not upset the trend you describe.

5. Lines 174-176: Specify that the frequency of disasters will vary by management zone in part because different regions are more likely to experience different types of disasters (e.g. hurricanes are not likely to hit the West Coast and Alaska).

6. Lines 428-431: “Transparent, detailed, and mechanistic descriptions of economic impacts…” Of course, if NOAA is required to provide these, conducting a full economic impact analysis of each fishery to determine a disaster declaration is likely to slow down, not speed up, the process of declaring a disaster. How do you propose this could be done without further lagging behind the event? Please either suggest how, or note that this requirement may be incompatible with your goal of speeding up the process.

7. Lines 394-397: Can you expound on why or how those fisheries experienced an increase in revenue? Is this because of switching to other species? Or because of the challenge of parsing out the specific impacts of the disaster from the total revenue gained in that fishery at a larger spatial or temporal scale? This is mentioned twice in the manuscript, so please clarify what it means or how you interpret this finding.


Basic reporting

Please see general comments

Experimental design

Please see general comments

Validity of the findings

Please see general comments

Additional comments

In this manuscript, the authors used data about fishery disaster declarations and revenue in the US to examine the impact of climate change on fisheries. Key claimed findings include that disasters have increased over time and that more of them are attributed to climate change.

The manuscript provides some detailed information about fisheries and disaster management in relation to fisheries in the US. However, I feel it is not at the standard of an academic publication, and I hope the authors will find my general comments below useful.

1. The paper is not clear about its contribution to the knowledge base. It could better answer the questions ‘why should the audience read this?’, ‘what lessons could be learnt from the findings?’, and ‘what insights does it provide to decision-makers or for developing a new framework or theory?’. Being clear about the contribution is especially important when key results are not new. In fact, the negative impact of climate change on fisheries around the world has been mentioned in every edition of FAO’s biennial publication ‘The State of World Fisheries and Aquaculture’ since 2000.

2. It looks like the authors have managed to collect information about all disaster declarations recognised by the US government in the 1989-2020 period. If the dataset used in the analysis (i.e., Poisson regressions and bootstrapping) includes all recognised declarations, the statistical techniques have probably been overused. In this case, the dataset can be seen as a population, and the population mean (or median) of the revenue loss can be calculated directly, without a statistical model. Estimates and their confidence intervals are needed when the dataset contains only a fraction (a sample) of the population and one wants to infer the population mean. Likewise, whether the number of disasters and the number of disasters caused by environmental factors increased over time (the key results of the paper) can be evaluated directly from the population rather than by fitting a statistical model to the data.
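For context on what the disputed model estimates, here is a minimal pure-Python sketch of a Poisson trend fit, log E[y_t] = a + b·t, by Newton-Raphson maximum likelihood, using hypothetical annual counts rather than the article's data:

```python
import math

def poisson_trend(counts, iters=25):
    """Fit log E[y_t] = a + b*t by Newton-Raphson maximum likelihood."""
    n = len(counts)
    t = list(range(n))
    # Start at the constant-rate fit: a = log(mean count), b = 0.
    a, b = math.log(sum(counts) / n), 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * ti) for ti in t]
        # Gradient of the Poisson log-likelihood.
        g0 = sum(y - m for y, m in zip(counts, mu))
        g1 = sum(ti * (y - m) for ti, y, m in zip(t, counts, mu))
        # Fisher information matrix (2x2).
        i00 = sum(mu)
        i01 = sum(ti * m for ti, m in zip(t, mu))
        i11 = sum(ti * ti * m for ti, m in zip(t, mu))
        det = i00 * i11 - i01 * i01
        # Newton step: (a, b) += I^{-1} g.
        a += (i11 * g0 - i01 * g1) / det
        b += (-i01 * g0 + i00 * g1) / det
    return a, b

# Hypothetical annual counts with a visible upward drift.
counts = [2, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 7, 7, 8, 9, 10, 11]
a_hat, b_hat = poisson_trend(counts)
```

The slope b here is the “coefficient of year” at issue; whether its confidence interval is meaningful when the data cover every recognised declaration is exactly the point of disagreement between this reviewer and the editor.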

3. There are some other, smaller issues worth editing effort or further consideration; I list just a few examples here. In Figure 3, the number of data points inside the 95% confidence interval is far smaller than the number outside, which is not consistent with the usual intuition about a confidence interval. The presentation of the technical content is not careful; it assumes readers know R syntax. Line 70 lists data limitation as a cause of fishery declines, which is not clear (perhaps the authors meant that data limitation contributed to poor management which, in turn, resulted in overfishing).

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.