Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on September 25th, 2023 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 2nd, 2023.
  • The first revision was submitted on December 17th, 2023 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on February 14th, 2024 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on March 7th, 2024 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on March 18th, 2024.

Version 0.4 (accepted)

· Mar 18, 2024 · Academic Editor

Accept

Thank you for taking the time to carefully revise the manuscript and address all the concerns raised by the reviewers and myself. I have carefully checked the revised version, am happy with it, and believe the manuscript is now ready for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Patricia Gandini, a PeerJ Section Editor covering this Section #]

Version 0.3

· Feb 29, 2024 · Academic Editor

Minor Revisions

I have now carefully read your revised version. The reviewer and I agree that it is now much improved, but there are some areas that still require clarification. I have listed these below. I look forward to receiving your revised version.
Sincerely,
Darren

1) The reviewer raises concerns about the sample sizes underlying the rarefaction curves, particularly for arthropods. This is understandable, as I assume there are many times (orders of magnitude) more arthropod species present. The observed curves (solid lines?) do not reach an asymptote for any group. This is important and needs to be included in the Results and clarified in the Discussion. So what percentage of species did the citizen science observe? This also needs to be clearly stated in your Results, as it indicates how good a baseline the citizen science provided. These issues are related to my previous comments regarding the need for the Discussion to explicitly address such limitations (e.g. the use of a single technique) and the need for complementary census techniques; this is still not adequately covered in the Discussion.
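As an illustration of how the detected percentage could be reported, here is a minimal R sketch. It assumes the iNEXT package (the package actually used for the rarefaction is not named in this letter) and a made-up abundance vector, so the object names and numbers are purely illustrative:

    library(iNEXT)

    # Hypothetical abundances: number of records per species for one group
    arthropod_abund <- c(12, 9, 7, 5, 3, 2, 2, 1, 1, 1)

    out <- iNEXT(arthropod_abund, q = 0, datatype = "abundance")

    DataInfo(arthropod_abund, datatype = "abundance")  # "SC" column = observed sample coverage
    out$AsyEst  # asymptotic richness estimate with bootstrap s.e. and 95% CI

    # Percentage of the estimated species pool actually observed
    obs_S <- sum(arthropod_abund > 0)
    est_S <- out$AsyEst[out$AsyEst$Diversity == "Species richness", "Estimator"]
    100 * obs_S / est_S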

2) In the Results you still omit the fact that the university team accounted for a substantial proportion of the “bioblitz” increase. This is important for the interpretation of your results. If you do not explicitly include this (as repeatedly requested), it will not be possible to accept your submission, as I feel the conclusions and results remain incomplete and do not adequately represent the data collected. Universities are an important catalyst; I believe your study provides an important example of this, and you need to present it more clearly. I still believe you are missing an opportunity to demonstrate how universities can work together with society. I do not see this as a weakness, rather something to be celebrated. The Discussion is not the place for results (L271-273), so please include the results of the analysis run with and without the authors in the Results.

3) Lack of statistical tests. Please provide statistical tests when you compare values. Comparisons where confidence intervals do not overlap (e.g. arthropods) are obvious, but statistical tests still provide an important basis for comparison: yes, we know the values are different, but by how much? The same applies when confidence intervals do overlap: we know the values are similar, but by how much? Please add statistical tests to support all the comparisons; I believe the R packages used by the authors include functions to statistically compare rarefied communities. I am sorry this was not explicitly requested earlier, but I would not expect to have to ask for this directly, and I am surprised that the authors did not include these tests when they added the new analyses (e.g. cumulative curves for different groups) and new details (e.g. standard errors and confidence limits in the tables).
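For concreteness, a minimal sketch of one such comparison in R, again assuming the iNEXT package (an assumption, since the packages used are not named here) and placeholder abundance vectors rather than the authors' data:

    library(iNEXT)

    # Hypothetical abundances (records per species) for the two datasets
    dat <- list(
      iNaturalist = c(15, 10, 8, 4, 3, 2, 1, 1),
      BioBlitz    = c(30, 22, 18, 12, 9, 7, 5, 4, 3, 2, 2, 1, 1, 1)
    )

    # Richness (q = 0), Shannon (q = 1) and Simpson (q = 2) diversity estimated
    # at a common sample coverage, with bootstrap 95% confidence intervals
    comp <- estimateD(dat, datatype = "abundance",
                      base = "coverage", level = 0.9, conf = 0.95)
    comp  # non-overlapping qD.LCL / qD.UCL intervals can be taken as evidence of a significant difference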

4) You use insects, arthropods, and invertebrates interchangeably in the Results. Please establish which group each result corresponds to and clarify the usage. Please also take time to carefully revise the text to ensure clarity and consistency throughout.

5) The number of observed bird species was lower during your study; I guess this could perhaps be due to missing winter migrants or very rare species? This is an illustrative example to show that there are still details missing from the Results, which limits a reader’s ability to fairly interpret the findings. Providing more details so readers can appreciate and fairly interpret the results is important. Please provide additional text and illustrative examples in the Results to help clarify the observed patterns.
Please also ensure that the R scripts are updated so that the analysis and results are reproducible.

6) Revise and clarify legends so they are stand-alone. For example, Figure 2 (species accumulation curves): the curves have solid and dashed lines, which I assume correspond to observed and extrapolated values. This needs to be clarified in the legend, which should also state the rarefaction technique used. Please double-check the legends for all tables and figures to ensure they are stand-alone, i.e. readers do not have to go back to the text to understand the results presented.

7) The improvements you demonstrate from only 48 hours are impressive, but I still feel this is not presented clearly in the Discussion. I think you should reframe, restructure, and rephrase in ways that help readers understand the differences. The initial paragraph is excessively long and needs to be shortened; consider an opening paragraph that provides an overall starting point for readers so they know what is coming.

Additional clarification is required. For example, the bioblitz results were “less” even, which is to be expected if the earlier results were simplified. As different diversity metrics do not represent the same thing, I suggest you avoid phrases such as L251: “Almost all diversity metrics in our dataset were higher than in the prior work, indicating that significantly more diversity is present in the area than could have ever been realized with the past data.” I believe it is fair to say the bioblitz provided a clear improvement to our understanding of the biodiversity in and around the lake, and then go on to qualify this with a comparison of the differences in diversity metrics.

·

Basic reporting

The authors have now provided new analyses by major taxonomic group and mention (lines 222-228) that these show similar patterns to the full dataset. I would strongly disagree here: for arthropods, herps and plants, the iNaturalist data are so limited that the actual data points overlap with the actual data points of the BioBlitz data. I would not trust the estimated rarefaction curves with such limited data. The only group where the actually collected data show a difference in the accumulation curves is the birds. Indeed, here the difference is contrary to what the full dataset shows, suggesting even that a few professionals would be able to collect more reliable biodiversity data than an infinite number of amateur citizens. In other words, I think the additional analysis showed exactly what the reviewers were worried about in their earlier feedback.

Otherwise, I think the authors have done a good and efficient job of ironing out the wrinkles in the framing of the manuscript. The R script does not seem to have been updated with the new analyses, though.

Experimental design

no comment

Validity of the findings

no comment

Version 0.2

· Feb 5, 2024 · Academic Editor

Major Revisions

Thank you for addressing the reviewer comments in your revision. We have now received two reviews of your revision. One suggested accepting the manuscript as is; the other still has a major issue with the framing of this work.

I have carefully reviewed your revision myself and think the manuscript has merit for publication. However, I agree more with reviewer 2. While your replies and the changes made in response to the simple comments/suggestions were adequate, I do not believe you were able to address the major concerns outlined by myself and reviewer 2. Replies should be supported by robust scientific evidence or theoretical support; replies such as “reviewers did not ask for it” have no place in scientific methodology or professional peer review and verge on insulting.

The most likely explanation for the majority of the results you present is that university involvement provided a significant contribution, and that citizen involvement in isolation would not have advanced significantly beyond the 2008–2021 pattern. The involvement of local communities is vitally important, but it must be presented fairly. Below I highlight some specific issues that remain. You should also review the manuscript once more based on both sets of comments from R1 and R2.

I look forward to receiving your carefully revised version, but unfortunately I need to highlight that the manuscript could still be rejected if the replies/changes are not appropriate.
Sincerely,
Darren

Specific comments
It is my opinion that the framing of the manuscript still does not make sense and that the conclusions are not robustly supported by the data and/or analysis. Based on the data collected, I strongly suggest you reframe the study in the context of universities and wider society working together to provide preliminary results that can be used, together with additional studies, to monitor biodiversity.

As stated previously, citizen science in isolation is relevant but not definitive (see comments by reviewers 2 and 3 in R1). I reinforce this because a rarefaction curve that does not reach an asymptote indicates that additional samples were needed to monitor biodiversity and the response to any future changes (Figure 1). Additionally, it is not possible to know which or how many species were missed. Using a single technique (e.g. citizen science observations in isolation) to monitor biodiversity change is never going to be robust, and extreme care must be taken to avoid misleading conclusions. What would happen if the citizen science were overly generalized/simplified, such that many populations could decline but would still be “present” and obvious (i.e. easy to detect), and therefore any changes would not be detected until it was too late? See:
Simple study designs in ecology produce inaccurate estimates of biodiversity responses: https://doi.org/10.1111/1365-2664.13499
Unstructured citizen science data fail to detect long-term population declines of common birds in Denmark: https://doi.org/10.1111/ddi.12463

The citizen science data should be most useful, but you need to reframe them (see comments by reviewers 2 and 3 from R1, and reviewer 2 in this R2), e.g. as useful for identifying biomonitor/bioindicator species when combined with knowledge of the expected impacts. However, I believe that without complementary techniques the data cannot in themselves be considered scientifically robust. There was a lack of any complementary field studies adopting recognized census techniques for any group, so how can you suggest that the data are robust? That is, you should remove “robust” from the text; for example, the sentence at L81 should read: “Our study provides a baseline for documenting the impact on biodiversity derived from the physical changes to the habitat of Cedar Lake and is relevant to future studies that aim to document biodiversity and its changes over time.”

Main issues that I still feel are most relevant for generating the results needed to support your conclusions and bring you closer to your stated aim of building better baselines:

Run the comparisons again, but also using the area identified in Figure 4 to subset both datasets.

The lack of data collected using standard census techniques, at least for some key groups / bioindicator species (e.g. birds, fish, turtles, butterflies, dragonflies) to help calibrate and complement the citizen science, is relevant and should be discussed more completely in the text.

The insight could be strengthened with the addition of more detailed analyses, such as comparing species in different families for relevant groups. This can provide important support for conclusions related to the benefits of the approach you adopted. The results would need to be restructured and expanded, but I believe this is possible as part of a major revision.

There is a lack of statistical comparisons of the values presented in the tables (e.g. Tables 1 and 2). You could consider a randomization approach to compare the data collected over multiple years with your project data. That is, select only the months in the prior work (2008–2021) that correspond to your project study, obtain confidence intervals for the annual mean of the prior-work values using randomization, and then compare your study values to these confidence intervals.
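A minimal sketch of this randomization in base R follows; the data frame, column names, and month window are placeholders for illustration, not the authors' actual objects:

    set.seed(42)

    # Hypothetical prior-work records (2008-2021): one row per iNaturalist record
    prior_obs <- data.frame(
      year    = sample(2008:2021, 500, replace = TRUE),
      month   = sample(1:12, 500, replace = TRUE),
      species = sample(paste0("sp", 1:80), 500, replace = TRUE)
    )

    project_months <- 4:10  # illustrative; use the months actually covered by the project
    prior_sub <- subset(prior_obs, month %in% project_months)

    # Annual species richness in the prior work, restricted to the project months
    annual_richness <- tapply(prior_sub$species, prior_sub$year,
                              function(x) length(unique(x)))

    # Randomization (bootstrap) 95% CI for the annual mean of the prior work
    boot_means <- replicate(9999, mean(sample(annual_richness, replace = TRUE)))
    quantile(boot_means, c(0.025, 0.975))

    # Compare the project-year value (e.g. the 232 species reported) to this interval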

Please include, in all analyses, additional comparisons excluding all the iNaturalist data collected by the authors or by other students/professors directly involved with the project at Coe College. You know who they all are, as they were part of your university project. This is important and relevant as it enables readers to understand the importance of universities in catalyzing citizen involvement together with a BioBlitz. Currently, the results you present (it seems the vast majority of records were from the university team) suggest that citizen involvement in isolation would not have advanced significantly beyond the 2008–2021 pattern.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

no comment

·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no additional comments

Additional comments

The authors have responded to my concerns in a mainly adequate fashion. That, plus the changes made in response to the comments from the other reviewers, has improved the quality of the manuscript.

For my major issue, I remain in disagreement with the authors. I do agree with many of the points they raise in their answer to my comments, but I think that their answer (and their answers to the AE's concerns) partially contradicts the framing they have adopted for the manuscript.

In short, I think that the theoretical and narrative framework adopted for the manuscript (i.e., “building better baselines”) cannot be answered with this analysis, and maybe not even with this dataset. Nevertheless, I think the dataset here is valuable, as is the analysis itself. It just seems to have a different focus than the manuscript does, as the authors outlined in their answer to me. Whereas providing a reliable or actionable baseline would require much more analysis, I think the current contents already tell us something about how citizens collect data on biodiversity during bioblitz events or using accessible apps to document biodiversity.

Version 0.1 (original submission)

· Nov 2, 2023 · Academic Editor

Major Revisions

Please accept my sincere apologies for the delay in reviewing your interesting submission. I have now received comments from three expert reviewers. All agree that your study is useful and relevant. All three reviewers provide clear guidance and helpful suggestions for improvements that are necessary before the submission can be published. The comments are well articulated so I will not repeat them here. Based on the concerns listed below I feel that a major revision is necessary before the submission can be considered for publication.

General comments/suggestions
The authors ran a dedicated citizen science project using iNaturalist for one year (1 April 2021 to 31 March 2022), which included bioblitz events to help raise awareness and maximize the number of photos.

1) A major challenge in monitoring biodiversity is obtaining standardized samples over time. Running the citizen science project over multiple years would have provided additional insight into the potential of the citizen science data to monitor changes at the lake. I see the authors include satellite images to show that changes have started. It is important to run the comparisons again, but also using the area identified in Figure 4 to subset both datasets; this provides a baseline of the species directly impacted. The results will need to be updated and the figures adjusted accordingly, e.g. perhaps Figure 1 would become two panels so as to also zoom in on the impacted area. Additionally, please clarify in the Discussion how (i.e. on what legal basis) such changes can take place without any biodiversity impact assessment. This is most relevant for readers who are unfamiliar with the relevant Federal and State legislation.

2) It is a shame that the citizen science data were not complemented with data collected using standard census techniques, at least for some key groups / bioindicator species (e.g. birds, fish, turtles, butterflies, dragonflies). The lack of such data to help calibrate and complement the citizen science is relevant and should be discussed.

3) I agree with the reviewers that the analysis is somewhat preliminary and should be strengthened. At the moment you present a general overview. The insight could be strengthened with the addition of more detailed analyses, such as comparing species in different families for relevant groups. This can provide important support for conclusions related to the benefits of the approach you adopted. The results would need to be restructured and expanded, but I believe this is possible as part of a major revision.

4) There is a lack of statistical comparisons of the values presented in the tables (e.g. Tables 1 and 2). You could consider a randomization approach to compare the data collected over multiple years with your project data. That is, select only the months in the prior work (2008–2021) that correspond to your project study, obtain confidence intervals for the annual mean of the prior-work values using randomization, and then compare your study values to these confidence intervals. It could also be informative to look at temporal patterns in the iNaturalist data before and after your project. Although your project appears to have stopped, are there now more iNaturalist records per year (2022, 2023)? This would help support the wider benefits of such projects.

5) Please include additional comparisons in all analyses excluding all the iNaturalist data collected by the authors or other students/professors directly involved with the project at Coe College.


Minor suggestions
Please include metric measures throughout. For example, at L86 you state “..120-acre urban lake…”. I understand the acre is a statutory measure in the USA, but for international readers please also provide this area in appropriate metric units, e.g. square kilometers.
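For reference, a quick conversion sketch (using the standard factor of 1 acre = 0.40468564 ha):

    area_acres <- 120
    area_ha    <- area_acres * 0.40468564   # about 48.6 hectares
    area_km2   <- area_ha / 100             # about 0.49 square kilometres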

L88/89: How deep is the lake? At which depth was the reported lake temperature obtained?

L94 “The biodiversity of Cedar Lake has never been formally assessed because no systematic surveys have ever been conducted there.” Please rephrase, and elaborate on what exactly “systematic” means. Perhaps put this in the context of the minimum requirements in the State of Iowa for a biodiversity assessment, or of recognized best practices for monitoring? It appears that the existing data fall far short of what is necessary to monitor changes over time, but there are at least some previous studies that appear to be “systematic”, e.g. on benthic fauna. I found the following with a quick Google Scholar search.
Connolly, Noreen L. "A species diversity study of the benthic population of Cedar Lake, Illinois." (1981).

Reviewer 3 has suggested that you cite specific references. You are welcome to add it/them if you believe they are relevant. However, you are not required to include these citations, and if you do not include them, this will not influence my decision.

**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

·

Basic reporting

In this study, Ahern and Hughes spent a year adopting a citizen science approach to document the baseline biodiversity of an urban lake located in Eastern Iowa, for which a biodiversity survey was lacking prior to a recently confirmed renovation project. They adopted the online platform iNaturalist together with structured BioBlitz events to perform the documentation. They successfully identified 232 species from 1345 observations, which is comparable to previous observations. Their work demonstrates success in applying citizen science to assess the baseline biodiversity of a particular habitat within a short period of time.

The idea is presented clearly and unambiguously. The manuscript is well written with standard English quality. The literature is appropriately cited. The figures are clear and adequate for presenting the idea of this study. This study is a baseline biodiversity survey, so no specific hypothesis is required.

Experimental design

The site was chosen appropriately, as no baseline survey had been conducted there. The way they carried out the documentation is valid: they recruited a large number of people to take photos and upload them to a public identification platform, which includes experts from different taxonomic groups to identify the species. I think the number of sampling events is reasonable. The survey lasted from April to October, which means the authors covered late spring to autumn. However, the data do not cover winter and early spring. The authors should point out this seasonal effect on biodiversity and the resulting limitations of the dataset. The statistical analyses were performed appropriately, with standard parameters for assessing biodiversity, including the Shannon Diversity Index and Simpson's Evenness. Besides, a potential problem arises if the documentation relies entirely on citizens: citizens are amateurs, and while they may be able to capture sessile or slow-moving species, fast-moving and tiny species would often be missed. I see this study has greatly improved species coverage compared to the past data, but there are still some limitations that I would like the authors to address in the Discussion section.

Validity of the findings

The results are encouraging, but I see some information lacking in the dataset. Most of the identified species are large animals that are free-living around the lake. They may be susceptible to human activities, but they can more easily move away or migrate. The species living under the water or in the soil, however, would be the most vulnerable to this kind of renovation project. I presume there are many more species under the lake or in the soil around the lake. I would like to see how the authors address these missing search points in their study when generating a comprehensive biodiversity survey.

Additional comments

- It would be good to describe how the citizens documented the species in the different events. For example, walking next to the lake and taking photos would be easy, but how can the documentation be done for the area in the centre of the lake, or even for species under the water? Are there any limitations on the documentation, for instance the size of small animals or fast-moving ones? And are there any missing groups, like the benthic fauna of the lake? These would be the most vulnerable part of the lake. You can see from Figure 1 that the sampling is scarce in the middle of the lake compared to the surrounding regions; this may pose a severe bias in the data analysis. These limitations should be discussed in the Discussion section.

- In the Results section, it would be good for the authors to also list or provide the names (at least the common names) of the organisms they observed. For instance, in lines 213-214, they state that 4 birds, 3 invertebrates, 2 plants and 1 fish were observed. It would be nice if the authors could provide the names, so that readers know exactly which species were identified.

- The authors should also point out the seasonal effect on biodiversity and the resulting limitations of this dataset, because only data from April to October were collected.

- Please also state the total number of participants in each survey event and in the overall project.

·

Basic reporting

no comment

Experimental design

The one major issue that I have is whether the comparison between previously citizen-science-collected data and BioBlitz-collected data is a bit like comparing apples and oranges. For most research purposes, it would be strange to compare the whole biodiversity within an area, as it is rare or maybe even impossible to do wholesale mapping of a whole area. Thus, comparisons are usually done through a set of taxa that can be easily surveyed and that can work as a proxy for larger biodiversity; I am thinking of species such as birds or macrophytes here. A real-life scenario would then be looking, for example, at changes in the bird community. If you compared only birds, would your findings still be valid? The other approach would be to map out the occurrences of red-listed species: can you find more endangered species with a BioBlitz than with non-structured citizen participation? As it currently stands, I am not sure the comparison has real-life relevance.

Validity of the findings

no comment

Additional comments

The authors present a manuscript that compares citizen-generated species occurrence data for a lake collected during a longer period of passive data collection with data collected over a shorter period of time through actively organized BioBlitz events, which had the specific aim of collecting data on a diverse set of species. The authors found that the organized collection events not only led to a larger dataset, but that the dataset was also more representative and is expected to lead to a more accurate analysis of the ecological state of the lake.

Citizen science data are more and more used for decision-making, both in policy and in the actual management of different habitats. It is therefore imperative to understand the limitations of these data and whether their quantity and quality can be improved. Thus, I found this research valuable.
The study is well organised and well argued, and while it concerns a rather small lake with a limited amount of data in total, it provides clear results.


Minor issues:
Line 12: What does ”recreation hub” mean? For a non-native speaker such as me, it does not convey much.
Line 19: Who is ”we” who ”experiences changes to global biodiversity”?
Line 20: What does ”the extinction rate of biodiversity” mean?
Line 24: What counts as a ”city”? Urban area, rural area, any human-made structures?
Line 30: How does ”aesthetic changes” create new niches?
Line 42: Does ”organismal diversity” mean species diversity?
In general, there are a lot of heavy sentence structures starting with participles (such as lines 42, 45, 52, 54). Maybe a bit of editing for readability is warranted.
Line 66 onwards: Are these limitations specific to citizen science? I would expect that any non-purposefully collected professional data would have the same limitations.
Line 74: How do people use the lake?

Reviewer 3 ·

Basic reporting

Dear editor,
this manuscript reports an interesting initiative to boost biodiversity data from an urban lake using a sequence of bioblitz events. The authors show that these events have made a considerable contribution, especially in increasing knowledge of the local biodiversity. As evidence of this achievement, they filtered data from iNaturalist into two time periods, which allowed them to compare invited and scheduled contributions (bioblitzes) with opportunistic, self-initiated contributions (data from before the bioblitz events).

Experimental design

Although I applaud the authors' initiative and the quality of the text presented, I am concerned with their experimental design with respect to achieving the main conclusion. On many occasions the authors describe their bioblitz events as a structured sampling method (see lines 14, 15, 77, 141, 259), but there was no evidence of supporting protocols or training for the volunteers collecting the data. Only in the Discussion do they recognize the bioblitz as an approach more akin to semi-structured sampling (see lines 225 and 227).

Validity of the findings

While I understand the use of diversity metrics to compare the datasets, I think some important limitations need to be mentioned to interpret the results, at least in the Discussion section. For example, some biases, such as the difference in the number of observers between the datasets (60 and 22), can also produce the differences they found in diversity. Not to mention that the sets of observers may have different preferences for surveying particular groups of animals and plants. Another issue is the time frames over which these datasets were collected, which are very different. The authors should highlight and discuss such limitations in relation to the main conclusion of the article.
Another point: how do these initiatives complement each other? It would be interesting to present at least a rarefaction curve for the full dataset, to assess whether the different approaches together can provide a better picture of the local biodiversity.

Additional comments

I also feel the authors are missing some important and recent references, such as:

Meeus, S., Silva-Rocha, I., Adriaens, T., Brown, P. M., Chartosia, N., Claramunt-López, B., ... & Groom, Q. J. (2023). More than a Bit of Fun: The Multiple Outcomes of a Bioblitz. BioScience, 73(3), 168-181. – this is an important and conceptual review of the benefits of bioblitz events.

Forti, L. R. (2023). Students as citizen scientists: project-based learning through the iNaturalist platform could provide useful biodiversity data. Biodiversity, 24(1-2), 76-78. – this is an interesting report on the usefulness of engaging citizens to boost biodiversity data using iNaturalist in a poorly-known region.

Rokop, M., Srikanth, R., Albert, M., Radonic, C., Vincent, R., & Stevenson, R. (2022). Looking more carefully: a successful bioblitz orientation activity at an Urban Public University. Citizen Science: Theory and Practice, 7(1). – this is also an interesting case report of a citizen science initiative generating engagement and data on biodiversity in a National Park (on Thompson Island).

Gigliotti, F. N., Franzem, T. P., & Ferguson, P. F. (2023). Rapid, recurring, structured survey versus bioblitz for generating biodiversity data and analysis with a multispecies abundance model. Conservation Biology, 37(2), e13996. – This is a very important article about the limitations and usefulness of bioblitzes for boosting biodiversity data, compared to structured surveys.

**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.

Annotated reviews are not available for download in order to protect the identity of reviewers who chose to remain anonymous.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.