All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear Dr. Dickinson, I am pleased to inform you that this article has been accepted for publication.
[# PeerJ Staff Note - this decision was reviewed and approved by Jennifer Vonk, a PeerJ Section Editor covering this Section #]
Dear Dr. Dickinson, I ask you to make minor corrections before the article is accepted for publication. Please see the reviewer's attached PDF.
Clear and improved from the previous draft. Some new references were added to bolster the intro and discussion. I found a few typos and formatting errors. Some sentences might have missing text or words that were meant to be deleted (see annotated PDF).
No issues here - plenty of detail.
My feelings have not changed from the previous draft - I think these are great results on how dogs and handlers without a history of doing scent work can be trained to help with conservation efforts and adjacent projects related to invasive species like the spotted lanternfly.
Minor revisions suggested - mostly related to some formatting and typos.
No comment
No comment
No comment
I agree with the changes made by the authors.
Dear Dr. Dickinson, I ask you to make some minor corrections to this manuscript before it can be approved for publication.
The paper is clearly written, although numerous long paragraphs would benefit from being broken into shorter ones.
"Plant hopper" should be one word: planthopper.
In the abstract, it is stated that SLF is an economic pest of apples, but this is not really the case. Apples are a host for SLF for only a few weeks during a season, and over the years no damage to apple by SLF has been reported in the field. However, ornamentals are at serious risk of both direct damage and high costs to prevent spread through shipped plant material. Thus, I suggest listing ornamentals here and removing apples as a major economic host.
L74, L104, and a few other places: "Spotted lanternfly (SLF)" appears written out in numerous places. Once the abbreviation has been defined at first use (see L46), the full term does not need to be repeated elsewhere in the paper.
L85: Rutter et al. (2021) is then repeated as (Rutter et al. 2021) in the same sentence. Similar problem on line 573.
The literature review is mostly sufficient, but I think the introduction should point out that, except on small trees, > 95% of SLF egg masses on larger trees are located > 6 m aboveground (see Keller et al. 2020. Dispersion pattern and sample size estimates for spotted lanternfly, Lycorma delicatula (Hemiptera: Fulgoridae) egg masses. Environ. Entomol. 49(6): 1462-1472).
Figures, tables, and raw data shared are all good.
The methods are very well explained and the experimental design is very good. The research questions are relevant and well defined, and the experiments were conducted with sufficient rigor. However, for the distraction odors (negative controls), did the authors consider including egg masses of a different insect species to determine whether dogs can distinguish SLF eggs from those of other insects, such as lepidopteran eggs?
L271-273: Why was the placement of SLF egg masses limited to 1 m above the ground, given that the vast majority of egg masses occur more than 6 m aboveground (see Keller et al. 2020, cited above)?
Given the height of most egg masses, can the authors report the maximum height in a tree at which a live SLF egg mass was detected by sniffer dogs in the FE?
The conclusions are well supported by the results and analyses were statistically sound. I have a few suggestions for the conclusions:
L605-607: There needs to be a citation here when stating that sniffer dogs outperform other detection methods. A study that could be used for comparison was published in 2023 by Keller et al. They found that visual searches were 52% accurate in counting the number of SLF egg masses, and that if there were > 10 egg masses on a tree, counts were 100% effective even on large trees (see Keller et al. 2023. Approach to surveying egg masses of the invasive spotted lanternfly. Environ. Entomol., https://doi.org/10.1093/ee/nvad051).
L494-495: “Despite this, the FE sensitivity aligns with prior studies reporting acceptable levels as low as 25% when compared to alternative detection methods (Mathews et al., 2013).” That study was about detection of dead bats; why not compare to other insects? For example, see Moser et al. 2020. Biosecurity Dogs Detect Live Insects after Training with Odor-Proxy Training Aids: Scent Extract and Dead Specimens. Chemical Senses: https://doi.org/10.1093/chemse/bjaa001
L504-506: Another reason for this could be that the dogs that went on to be tested in the FE were the more successful performers in the ORT trials, which gave the FE a better chance of success relative to all dogs tested.
Nothing further.
The article is written in clear and professional English, and the structure follows scientific standards. The background is well contextualised with up-to-date and relevant references. Figures and tables are appropriate and informative. However, some of them could be better integrated into the discussion. For instance, Figures 6, 8, and 9, which provide insights into search times and blank trials, could be more explicitly cited when addressing specificity and handler uncertainty. Similarly, Table 2 on dog breed distribution could be used to reinforce that this detection model may be applicable to a broad range of companion dogs, not just traditional working breeds. Making these connections more directly in the discussion would improve the overall clarity and internal coherence of the article.
The research is well designed and clearly described. The methodology is appropriate for the question posed, which is relevant and clearly defined. The study addresses a practical need and fills a documented gap in the field. Experimental procedures are explained in enough detail to be replicated. That said, one limitation worth discussing more openly is the variation introduced by the absence of a shared training protocol or standard alert behaviour. While the flexibility is understandable in a citizen science context, acknowledging how this may have affected team performance would strengthen the experimental section. It might also help to suggest practical steps for reducing such variability in future studies.
The findings are well supported by the data, which are clearly presented and include key performance metrics such as sensitivity, specificity, and precision. The statistical treatment is sound and appropriate. The study’s second phase (transition to live samples) confirms the potential applicability of the method in realistic settings, and this is an important contribution. However, the discussion could briefly explore why sensitivity did not improve compared to earlier phases, despite the accumulated experience of the teams. Offering possible explanations or suggestions for further training would add value. Overall, the conclusions are reasonable, tied to the original question, and do not go beyond what the data support.
This is a solid and well-executed study with real-world relevance. It demonstrates a practical model for involving non-professional handlers in species detection with promising results. A few points could be further developed to increase its usefulness: including more actionable guidance in the conclusions (e.g., the benefits of standardising a neutral response or addressing handler interpretation), and reflecting more on the high dropout rate and how future studies might reduce it. These are details that could strengthen the practical application of the findings, but they don’t affect the overall quality or validity of the research.
Dear authors, I ask you to carefully correct the shortcomings pointed out by the reviewers and to respond to each of their fundamental comments. The manuscript is poorly formatted, and the problem is not only that it appears not to have been checked by a proficient English speaker, but also that the wording is often illogical and inconsistent.
**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** The Academic Editor has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff
Too soon to comment on.
Didn't get that far.
Didn't get that far.
I had only read through the introduction when I realized that this manuscript should be sent back to the authors to clean up all of the typos, missing citations (e.g., a "?" where a citation should be, in several places), and duplicate citations (the same citation given two times in a row). None of the in-text citations follow the journal's guidelines; they are not properly within parentheses. This paper was not adequately reviewed by the authors before submission. I am willing to review it once the authors have cleaned up the manuscript.
This might simply be an issue with uploading or migrating the manuscript to the journal, but it seems that dash symbols (-) were removed, e.g., line 25 "a field test over 3-6 months" shows no dash. Also check line 54 - "each mass contains 30-60 eggs," not 3060. That's a lot of eggs!
There might also be an issue with how references are rendered in the text: the year appears in parentheses but the author names appear in the sentence without proper formatting. In some places a reference is used twice, e.g., line 59 cites Aviles-Rosa et al. (2023) two times.
Line 74 says "have excelled in sport trials?" Here and in other places there is a stray question mark. I am not sure whether this is a missing reference, a typo, or a symbol that did not transfer to the PDF file, similar to the aforementioned issue with dash symbols.
Aside from some of the formatting issues, the English is well-written and clear. The tables and figures look good.
The methods and photos were very clear and easy to understand. Impressive numbers of volunteers provided rigorous testing for both indoor and outdoor trials. I especially liked the summary of participation and success rates at the beginning of the results section. Pretty valuable information for future researchers who might recruit detection dogs and handlers from the public when they have a minimum sample size in mind.
I was a little confused at first about why wind and temperature were not analyzed for experiment 2, but upon rereading section 3.1 it was obvious that these trials with live eggs were only done outdoors with the dogs that completed the field trial with nonviable eggs. The bullet points in this section helped keep the studies separated by their main goals and findings.
The main takeaways are that a variety of dog breeds and handler experience with scent detection can contribute to these scientific studies and serve stakeholders on the hunt for invasive pests. The study also provides a great model design for how you can train dogs to search for SLF eggs both indoors and outdoors.
It sounds like the search for participants was mainly (or only) done through Facebook. I wonder if you would get higher overall retention of volunteers using different platforms. Facebook casts a very wide net, but that might be what it takes to get 70+ people to go through the entire study from start to finish.
Line 55-57 says "human visual searches for SLF egg masses are challenging...[reasons]". This might be too broad of a generalization. I would rewrite the sentence to say something along the lines that they COULD be challenging to find and that it really depends on where you are searching. I've searched for SLF eggs before, and I do think you can train vineyard workers and volunteers to properly ID and find egg masses in vineyards. There are known patterns and SLF behaviors for figuring out where they will lay their eggs (see the reference I mention in the additional comments section). But if you're searching for egg masses in a busy shipping yard, forests, along railroads, and other places with difficult terrain, scent detection dogs are preferred, since those areas can be difficult for humans to navigate.
Line 47 - Most literature on SLF avoids using "the SLF" and simply writes "spotted lanternfly" at the start of a sentence or uses SLF as an abbreviation in the remainder of the text.
Reference worth reading - a publication from 2024 that also used scent detection dogs on SLF in vineyards and surrounding areas. Only a small number of dogs and human searchers were involved, but it also discusses environmental factors and the efficiency of detection dogs for SLF.
https://doi.org/10.1002/ecs2.70113
https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecs2.70113
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.