Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on October 5th, 2021 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 15th, 2021.
  • The first revision was submitted on December 17th, 2021 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on January 14th, 2022 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on January 17th, 2022.

Version 0.3 (accepted)

· Jan 17, 2022 · Academic Editor

Accept

Thank you for addressing the last of the reviewer concerns, and congratulations again!

[# PeerJ Staff Note - this decision was reviewed and approved by Paula Soares, a PeerJ Section Editor covering this Section #]

Version 0.2

· Jan 7, 2022 · Academic Editor

Minor Revisions

Thank you for the extensive additional work, which has greatly improved the manuscript. One reviewer has brought up a couple of points that may be worth consideration. As such, the decision is "minor revisions", but whether to address these comments is left to the authors' discretion. No additional external reviews will be required for the manuscript, but please provide a rebuttal if you choose not to make these modifications.

The manuscript was originally accepted into PeerJ Computer Science. However, Reviewer 1 does bring up a valid point regarding the choice of journal, one that could positively affect the breadth of the audience likely to see this work. After consultation with editorial staff, we agree that your submission could be transferred to PeerJ Life & Environment.

Again, thank you for your work.

[# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful #]

Reviewer 1 ·

Basic reporting

• Language used is clear throughout the article.
• Literature is well referenced & relevant.
• Structure conforms to PeerJ standards.
• Figures are high quality and described, but there are too many (please see general comments).
• Complete raw data is supplied.

Experimental design

• Original primary research is NOT within Scope of the journal.
• Research question well defined, relevant & meaningful.
• It is stated how the research fills an identified knowledge gap.
• Rigorous investigation performed to a high technical & ethical standard.

Validity of the findings

• Conclusions are well stated, linked to original research question & limited to supporting results.

Additional comments

I would like to thank the authors for their extensive work and the quality of their manuscript. However, I disagree with the authors and I still think that the Original primary research is NOT within the Aims and Scope of the journal, as stated in point 5 of the Aims and Scope page (please see https://peerj.com/about/aims-and-scope/cs). For clarity, I have copied the cited paragraph:
“Submissions should be directed to an audience of Computer Scientists. Articles that are primarily concerned with biology or medicine and do not have a clearly articulated applicability to the broader field of computer science should be submitted to PeerJ - the journal of Life and Environmental Sciences. For example, bioinformatics software tools should be submitted to PeerJ, rather than to PeerJ Computer Science.”
Finally, as I understand that the editor considers that the paper IS within the scope of the journal (given the reviewing process), I recommend the paper for publication as is.

Reviewer 2 ·

Basic reporting

The authors have answered all my questions, but the importance of automatic quantification has not been well explained. I believe that the improved English will not be a problem for understanding. They have also corrected the incorrect references.

Experimental design

no comment

Validity of the findings

I believe the validity of the image algorithm on particular plaque plates (96-well or other sizes) is not a problem. On the contrary, the repeatability of the plaque assay itself should be addressed.

Additional comments

I previously said that the authors should provide more application scenarios for automatic quantification of viral plaques. The authors have not provided substantial applications and have failed to emphasize the importance of automatic quantification. In other words, they should not limit it to counting viral plaques; the sizes of plaques in the presence of antivirals should also be included. In fact, high-throughput antiviral drug screening using a plaque assay in 96-well plates can provide such a scenario (DOI: 10.1002/jmv.25463).

Version 0.1 (original submission)

· Nov 15, 2021 · Academic Editor

Major Revisions

While the reviewers were generally positive, they did bring up a number of primarily methodological concerns. Please address these as part of your revision.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

• Language used is NOT clear throughout the article; several ambiguities are present (please see general comments).
• Literature is well referenced & relevant.
• Structure conforms to PeerJ standards.
• Figures are high quality and described, but there are too many (please see general comments).
• Complete raw data is supplied.

Experimental design

• Original primary research is NOT within Scope of the journal.
• Research question well defined, relevant & meaningful.
• It is stated how the research fills an identified knowledge gap.
• Rigorous investigation performed to a high technical & ethical standard.
• Methods are described with sufficient detail & information, but replication requires specific components and machinery (please see general comments).
• Sample size well chosen for the problem at hand.

Validity of the findings

• Data is robust, but critical controls, e.g. different forms of plaques (from different viruses), are not present.
• Conclusions are well stated, linked to original research question & limited to supporting results.

Additional comments

The manuscript by Phanomchoeng et al. presents an automated quantification machine for viral plaque counting. This machine is a convenient way to reduce the workload in plaque counting. The authors show the performance of the machine on a Dengue virus dataset that they produced, comparing it to manual (expert) counting.
I commend the authors for their extensive work and tutorial videos, and I found their proposed method interesting and useful. However, I believe there are several concerns that should be addressed before acceptance.

Major Concerns:

1. As the contribution of the authors is a bioinformatics software tool aimed at virologists with no prior experience in image analysis, the Computer Science category of the PeerJ journal seems inappropriate.
2. The English language is not clear enough and should be improved. Some examples where the language could be improved include lines 57, 58, 81, 82, 85, 100, 140, 170, 171, 191, 192, 260, 261, 282, 308, 370, 373 and 407. The current phrasing makes comprehension difficult. I suggest you have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or contact a professional editing service.
3. The authors propose an automated quantification machine and the software to operate it. Although the authors claim that the hardware is relatively simple to set up, I could not test it myself because I do not have access to the specific instruments.
4. It is not clear how the machine would perform if presented with plaque assays from different viruses, or with differently shaped plaques. In fact, the software was only evaluated using Dengue virus. The authors should discuss this or narrow the scope of the main title accordingly.
5. The authors should consider reducing the total number of main figures. Perhaps some figures could be supplementary? For example, Figures 5, 6, 10 and 12. Also, Figures 13 and 14 could be merged into a single image.
6. How were the “Maximum Number of Errors” for the expert defined (Table I)? This relevant point seems too arbitrary.

Minor points:

1. The phrase “The viral plaques appear in the image as white circled areas (Fig. 3) since the viruses eat the cell around themselves.” should be rewritten.
2. The phrase “Thus, when the number of viral plaques is large, it is more difficult to justify the number of viral plaques.” should be rewritten, bearing in mind that the authors should not justify their results but rather present them.
3. Typo: in line 98, the number 4 should be a superscript (“cells at 1 x 104 cells”).
4. Typo: in line 310 there seems to be an extra period (“the counting by the expert and machine. Pearson's”).
5. The authors should be more explicit about the future developments regarding the Firebase database.
6. The authors should be more explicit about which filters are applied to the binary image (line 216).
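As an illustration of the level of detail requested in point 6, the sketch below shows one common way to filter a binary plaque mask (morphological opening/closing followed by small-component removal, using OpenCV in Python). This is an assumption about typical practice, not a description of the authors' actual pipeline; the function name and parameters are hypothetical.

    import cv2
    import numpy as np

    def clean_binary_mask(mask, min_area=50):
        """Remove speckle noise and tiny artifacts from a binary plaque mask."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        # Opening removes isolated foreground noise pixels.
        opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        # Closing fills small holes inside plaque regions.
        closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
        # Discard connected components smaller than min_area pixels.
        n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
        cleaned = np.zeros_like(closed)
        for label in range(1, n_labels):
            if stats[label, cv2.CC_STAT_AREA] >= min_area:
                cleaned[labels == label] = 255
        return cleaned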

Reviewer 2 ·

Basic reporting

1. Is an automated quantification machine really needed for counting plaques? The authors should provide more application scenarios for automatic quantification of viral plaques. It is easy to count 1-10 plaques in one well at an appropriate dilution of viral stocks. The authors are focusing on the statistics and algorithms of image recognition while ignoring the repeatability of the viral plaque assay itself.
2. On the other hand, the shapes of plaques are variable for some viruses. How can overlapping plaques (two or more) be distinguished from a single enlarged plaque caused by viral mutations? What about the smaller plaques caused by attenuated viruses in the plaque assay image?

Experimental design

The experiments on image recognition are well designed and within the scope of the journal.

Validity of the findings

I have no questions about this part.

Additional comments

Citation 1 (Delbruck, 1940) is inappropriate for the first sentence of the introduction section. It is well accepted that the first viral plaque assay on eukaryotic cell lines was described by Dulbecco (Dulbecco, R., 1952. Production of Plaques in Monolayer Tissue Cultures by Single Particles of an Animal Virus. Proc. Natl. Acad. Sci. U. S. A., 38(8), 747-752).

Reviewer 3 ·

Basic reporting

From Figure 5 onwards, the figure numbers do not match the descriptions, and the manuscript becomes very difficult to follow. Some sentences in the text are not finished (e.g. line 58: "Recently, automated imaging-based counters for plaque assays have been employed, but the current versions."). There are also parts that are vague regarding the description of the algorithm. For instance, in line 194, the authors write "a shape-based matching algorithm is employed".
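To illustrate the ambiguity: "shape-based matching" could mean several things, for example Hu-moment comparison of detected contours against a circular template. A minimal, purely hypothetical sketch of that interpretation (using OpenCV in Python) is given below; the authors' actual algorithm is not described in enough detail to know whether it resembles this.

    import cv2
    import numpy as np

    def match_circular_plaques(binary_mask, max_dissimilarity=0.2):
        """Keep contours whose shape is close to a filled circular template."""
        # Circular reference shape (plaques typically appear as round clearings).
        template = np.zeros((101, 101), dtype=np.uint8)
        cv2.circle(template, (50, 50), 40, 255, -1)
        template_contour = cv2.findContours(
            template, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0][0]

        contours, _ = cv2.findContours(
            binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        matches = []
        for contour in contours:
            # Lower score means the contour is more similar to the template.
            score = cv2.matchShapes(contour, template_contour,
                                    cv2.CONTOURS_MATCH_I1, 0.0)
            if score <= max_dissimilarity:
                matches.append(contour)
        return matches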

Experimental design

see next section

Validity of the findings

Phanomchoeng et al. describe a new method to enumerate viral plaques in well plates. Their work is original in that it combines hardware and software into a single solution. There are, however, several points that I feel need to be addressed.

Accessibility -- For a publication such as this one, the described software should be published as well. I could not find in the manuscript a link to a repository and a set of data/images. The PeerJ editorial policy states: "For software papers, 'materials' are taken to mean the source code and/or relevant software components required to run the software and reproduce the reported results. The software should be open source, made available under an appropriate license, and deposited in an appropriate archive. Data used to validate a software tool is subject to the same sharing requirements as any data in PeerJ publications." Furthermore, I advise the authors to name their method and to provide online documentation (otherwise, I anticipate low adoption). In several instances, the authors describe their method as cost-effective; it would be necessary to give an estimate of the overall price.

Performance evaluation -- The number of plaques is the variable of interest, and the authors find a good correlation between expert and machine. It is difficult for the reader to understand why errors are considered negligible for large plaque counts (>12). Could the authors explain or provide references? In several instances, the authors describe the correlation between expert and machine as significant, which is trivial. A fairer assessment would be to compare the machine-vs-human error with the human-vs-human error (in other words, is the error/bias due to automation close to the error between experimenters?). What is the justification for Table 1? It seems extremely arbitrary and unnecessary.
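A minimal sketch of the comparison suggested above, under the assumption that per-well counts from two independent experts and from the machine are available (the arrays below are hypothetical placeholders, not data from the manuscript):

    import numpy as np
    from scipy import stats

    # Hypothetical per-well plaque counts; replace with real data.
    expert_a = np.array([3, 8, 15, 22, 40, 55])
    expert_b = np.array([3, 9, 14, 23, 42, 53])
    machine = np.array([3, 8, 16, 21, 41, 57])

    # Correlation alone is almost guaranteed to be "significant" here.
    r, _ = stats.pearsonr(expert_a, machine)
    print(f"Pearson r (expert A vs machine): {r:.3f}")

    # More informative: is the machine's disagreement with an expert
    # comparable to the disagreement between the two experts?
    err_machine = np.abs(machine - expert_a)
    err_human = np.abs(expert_b - expert_a)
    print(f"Mean |machine - expert A| error: {err_machine.mean():.2f}")
    print(f"Mean |expert B - expert A| error: {err_human.mean():.2f}")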

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.