Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on August 16th, 2023 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on September 26th, 2023.
  • The first revision was submitted on December 1st, 2023 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on January 22nd, 2024 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on March 11th, 2024.

Version 0.3 (accepted)

· Mar 11, 2024 · Academic Editor

Accept

The paper is ready to be accepted. Congratulations!

[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]

·

Basic reporting

No comment.

Experimental design

No comment.

Validity of the findings

No comment.

Additional comments

N/A.

Version 0.2

· Jan 12, 2024 · Academic Editor

Minor Revisions

The writing should be improved. The motivation and applicability of the proposed research should be highlighted.

Reviewer 1 ·

Basic reporting

no comment

Experimental design

The experimental section is sufficient.

Validity of the findings

The method is comparable with the latest methods.

Additional comments

There is an issue with the writing style of the paper; it reads like modules spliced together. The authors could describe their own work in more detail.

·

Basic reporting

No comment.

Experimental design

No comment.

Validity of the findings

No comment.

Additional comments

Personally, I still find it strange to deploy larger models for diminishing returns. However, the proposed model shows a clear advantage over existing studies, and there is an attempt to justify the motifs learned, so it will suffice. It would be very nice to add a measure of "return on investment" in terms of the increase in model complexity (number of parameters) and the gain in predictor performance.
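
For concreteness, such a measure could be as simple as the accuracy gain per additional parameter over a smaller baseline; the sketch below uses placeholder model sizes and accuracies (not values from the paper) purely to illustrate the idea.

```python
# Illustrative sketch of the suggested "return on investment" measure:
# performance gain per million additional parameters, relative to a baseline model.
# All numbers below are placeholders, not results from the paper under review.

def roi(baseline_params, baseline_acc, model_params, model_acc):
    """Accuracy gain (percentage points) per million additional parameters."""
    extra_params_m = (model_params - baseline_params) / 1e6
    acc_gain = model_acc - baseline_acc
    return acc_gain / extra_params_m if extra_params_m > 0 else float("inf")

# Example with placeholder numbers: a 1.2M-parameter baseline at 92.0% accuracy
# versus a 25.6M-parameter model at 97.5% accuracy.
print(roi(baseline_params=1.2e6, baseline_acc=92.0,
          model_params=25.6e6, model_acc=97.5))
```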

Version 0.1 (original submission)

· Sep 26, 2023 · Academic Editor

Major Revisions

Based on the review reports, the paper needs a major revision.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 1 ·

Basic reporting

The manuscript is well organised.

Experimental design

In Section 3.1 (Results: DiatomNet vs. Other CNNs):
One model was trained on the original dataset, which we could call m1. Another model was then trained on the augmented dataset, which we could call m2.
In Section 3.2 (Results: Transfer Learning):
Does the pre-trained model refer to m1? The re-trained model, which we could call m3, was then trained on the augmented dataset and evaluated on the augmented dataset.
The performance of m2 and m3 differs. Could the authors explain the reasons?

Why were m2 and m3 not evaluated on the original dataset, as m1 was?

How did the authors set the parameters of the other CNN models?

Validity of the findings

The conclusion is well stated.

·

Basic reporting

Good review. Code is included. The literature review could also cover other image networks.

Experimental design

It is unclear why other image segmentation models were not considered. Why CNNs? What about comparisons to graph network models? In general, it would be nicer to see how one can extract features and use them on new diatoms. As for the annotations in Figure 1, these appear to be incredibly simple shapes.

Validity of the findings

The models clearly show the classic overtraining signal: training loss goes to 0 (and accuracy to 100%) while the validation metrics saturate.

Additionally, since the splits are per class, it is wholly unclear how the model would perform on new (small-sample) data.

The model's baseline accuracy is suspiciously high as well. Given the simplicity of the shapes of the "diatoms", it seems like the model is learning fairly arbitrary representations.

There is no attempt to discuss or visualize what the layers are learning.
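
One common way to provide such a view, assuming a PyTorch implementation (the model, layer names, and input below are placeholders rather than the paper's DiatomNet), is to plot the learned first-layer filters and to capture intermediate feature maps with a forward hook.

```python
# Illustrative sketch: visualizing what a CNN's layers learn.
# The model here (ResNet-18) and the chosen layers are placeholders,
# not the architecture from the paper under review.
import torch
import matplotlib.pyplot as plt
from torchvision.models import resnet18

model = resnet18(weights=None)  # substitute the trained model under review
model.eval()

# 1) Inspect the first convolutional layer's learned filters.
filters = model.conv1.weight.detach()          # shape: (64, 3, 7, 7)
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    f = (f - f.min()) / (f.max() - f.min())    # normalize each filter to [0, 1] for display
    ax.imshow(f.permute(1, 2, 0))              # channels-last for imshow
    ax.axis("off")
plt.savefig("first_layer_filters.png")

# 2) Capture intermediate feature maps for one input image via a forward hook.
feature_maps = {}
def hook(_module, _inp, out):
    feature_maps["layer1"] = out.detach()

model.layer1.register_forward_hook(hook)
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))         # replace with a real diatom image tensor
print(feature_maps["layer1"].shape)            # e.g. torch.Size([1, 64, 56, 56])
```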

Additional comments

I think that, with more insight, this work can find a place in the literature, given that this seems to be a popular system of study. However, most of the literature cited consists of conference papers, which do not have the same quality standards. In particular, note that a novel dataset / architecture found experimentally without any theory (post-hoc or predesigned) is *not sufficient*. It is *necessary* to include a feature-wise explanation.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.