Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on December 4th, 2017 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on December 29th, 2017.
  • The first revision was submitted on January 10th, 2018 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on January 11th, 2018.

Version 0.2 (accepted)

· Jan 11, 2018 · Academic Editor

Accept

The authors carefully followed the reviewers' suggestions while revising the manuscript, which now appears to be in good shape for publication.

Version 0.1 (original submission)

· Dec 29, 2017 · Academic Editor

Major Revisions

The reviewers considered the paper potentially useful for the PeerJ audience, but suggested some changes. In particular, a tutorial describing how to install and use the program is mandatory.

· Reviewer 1

Basic reporting

Overall good: readable, very clear, well-structured and appropriately contextualized.

Experimental design

I think the comparison with prior art is well designed and thoroughly explored. I wonder: is Fiji's Analyze Skeletons plugin the only comparable software?

Validity of the findings

Looks good.

Additional comments

This paper was a pleasure to read. It's exciting to see another scientific application of Numba (indeed, there aren't very many yet). I appreciate the use of real experimental data and the detail-oriented comparison to existing software.

· Reviewer 2

Basic reporting

Nicely written, objective, and to the point. The software is available and is licensed under a FOSS model, but the authors should also include a tutorial, so users can both test their installation and learn how to use the software. I see that they have included test data, which is great. But it is useless to most users, as I will expand on below.

Experimental design

PeerJ "considers articles in the Biological Sciences, Environmental Sciences, Medical Sciences, and Health Sciences", and this article, being bioinformatic in nature, makes it. But as the software package is right now, it is more of a mathematical work than something usable by the PeerJ audience.

A significant flaw of this work is the complete lack of any documentation whatsoever for the software. There are no instructions on how to install or use it. No tutorial on how to use the program or its alleged GUI, no screenshots of it in action.

Thus, very few, if any, biologists will use this program as it is right now. I have at least some experience programming in three languages (not Python, though), and I have no clue what to do to get Skan up and running. So I did not, and I have to take the authors at their word that the program works, because I know very little Python. Other biologists typically know even less than I do. Time is limited, so I am afraid people will go use Fiji instead.

The authors say Skan is a library with an extensive API. However, an API is not much use without documentation; I don't think many people will read your code to find out how to access your functions, what they return, etc.

Validity of the findings

No comment.

Additional comments

The article itself seems OK to me, although I thought the discussion was a little thin. But, OK, no use in belaboring the point if there is not more to say.

What I believe the authors must do, in order to avoid this apparently nice piece of software becoming yet another example of "abandonware" littering the scientific softwarescape, is to write a point-by-point tutorial that:

1) Explains to general biologists, who I believe are the main target users of Skan after all, how to get the program to work;
2) Explains to general biologists how to use the program (GUI or not), by employing the included test data set (it is advisable to also include the expected output in a directory, so users can compare what they get with what was expected).

Besides this "general user" guide, in order for the Skan API to be usable by other programmers, there must also be some documentation on how to use the API (e.g., what should be imported to begin with?) and of each of the available functions. Save the many user a significant amount of time later by spending a bit of time now and they will be very grateful to you.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.