Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on October 6th, 2022 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 14th, 2022.
  • The first revision was submitted on January 17th, 2023 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on February 9th, 2023 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on February 20th, 2023.

Version 0.3 (accepted)

· Feb 20, 2023 · Academic Editor

Accept

This manuscript is ready for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Computer Science Section Editor covering this Section #]

Reviewer 2

Basic reporting

no issue

Experimental design

no issue

Validity of the findings

no issue

Additional comments

I am satisfied with the latest revision.

Version 0.2

· Feb 6, 2023 · Academic Editor

Minor Revisions

Address the remaining issues within the manuscript.

Reviewer 2

Basic reporting

The paper has been significantly improved.

Experimental design

I have only a few notes on this revised version of the manuscript. I hope that in the final version the author uses notation that is already in common use, for example standard flowchart symbols. Inputs and outputs in a flowchart are drawn differently from processes. There are many kinds of symbols, but at the very least the author needs to differentiate the input/output and process blocks in Figure 5 to increase the readability of the diagram.

Validity of the findings

I am satisfied with the improvements made by the author.

Additional comments

-

Version 0.1 (original submission)

· Nov 14, 2022 · Academic Editor

Major Revisions

Please incorporate the reviewers' comments.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1

Basic reporting

This paper proposes a skin detection method that improves YOLOv4-tiny with attention mechanisms. The automatic bathing robot changes the bathing mode automatically by detecting different skin areas of the human body in the bathing scene through a visual sensor. The paper enhances the feature extraction and feature fusion capabilities of the network by adding three attention mechanisms, which ultimately improves the detection of different skin regions of the human body (a generic sketch of such an attention block is given after these comments for context).
The subject is of interest. The following problems should be addressed properly or explained reasonably.
In the Introduction, the focus of the related work is not clear. Many traditional methods are introduced, yet this work focuses on methods based on deep learning. Current deep-learning SOTA methods should be the ones mainly described.
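As context for the attention mechanisms discussed throughout this review (CBAM is one of those referenced later, e.g., CBAM_YOLOv4-tiny), here is a minimal sketch of a CBAM-style block in PyTorch. It is illustrative only: the reduction ratio, kernel size, and the point at which the block would be inserted into a YOLOv4-tiny feature map are assumptions, not the authors' implementation.

```python
# Minimal, illustrative CBAM-style attention block in PyTorch.
# NOT the manuscript's implementation; reduction ratio, kernel size, and
# insertion point are assumptions made for readability.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Channel attention followed by spatial attention on an (N, C, H, W) map."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 2-channel (avg, max) map -> 1-channel mask.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size,
                                      padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel weights from global average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.channel_mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial weights from channel-wise average and max maps.
        spatial = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(spatial))


# Example: refine a backbone/neck feature map before a detection head.
feat = torch.randn(1, 256, 26, 26)   # e.g., a 26x26 feature map with 256 channels
refined = CBAM(256)(feat)            # same shape, attention-refined
```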

Experimental design

In the experimental setting, the only comparison method used is YOLOv4-tiny. That alone is not convincing enough to claim that the proposed YOLOv4-based algorithm achieves a high mAP for skin detection in bathing scenes. Other recent models should be compared against the YOLOv4-tiny model, and the results should be presented clearly.
The Materials & Methods section contains not only the work of other researchers (the three attention mechanisms and the network structure of YOLOv4), but also experimental data from some of those methods and the experimental content of this paper (the improved YOLOv4-tiny structure). The authors need to highlight this paper's innovative contributions.

Validity of the findings

The data is a self-made dataset: a total of 1,500 images containing human skin were collected, considering factors such as location, illumination, resolution, blur, and water mist. The total amount of data is small, and the data samples should be clearly explained.
This paper is the first to study skin detection technology that can identify different parts of the human body, and it also builds a dataset. However, using a visual sensor to detect skin raises personal privacy issues. The authors should address this problem clearly to better demonstrate the advantages of the proposed method.

Additional comments

In the experimental results, a comprehensive evaluation index W is established, where A represents the change in the weight file and B represents the change in mAP. The basis for establishing this comprehensive evaluation index should be introduced in detail, and the source of the relationship between A and B should be analyzed (see the illustrative sketch after these comments).
The authors are advised to proofread the draft, including formula letters, labels, figure sizes, etc. The method abbreviations in the text, tables, and figures are not consistent, e.g., CBAM_YOLOv4-tiny in the text and tables versus YOLOv4-tiny_CBAM in the figures. Figures and tables are placed at the end of the article, which makes the typesetting unreasonable. The English writing should also be improved.
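To make the reviewer's request concrete, the following purely hypothetical sketch shows one common way a composite index over a change in weight-file size (A) and a change in mAP (B) could be defined, namely as a weighted combination. The function name, weights, and linear form are assumptions for illustration and are not taken from the manuscript.

```python
# Hypothetical illustration only: one common form of a composite trade-off
# index between model-size change (A) and accuracy change (B). The weights
# and the linear form are assumptions, not the manuscript's definition of W.
def composite_index(delta_weight_mb: float, delta_map: float,
                    alpha: float = 0.5, beta: float = 0.5) -> float:
    """Higher mAP gain raises the score; larger weight-file growth lowers it."""
    return beta * delta_map - alpha * delta_weight_mb


# Example: a variant that adds 0.4 MB to the weight file but gains 2.1 mAP points.
print(composite_index(delta_weight_mb=0.4, delta_map=2.1))  # 0.85
```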

Reviewer 2

Basic reporting

The main idea of this manuscript is sound: it recognizes not only the skin but also the specific skin area. The writing is well presented and easy to understand.

Experimental design

The method used is common in computer vision. There is no specific novelty, but that is still acceptable because the proposed research is application oriented. Unfortunately, the equations look messy and are poorly presented (Eqs. 5 and 6). An equation is not just an ornament in a manuscript; it explains the method you are using and where your method stands.

Validity of the findings

The dataset may still need to be enlarged for further research; for now it might be sufficient. Unfortunately, I do not see a detailed description of this dataset. The authors mention in line 100 the factors they considered when building the dataset, but the details of the dataset for each factor are not well explained. Does each image contain all the areas to be recognized? Are some areas covered by other body parts? How many images are there for each factor? One factor may produce high accuracy while another does not, and the authors could thereby adjust the composition of the dataset so that accuracy looks good. Furthermore, the authors did not take skin colour into account. Ideally, the proposed model should be able to handle a wide range of skin tones, regardless of ethnicity, yet the authors did not demonstrate the robustness of the proposed model on the skin of various races.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.