All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
After checking the reviewers' comments, I think the paper can be accepted for publication. Nevertheless, I encourage the authors to incorporate the advice given by Reviewer 1 into the final version of the article.
[# PeerJ Staff Note - this decision was reviewed and approved by Keith Crandall, a PeerJ Section Editor covering this Section #]
The authors have addressed and clarified the questions from the previous revision. The introduction and methods are well supported by the literature. There are some grammatical errors in the manuscript, e.g., in lines 242-246.
The authors used several evaluation metrics, some of which are specially suited to imbalanced data, which was appropriate. In lines 324-329 the authors stated that the proposal was evaluated using the following settings: Task 1, 5000 images for training and 403 for testing; Task 2, 9618 for training and 955 for testing; Task 3, 5219 for training and 536 for testing. Could you briefly clarify in this section of the manuscript how many times you repeated this evaluation? For example, did you evaluate Task 1 with 5 repetitions, choosing a different set of 403 images for testing and 5000 for training in each? The header of Table 4 seems to give this information; however, the full setting is expected to be explained in the section already mentioned.
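A repeated random hold-out of the kind asked about here can be sketched in a few lines. This is a minimal, generic illustration using the Task 1 sizes mentioned above as a toy example; it is not the authors' actual pipeline, and the function name `repeated_holdout` is a placeholder:

```python
import random

def repeated_holdout(items, n_test, repetitions, seed=0):
    """Yield (train, test) splits; each repetition draws a
    different random test set of size n_test."""
    rng = random.Random(seed)
    for _ in range(repetitions):
        shuffled = items[:]
        rng.shuffle(shuffled)
        yield shuffled[n_test:], shuffled[:n_test]

# Toy data: 5403 "images" split 5000/403, repeated 5 times,
# mirroring the Task 1 setting described above.
data = list(range(5403))
splits = list(repeated_holdout(data, n_test=403, repetitions=5))
```

Reporting the mean and standard deviation of the metrics over such repetitions is what the table header appears to suggest, hence the request to state it explicitly in the text.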
Results corroborated that the proposal outperformed other state-of-the-art techniques.
The authors have responded to my comments.
The presentation has been revised, and the authors have responded to my comments.
One of the reviewers has serious concerns that have to be addressed before the paper is ready for publication. Please prepare a new version that considers all of their suggestions.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
The work proposes a multi-task pipeline that takes advantage of the growing advances in deep neural network models. The authors used Inception-v3 and transfer learning, as well as widely adopted datasets. The paper is in the scope of PeerJ Computer Science. The introduction is extensively supported by the literature. There are some grammatical errors in the content that need to be corrected. Some images are of low quality, such as Figures 1, 4, 5, 6, 7, 8, and 9; the authors should consider using vector graphics formats such as EPS.
A big computational effort was made. The authors evaluated the segmentation proposal using COVID-19 CT Segmentation Dataset, which is suitable.
Random transformations such as rotation, horizontal and vertical translation, zooming, and shearing were applied to increase the training data. However, there is a big concern regarding data augmentation. The authors should describe in detail how data augmentation was applied, e.g., how many images were added to each category? Was data augmentation used in training, validation, and testing? The above can affect the results if augmented images from training were used in validation or testing.
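The leakage concern raised here is that an augmented copy of a test image may end up in the training set. Splitting first and augmenting only the training portion avoids this. A minimal sketch follows; `augment` is a hypothetical stand-in for the rotation/translation/zoom/shear transforms, not the authors' code:

```python
import random

def augment(image, copies=3):
    """Placeholder for rotation/translation/zoom/shear; it just
    tags each augmented copy so the example is runnable."""
    return [f"{image}-aug{i}" for i in range(copies)]

def split_then_augment(images, test_fraction=0.2, seed=0):
    """Split first, then augment only the training portion."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    test, train = shuffled[:n_test], shuffled[n_test:]
    # Augmented copies are derived from training images only,
    # so no variant of a test image leaks into training.
    train_augmented = train[:]
    for img in train:
        train_augmented.extend(augment(img))
    return train_augmented, test

train, test = split_then_augment([f"img{i}" for i in range(10)])
```

Doing the augmentation before the split (or augmenting the whole dataset and then splitting) would let near-duplicates of test images appear in training, inflating the reported metrics.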
Apparently, the authors used hold-out for training and testing; however, this is one of the simplest evaluation methods. Considering the low number of available images, there are other more exhaustive evaluation methods, such as k-fold cross-validation and leave-one-out cross-validation, which are more appropriate in this context.
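For reference, the k-fold cross-validation suggested here can be sketched generically as follows (a stdlib-only illustration, not tied to the authors' data; leave-one-out is the special case k = number of samples):

```python
def k_fold_splits(items, k):
    """Partition items into k folds; each fold serves once as the
    test set while the remaining folds form the training set."""
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, test

samples = list(range(20))
cv_splits = list(k_fold_splits(samples, k=5))
```

Because every image is used for testing exactly once, the averaged metrics make fuller use of a small dataset than a single hold-out split.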
The authors should support the Conclusions based on the performance on the test data; in this regard, the experimental study needs to be clarified in order to sustain the validity of the work.
The present work proposes a COVID-19 detection and CT scan segmentation method using a deep learning model. The field is important, and the use of deep learning can enable quick COVID-19 diagnosis.
The following comments can enhance the work:
1- Many methods use transfer learning for detection. The authors should provide a simple comparison with these existing methods.
2- Add a table that summarizes the hyperparameters and some details of the proposed segmentation model.
3- It would be better to move the Dataset section after the proposed method, or to make it a subsection of the Experiments section.
4- There are a few existing methods for COVID-19 lung infection segmentation; the authors should add these methods to the comparisons, including:
 Elharrouss, O., Subramanian, N. and Al-Maadeed, S., 2020. An encoder-decoder-based method for COVID-19 lung infection segmentation. arXiv preprint arXiv:2007.00861.
5- The quality of some figures, such as Figure 8, should be enhanced.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.