Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on October 12th, 2020 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 23rd, 2020.
  • The first revision was submitted on January 24th, 2021 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on February 8th, 2021.

Version 0.2 (accepted)

· Feb 8, 2021 · Academic Editor

Accept

The resubmission has been examined by one of the two reviewers who had looked at the previous version of the manuscript.

[# PeerJ Staff Note - this decision was reviewed and approved by Jun Chen, a PeerJ Section Editor covering this Section #]

Reviewer 2 ·

Basic reporting

NA

Experimental design

NA

Validity of the findings

NA

Additional comments

The authors have addressed the concerns.

Version 0.1 (original submission)

· Nov 23, 2020 · Academic Editor

Major Revisions

The manuscript has been reviewed by two referees and both have raised significant concerns. The revision should completely address all of these concerns.

[# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful #]

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.  It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter.  Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

The contribution of this paper, in terms of novelty and performance improvement, is insufficient.

Experimental design

No comment.

Validity of the findings

The introduction and discussion of the various algorithm principles used in this study, and of the corresponding results, were insufficient.

Additional comments

This study constructed two Cox regression models, based on crucial ceRNAs and immune cells, to predict prognosis in LUAD. This is a meaningful study, but there are several issues that the authors should address.
1. Two Cox regression models, based on crucial ceRNAs and on immune cells respectively, were constructed in this study; however, the correlation between these two regression models is insufficiently described (a minimal sketch of one way to examine this is given after this list). In short, the contribution of this paper, in terms of novelty and performance improvement, is not sufficient.
2. In this study, ESTIMATE and CIBERSORT were both used to evaluate or estimate the proportions of stromal and immune cells; the difference between these two methods, and why both were chosen, should be clarified.
3. In addition, in the ‘Multiple databases validation’ part, UALCAN, GEPIA and TIMER (line 181) are also applied to different databases; and in line 194, ssGSEA is likewise used as an algorithm for calculating the abundance of immune cells. Why were so many different methods for estimating immune cell infiltration applied to different datasets? If the authors want to compare the effectiveness of different algorithms, the results of different algorithms on the same dataset should be given, or the same algorithm should be compared across different datasets (see the same-dataset comparison sketch after this list).
A wide range of datasets was used in this manuscript, and at the same time a variety of methods was used to calculate the abundance of immune cells, such as CIBERSORT, ssGSEA, UALCAN, GEPIA and TIMER; how and why these different abundance-calculating methods were selected is not stated. A sufficient discussion of the advantages and disadvantages of each method, and of why each was chosen, should be provided.
4. For Figure 3 and Figure 7, the content of the figures should be described in detail, such as the meaning of the shapes, colors, and sizes of the nodes.
5. This article uses many algorithms, such as CIBERSORT, to calculate the abundance of immune cells, but an introduction to and discussion of the algorithm principles, along with the necessary details of the parameter settings for all applied methods, are missing.
6. A variety of datasets (mRNA, lncRNA, miRNA) were used in the manuscript; however, for each dataset, details such as the number of samples and RNAs, patient information, and so on are missing.
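
Regarding comment 1, a minimal sketch of how the relationship between the two models could be examined is shown below. This is not the authors' code: the input file and column names (time, event, cerna_risk, immune_risk) and the use of the lifelines package are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' pipeline): given per-patient risk scores
# from the ceRNA model and from the immune-cell model, check how the two
# scores relate and whether each remains prognostic given the other.
import pandas as pd
from scipy.stats import spearmanr
from lifelines import CoxPHFitter

# Hypothetical input: one row per patient with survival time, event flag,
# and the risk score each of the two models assigns.
df = pd.read_csv("risk_scores.csv")  # columns: time, event, cerna_risk, immune_risk

# 1. How strongly do the two risk scores agree?
rho, p = spearmanr(df["cerna_risk"], df["immune_risk"])
print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")

# 2. Joint Cox model: does each score stay prognostic given the other?
cph = CoxPHFitter()
cph.fit(df[["time", "event", "cerna_risk", "immune_risk"]],
        duration_col="time", event_col="event")
cph.print_summary()  # hazard ratio and p-value for each score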
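
Regarding comment 3, the suggested same-dataset comparison could look like the following sketch. The expression matrix, the marker list, and both scoring functions are illustrative assumptions; score_rank is only a simplified rank-based score in the spirit of ssGSEA, not the actual published algorithm.

```python
# Sketch of the reviewer's suggestion: run two abundance estimators on the
# SAME dataset so their outputs are directly comparable.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical genes-by-samples expression matrix and marker list.
expr = pd.read_csv("expression_matrix.csv", index_col=0)
cd8_markers = ["CD8A", "CD8B", "GZMA", "GZMK"]  # illustrative CD8 T-cell markers

def score_rank(expr, genes):
    """Simplified rank-based score: mean within-sample rank of the markers
    (in the spirit of ssGSEA, NOT the actual published algorithm)."""
    ranks = expr.rank(axis=0)  # rank each gene within every sample
    return ranks.loc[ranks.index.intersection(genes)].mean()

def score_mean(expr, genes):
    """Naive score: mean expression of the markers in each sample."""
    return expr.loc[expr.index.intersection(genes)].mean()

# Same gene set, same dataset, two estimators.
rho, p = spearmanr(score_rank(expr, cd8_markers), score_mean(expr, cd8_markers))
print(f"Between-method agreement on one dataset: rho = {rho:.3f} (p = {p:.2g})")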

Reviewer 2 ·

Basic reporting

Grammatical errors in lines 28 and 169; spell-check "DESeq" in line 69.
Figure legends need to be more elaborate

Experimental design

NA

Validity of the findings

The authors have efficiently used the published dataset and tried to elucidate a number of biological questions related to LUAD.

1. Regarding the differential expression analysis between normal and tumor samples:
A. Did the authors compare their results to the results from these previous studies: https://doi.org/10.1186/s12935-020-01295-8, https://doi.org/10.3892/ol.2018.9336, https://doi.org/10.3892/ijo.2016.3716, https://doi.org/10.1111/jcmm.15778?
B. Looking at the heatmaps of the DEGs in Fig. 1C, the two groups are not segregating very well based on the clustering. If the genes were significantly different, I would expect the clustering to separate them out.
C. A fold-change cutoff of >|1| seems a bit lenient, especially considering the effect sizes in tumor vs. normal comparisons (see the sketch after this list).
D. For the dataset used in the first set of analyses, were any other covariates taken into account, especially when doing the cell-type estimation?

2. It is not clear what motivated the authors to look into smoking status in the nomogram predictions.
3. Is it possible for the authors to show some kind of clustering of the transcriptomic profiles of the high- vs. low-risk groups, overlaid with any other important covariates?

4. More information is needed in the figure legends to make it more interpretable.

5. Also, for the validation dataset, which seems to be a set of samples with different mutations, how was the data used or conditioned on these mutations?

6. Now that there is a tremendous amount of single-cell transcriptomic data available, can the authors validate or look up some of their findings in public datasets, if available?

7. In Fig. 5E, except for a subset of samples, the distributions for the >20 and <20 smoking-status groups look very similar. Did the authors look into which subset is driving most of the differences?
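
Regarding comments B and C above, a small sketch of how the fold-change cutoff could be stress-tested is given here. The results table and its column names follow DESeq2 output conventions but are hypothetical; this is an illustration, not the authors' pipeline.

```python
# Sketch: compare a lenient DEG cutoff (|log2FC| > 1, i.e. 2-fold) with
# stricter ones, using a DESeq2-style results table.
import pandas as pd

# Hypothetical results table: one row per gene, with columns
# log2FoldChange and padj (names follow DESeq2 conventions).
res = pd.read_csv("deseq2_results.csv", index_col=0)

def n_deg(res, lfc_cutoff, alpha=0.05):
    """Count genes passing both the fold-change and adjusted-p cutoffs."""
    hits = res[(res["log2FoldChange"].abs() > lfc_cutoff) & (res["padj"] < alpha)]
    return len(hits)

for lfc in (1.0, 1.5, 2.0):
    print(f"|log2FC| > {lfc}: {n_deg(res, lfc)} DEGs")

# A sharp drop at stricter cutoffs would suggest the |log2FC| > 1 set is
# dominated by small-effect genes, which could also explain the weak
# sample separation seen in the Fig. 1C heatmap clustering (comment B).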

Additional comments

The authors in this manuscript used a published dataset to identify differentially expressed mRNAs, miRNAs, and lncRNAs between normal tissue and tumor samples, which were then used to construct a regulatory network of interactions. They further predicted patients' prognoses based on these findings. They also used CIBERSORT to estimate differences in immune cell types between normal and tumor samples, eventually integrating these into the multivariate model. They found the hub ceRNAs to be associated with stemness and the tumor microenvironment; the key members were validated in another independent published dataset.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.