All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Thank you for considering the changes proposed by our reviewers.
[# PeerJ Staff Note - this decision was reviewed and approved by un Chen, a PeerJ Section Editor covering this Section #]
The basic findings stated in the manuscript have not been changed after the revision. The authors have well addressed my questions and concerns.
The experimental design is sound.
The performance evaluation can be trusted.
The revised version of the manuscript has reached a very high standard. I recommend accepting the paper for publication in PeerJ.
The current presentation appears to be sufficiently appropriate, but intensive proofreading is required.
The experiments are well designed.
The validity of the experiments is appropriate.
It would help if you improved the English style.
Please note that both reviewers are very positive and that their suggestions were made to improve your manuscript.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
Given two diseases and, for each, a ranked list of genes associated with that disease, the author proposes to use the geometric mean of the reciprocals of each gene's ranks as the basic unit, and to measure the similarity between the two diseases by summing these geometric means over all genes. The measurement is termed SimSIP. The new similarity measurement was tested on simulated datasets, and also on real gene expression datasets from TCGA. The method was compared with WeiSumE*, OrderedList, FES0.01 and EucD to demonstrate its superior performance in the identification of similar diseases. The author also reported genes and pathways associated with colorectal cancer.
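As a reading aid, the measure described above can be sketched as follows. This is a minimal reconstruction from the review's description only, not the authors' code: the function name, the 1-based rank convention, and the dict-based data layout are all assumptions made here for illustration.

```python
import math

def simsip(ranks_a, ranks_b):
    """Sketch of a SimSIP-style similarity between two diseases.

    ranks_a, ranks_b: dicts mapping gene -> rank (1-based) of the gene
    in each disease's association ranking. Only genes present in both
    rankings contribute (an assumption; the paper may handle missing
    genes differently).
    """
    shared = set(ranks_a) & set(ranks_b)
    # Each shared gene contributes the geometric mean of the
    # reciprocals of its two ranks; the similarity is their sum.
    return sum(math.sqrt((1.0 / ranks_a[g]) * (1.0 / ranks_b[g]))
               for g in shared)
```

Under this reading, two identical rankings of n genes reduce the score to the harmonic sum 1 + 1/2 + ... + 1/n, consistent with the squared-square-root expression queried in Suggestion 1 below.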
Comments: The manuscript is well written and organized; the novelty of the ideas is not that strong; the performance evaluation is well designed and discussed.
Suggestions:
1. I don’t understand why the equation $\sum_{i=1}^{n}\left(\sqrt{1/a_i}\right)^2$ appears in the second paragraph of page 3. Why is this sentence needed? Please clarify.
2. Ranking methods for genes are less discussed in the method section. In fact, the SimSIP results are very sensitive to the ranking lists of the genes. Which ranking methods should be recommended? Please clarify.
See above.
Generally, the paper makes a good impression, mainly owing to the experimental study section. However, the theoretical part raises many questions. First of all, the motivation for introducing the new similarity measure is very blurred. Computationally, a vector composed of the inverse ranks provides more reliable results, but it is entirely unclear why. The explanation given in rows 115-118 is very vague and does not offer any useful information about why such a measure performs better in comparison with the standard inner product.
The paper is written in a deplorable manner. For example, what does the following mean: "All empirical sizes shown in Supplementary Table S1 are around the significance level 0.05 and are well controlled." I feel that the paper provides exciting results, but it is very hard to understand them from the text.
Provided in a good manner
I think that a more solid comparison has to be done.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.