All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Both reviewers are happy with the revision, and I am therefore pleased to accept this manuscript for publication in PeerJ Computer Science.
I have read the authors' response. The newly added content addresses my concerns, and I have no further questions.
no comment
no comment
Even in its initial form this paper was competently written and organized. By addressing the issues raised by the reviewers, the authors have improved the article further.
The rationale behind this work, the methodology and the obtained results are now thoroughly explained and communicated to the reader.
As I have noted before, the authors, by making their data available, have made their experiments reproducible and verifiable. The conclusion follows naturally from the obtained experimental results.
The authors have addressed, in a convincing and thorough manner, all the issues I raised during the first round of review of their paper.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**PeerJ Staff Note:** PeerJ's policy is that any additional references suggested during peer review should only be included if the authors find them relevant and useful.
**Language Note:** When preparing your next revision, please ensure that your manuscript is reviewed either by a colleague who is proficient in English and familiar with the subject matter, or by a professional editing service. PeerJ offers language editing services; if you are interested, you may contact us at [email protected] for pricing details. Kindly include your manuscript number and title in your inquiry. – PeerJ Staff
The manuscript is generally well-written, with a clear structure and commendable attention to reproducibility through shared data and models. However, the reference list could be expanded to include more recent studies in deep learning approaches for solving TSP. Relevant topics to explore include neural combinatorial optimization techniques for vehicle routing and TSP, scalable deep learning models that adapt across problem sizes, and reinforcement learning-based non-autoregressive architectures for efficient pathfinding. Incorporating these areas would help position the paper within the current landscape of AI-driven combinatorial optimization.
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.
The research question is clearly defined and relevant to current advancements in optimization and machine learning. It addresses an important gap in predicting the computational workload required to solve TSP instances. However, I have the following concerns:
1. Were there any challenges encountered in generating synthetic datasets for different topologies, and how might these affect the generalization of the models?
2. Could the authors clarify the rationale for selecting the specific machine learning models used in the study?
The results are robust, but the impact and novelty could be discussed more explicitly. The conclusions are generally well-supported by the findings.
Allow me to begin by saying that, overall, this article is competently written and organized.
• The command of the English language is quite good, with just one or two typos that don’t really hinder the reader’s understanding.
• The organization of this article is suitable for its content, and the interested reader can easily understand the motivation, the methodology, and the results.
• Formulas are also well formatted. Note, though, that I feel there is some discrepancy in the notation of the very first formula: the second term uses {} in the subscript, whereas such brackets are missing inside the summation. Also, unless there is some rule that I am not aware of, I prefer formulas to be numbered.
• All Tables are informative with a clear layout.
• Figures 1 and 3 are excellent and convey the intended meaning. The group of 4 smaller figures that make up Figure 2 should probably be larger for better readability.
The rationale behind this work, the methodology, and the obtained results are all clear to the reader. What I am about to state is clearly subjective, but I’ll say it anyway. TSP is an iconic problem for computer scientists, and it’s only proper to invest so much in its theoretical and experimental treatment. I appreciate the authors’ rationale and believe it’s worth pursuing. However, I would like to know more about the integration of ML with combinatorial optimization. For instance, I think the relevant paragraphs (lines 68-85) should be expanded, and certainly more references should be added. Also, the authors should make clear if their approach is the first or follows some prior work.
The authors have made their data available, and this makes the experiments reproducible. The conclusion follows naturally from the obtained experimental results.
First, let me mention some insignificant typos that the authors will have no trouble correcting.
• In lines 240-242, the phrase “using scikit-learn’s GridSearchCV using a grid search and 10-fold cross-validation to optimize each model statistically” seems awkward, and I believe it should be revised.
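For context, the setup the quoted phrase presumably describes can be sketched as follows. This is a minimal illustration of hyperparameter tuning with scikit-learn's GridSearchCV under 10-fold cross-validation; the model choice, parameter grid, and synthetic data here are my own assumptions for illustration, not the authors' actual configuration:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression

# Illustrative synthetic regression data (stand-in for the authors' TSP features).
X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# Hypothetical parameter grid; the authors' grid is not specified in the quote.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, None]}

# GridSearchCV performs an exhaustive grid search with 10-fold cross-validation.
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_)
```

A revised sentence could then read, for example: "hyperparameters were tuned via an exhaustive grid search with 10-fold cross-validation using scikit-learn's GridSearchCV."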
• In line 249, the phrase “The table 1” should probably have Table, like the rest of the text.
Now, let me express my biggest question regarding this work. The authors explain that they created a specialized dataset of Traveling Salesman Problem (TSP) instances. My question is, why is that necessary? TSP, being such a famous problem, has many accepted libraries that are used for benchmarking (say, TSPLIB). Do they feel that the instances there are outdated? Is there another reason? In any event, the choice of experimental datasets is quite important. Thus, I feel the authors should clearly explain their rationale behind their decision.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.