All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Comments from the reviewers indicate that the paper is now suitable for publication.
[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]
Even though the majority of the comments have been addressed, the paper still requires proofreading to improve flow and resolve language issues. I hope this can be achieved with minimal effort.
An acceptable experimental design has been presented.
The suitability and accuracy of DTs for automating the classification phases of software maintenance are assessed. The justification for the finding seems convincing.
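(For readers of this review history who are unfamiliar with the technique, here is a minimal sketch of the kind of DT classifier being assessed. It assumes scikit-learn and entirely hypothetical maintenance-request features and labels; it is illustrative only, not the authors' pipeline.)

```python
# Illustrative sketch only: a decision tree deciding accept/reject for
# maintenance requests (MRs). Features and data are hypothetical, and
# this is not the authors' actual pipeline.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical MR features: [severity (1-3), priority (1-3), modules affected]
X = [[3, 1, 5], [1, 3, 1], [2, 2, 2], [3, 2, 4],
     [1, 1, 1], [2, 3, 3], [3, 3, 5], [1, 2, 2]]
y = [1, 0, 0, 1, 0, 1, 1, 0]  # 1 = accept the MR, 0 = reject it

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

print(accuracy_score(y, tree.predict(X)))  # accuracy on the training data
print(tree.predict([[2, 1, 4]]))           # decision for a new, unseen MR
```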
The authors have incorporated my comments. The paper could be accepted for publication.
• The figures and tables are suitable for the publication level, with no redundant or non-contributory elements.
• The literature review is sufficient, adequately justifying the issue related to the automation of the classification phase using decision trees. While there might be better options for this type of task, the proposed solution could be a low-cost alternative.
Some of the previous comments were addressed mainly by removing elements, without delving into their implications, particularly in the method and data collection sections. It is recommended to supplement these sections.
No comment
• Although some aspects of the method description are concise, the supporting data for the study's replicability are appropriate.
• Almost all my concerns have been sufficiently addressed. The paper is now suitable for publication.
Reviewers have highlighted some aspects that need to be considered before the paper can be published. Please be especially careful to actually address the requirement about identifying the research gap that the paper covers.
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.
**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff
The paper has shown the application of DTs to sort, rank, and accept/reject maintenance requests. In particular, the implementation of the DT is clearly shown. However, the following points require due attention.
1. The paper is full of language problems. Grammar, flow/coherence, tense, redundancy, and acronym (undefined, redundantly defined, etc.) problems have to be resolved. A language editor is mandatory.
2. The research gap is not convincing: MR sorting, ranking, and accepting/rejecting have already been done by many researchers using many machine learning techniques. You have confirmed this yourself with the statement "Most research focuses on sorting, severity, priority, or accept/reject MR". If this is the case, where is your contribution? Merely combining the three tasks into one study does not suffice for a journal paper.
3. Some concepts are not explained sufficiently. For instance, at line 118, "...and the kappa was 63%...": novice readers deserve to know what kappa is and what it measures (a standard definition is sketched after this list of points).
4. The literature review explains what has been done in relation to the research in question, but the drawbacks of the existing literature are not explained. Explaining them could have helped to show the research gap.
5. Some parts of the paper contain unnecessary details. For instance, lines 249 to 255 explain the search for the data. It suffices to state what data you used and where you got it.
6. Some citations are followed by "paper [x]". I don't think this is good practice.
7. "Additionally, it was reviewed by my supervisor". The paper has to be also reviewed by your supervisor. Such statements wouldn't have been appeared.
8. Present the demographics of the subjects in tables.
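For reference on point 3: Cohen's kappa measures the agreement between a classifier's predictions and the ground-truth labels, corrected for the agreement expected by chance. A standard formulation (stated here for convenience; the notation is generic, not taken from the manuscript):

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of agreement and $p_e$ is the proportion of agreement expected by chance. A kappa of 63% (0.63) is therefore well above chance-level agreement and is commonly read as substantial.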
It is good.
no comment
1. The write-up of the paper needs improvement.
2. The research gap or contribution is not obvious. The authors need to elucidate it.
- It seems that the most recent version of the ISO/IEC/IEEE standard, which defines six types of maintenance (Corrective, Preventive, Adaptive, Additive, Perfective, and Emergency), was not reviewed. Including these could significantly improve the categories used, which are somewhat stilted (see Table 1). It is suggested to include a justification for excluding them.
ISO/IEC/IEEE 14764:2022, Software Engineering — Software Life Cycle Processes — Maintenance, 3rd ed., 2022, 39 pp.
- In general, the figures and tables are adequate, except for the decision trees in Figures 19, 20, and 21, which are not legible despite being very relevant to the article.
- There are irrelevant figures, such as Figures 16, 17, 18, and 22 (it suffices to mention the attached files).
- While the answer to the research question is somewhat obvious, given the type of binary answer, the knowledge gap is adequately justified.
- Clarify whether any of the datasets used contained data from mobile applications. The nature of the problems in traditional software is different from that in mobile software. If the datasets did not contain mobile data, detail how this is mitigated.
- The method is described in great detail, which is appreciated and increases the degree of replicability of the whole process. However, reference [29], on which the method is based, corresponds to an inappropriate article that describes methodological aspects only in a very generic way, which weakens the rest. It is suggested to review and select one of the following:
◦ [1] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén, Experimentation in Software Engineering, 1st ed. Springer Science & Business Media, 2012. [Online]. Available: https://doi.org/10.1007/978-3-642-29044-2
◦ [2] M. Felderer and G. H. Travassos, Eds., Contemporary Empirical Methods in Software Engineering. Cham: Springer International Publishing, 2020. doi: 10.1007/978-3-030-32489-6.
◦ [3] T. Menzies, L. Williams, and T. Zimmermann, Perspectives on Data Science for Software Engineering. Morgan Kaufmann, 2016.
no comment
- It raises a problem that is real in the operation of systems, particularly legacy systems, and requires automation.
- Review the accuracy of the paragraph from lines 335 to 228 and the references therein. There appear to be inconsistencies with what the references indicate; or is there a problem with the wording?
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.