Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on October 15th, 2021 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on January 17th, 2022.
  • The first revision was submitted on March 17th, 2022 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on April 11th, 2022.

Version 0.2 (accepted)

· Apr 11, 2022 · Academic Editor

Accept

Congratulations, your manuscript has been recommended for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Yilun Shang, a PeerJ Computer Science Section Editor covering this Section #]

·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

This was the second revision of the article. All suggestions and corrections proposed in my review have been addressed. I consider the review of the article complete.

Reviewer 2 ·

Basic reporting

The revised version of the paper addresses the issues that I raised in my previous review of the original submission.
In particular, the results are evaluated and explained in much more detail in the revised version.
Together with the rewritten Section 2.2., this makes the paper much easier to read and the reader gets a better understanding of the scheduling problem, the heuristics and the approach.

Experimental design

The missing information about the heuristic used in the evaluation and strategies for estimating the execution time for the first run of a task have been added in the revised version.

Validity of the findings

With the added paragraphs, the impact and novelty of the work are described much better.
The revised version points out the main improvements and drawbacks of the approach, which makes the contribution of the paper much clearer.

Version 0.1 (original submission)

· Jan 17, 2022 · Academic Editor

Major Revisions

Please carefully address all the concerns raised by the reviewers and re-submit the revised version in due time. Thank you.

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

·

Basic reporting

Please allow me to offer some general comments on this research work.

This research work builds on already published works (namely Bramas (2016); Bramas (2019); Bramas, Flint, and Paillat (2021); and Bramas, Helluy, Mendoza, and Weber (2020)), although the authors do not state this explicitly. It looks for gaps for improvement, as the authors note: "the previous work used static priorities".

This research work does not try to obtain priorities for each task but for each type of task.

This work uses a set of metrics to measure the performance of the proposed system.

Now please allow me to offer some suggestions:

When working on parallel architectures, one uses programming languages and methodologies specific to these types of architectures. Could you explicitly indicate which programming languages or methodologies you used on the hardware architectures? In the "Software" section I only found the applications that were parallelized.
The programming language is important because it helps achieve the desired results, and future work may propose other programming languages.


Although the makespan metric is used as the most important metric to measure the performance of the proposed system, I did not find a formal formulation of it in Section 3.1 (Relevant metrics). Makespan is only given a basic definition on page 6. A formal definition would help us understand how the makespan results in Section 4.2 (Evaluations) are obtained.
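For reference, the kind of formal statement I have in mind is the standard one (my own sketch, using hypothetical notation rather than the paper's):

```latex
% C_i denotes the completion time of task t_i in a schedule S of n tasks;
% the makespan is the latest completion time across all tasks:
\[
  C_{\max}(S) = \max_{1 \le i \le n} C_i
\]
```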

Experimental design

I consider that the evaluations are developed correctly; they are extensive and interesting. As Figures 2 and 3 show, LAAutoheteroprio achieves superior speedup in both experiments.

Validity of the findings

I want to highlight the findings in the experiments, as you explain in Section 4.2.3:

StarPU's overhead is relatively high, as it has been designed to handle tremendous amounts of data (which we already know).
The use of a scheduler is only relevant when the expected gained time is greater than the scheduler's overhead.
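This condition can be sketched formally (my own hedged formalization with hypothetical symbols, not taken from the paper):

```latex
% A scheduler is worthwhile only when the expected makespan reduction
% it delivers exceeds its own overhead T_overhead:
\[
  \mathbb{E}\!\left[ C_{\max}^{\text{baseline}} - C_{\max}^{\text{sched}} \right] > T_{\text{overhead}}
\]
```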

My comments: it is clearly observable that the results are biased by the architecture and the structure of the application's tasks.
Although extensive experimentation was carried out (I am referring to Figures 5-12), there is a high dependency on the hardware configuration (as you state in Section 4.2.3). I therefore recommend describing the contributions of this study more thoroughly in the "Introduction" section, in order to better convey the contributions of this research work.

Additional comments

I tried to access this reference: Bramas, B., Flint, C., and Paillat, L. (2021). auto-heteroprio analysis. https://gitlab.inria.fr/cflint/auto_heteroprio_analysis.
However, I could not access it. It is an important reference because it would let us view the code showing how the graph is created.


Regarding the contribution "We describe different heuristics to obtain priorities;": at the end of the paper (in the Conclusions section), you wrote: "Our results also demonstrate that our automatic strategy is typically competitive with manual priorities".
I therefore do not find differences between automatic and manual priorities (according to your comments and experimentation); do you seek not to obtain priorities but to match the manual ones?
Can you explain this further?

Do you have any planned future work, i.e., any work following this investigation?
A research effort generally projects a set of works beyond the one presented.
Could you comment, in a future-work section (not mandatory), on some possible investigations to be carried out?

I find a discrepancy in the spelling of the scheduler name: LaHeteroprio versus LAHeteroprio. I checked whether you were referring to another scheduler, but it is the same one; I strongly suggest making this name consistent.

Reviewer 2 ·

Basic reporting

The paper is generally well written, deals with an interesting topic and provides a useful contribution to the state of the art.
However, the following issues need to be addressed:
1. As the main part of the paper, the heuristics and the corresponding formulas should be described in more detail. Also, in Sections 3.1 and 3.2, the metrics and heuristics contain the term v_i, which is not introduced.
2. A more detailed discussion of the results would be useful. Especially in Sections 4.2.3 and 4.2.4, more explanations may help to clarify the improvements of your approach. A legend should be added to Figures 5-12.

Experimental design

There are some important details missing in the manuscript:
1. Which heuristic is used to determine the automatic priorities in the evaluation in Section 4.2?
2. In Section 3.3, it is stated that the task execution times are taken from previous task executions of the same type. Is there a strategy to select the values of the first task execution or are they chosen randomly?
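To make question 2 concrete, here is a minimal sketch (entirely my own illustration, not the paper's implementation) of a per-type execution-time estimator with an explicit strategy for the first execution of a task type:

```python
# Hypothetical sketch of the kind of strategy Section 3.3 could describe:
# estimate a task's execution time from previous executions of the same
# type, with an explicit fallback value for the very first run of a type.
from collections import defaultdict

class ExecTimeEstimator:
    def __init__(self, default_estimate=5.0):
        # Fallback used before any execution of a type has been observed.
        self.default_estimate = default_estimate
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def record(self, task_type, elapsed):
        # Store an observed execution time for this task type.
        self.totals[task_type] += elapsed
        self.counts[task_type] += 1

    def estimate(self, task_type):
        # Mean of previous executions of the same type; fallback on first run.
        n = self.counts[task_type]
        if n == 0:
            return self.default_estimate
        return self.totals[task_type] / n

est = ExecTimeEstimator(default_estimate=5.0)
print(est.estimate("gemm"))  # first run of this type: fallback value 5.0
est.record("gemm", 1.0)
est.record("gemm", 3.0)
print(est.estimate("gemm"))  # mean of observed times: 2.0
```

The type names (`"gemm"`) and the fallback value are purely illustrative; the question to the authors is precisely which strategy (fixed default, random, or something else) the paper actually uses for that first run.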

Validity of the findings

1. The authors should point out the impact of their work.
What is the advantage of the automatic strategy compared to manual priorities?
Does it reduce the effort for the user, or does it lead to better schedules (which is the case for some applications but not for others)?
2. There are some contradictions in the manuscript:
- In line 353, it is stated that the choice of heuristic is expected to have a significant impact, whereas the Conclusion contains the opposite statement.
- In lines 426 and 427, it is stated that "using automatic priorities does not hurt performance", although they are slower in some cases and also in some of the following results, e.g., the solve step of PaStiX.

Additional comments

Minor issue:
- Table 4 contains some misplaced characters

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.