Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on March 7th, 2016 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on April 4th, 2016.
  • The first revision was submitted on May 11th, 2016 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on May 12th, 2016.

Version 0.2 (accepted)

· May 12, 2016 · Academic Editor

Accept

Thank you for addressing the reviewers' comments and for improving the paper. I think it is ready to be published.

Version 0.1 (original submission)

· Apr 4, 2016 · Academic Editor

Minor Revisions

All reviewers find the paper interesting and suitable for publication. All of them, however, agree that there are a few things to fix before final acceptance. Most of them concern the experimental design. Please analyse the reviews thoroughly and make sure that you address all reviewers' comments.

Reviewer 1 ·

Basic reporting

The paper is well written and provides insights into novel achievements in container management.
The background is up to date, and the motivation presented in the introduction reflects a good knowledge of the current problems in virtualization techniques.
The structure conforms to the usual template for computer science papers, including a critique of the state of the art, the presentation of the proposal, references to the proof-of-concept implementation, and the test results. The figures are relevant to the proposed methodology.
The text is self-contained and can serve as an example of good practice in writing a paper in the field of distributed systems.

Experimental design

The research questions can be identified in the first section of the paper and relate to real problems in current Cloud computing environments.
The methods are presented at a level of detail that allows them to be reproduced under similar conditions.

Validity of the findings

The conclusions of the work are relevant to the addressed community.
Valid ideas for future work are also correctly identified.

Additional comments

The paper is well written and presents a valuable contribution to the state of the art in a hot topic.
The related work could be improved with the latest achievements concerning large-scale experiments using container technologies.
The scale of the experiments is quite small - the effect of increasing the number of instances should be studied too.

Reviewer 2 ·

Basic reporting

This paper presents the Smart Brix framework, which offers the ability to check sets of containers for vulnerabilities and compliance with other specific requirements. It also provides mechanisms for mitigating the identified issues by evolving the containers.

The paper is well written and easy to follow, and the overall objective and challenge are well defined.

Experimental design

I believe the description of the technical objectives and challenges could be improved. Moreover, Smart Brix is presented as a framework for the continuous evolution of container-based deployments, but I felt that the runtime monitoring aspect was not properly presented in the paper, or at least could be detailed further.

The overall architecture of the framework is well presented, but I would suggest that the authors also describe how (technically and methodologically) the framework should or could be used in a production setting (perhaps using the case study). Also, few technical details are provided on how the framework can be used and how easy it is to implement and extend such a “continuous evolution” system.

Whilst reading the formula describing how the overall confidence value is calculated, I wondered whether the authors have considered more complex options. If so, I would suggest discussing them.
Also, regarding the confidence adaptation model escalation, it would be interesting to evaluate and discuss the current limits of the automation, how often it escalates to the human interaction level, and to provide an example of what is or can be provided to the human to ease their job.

To improve the readability of Section 3.2, I would suggest illustrating the content of the paragraph starting with “If no issues …” with a figure.

The proposed approach seems to be quite tightly coupled to the package managers; I would suggest that the authors clarify this.

Validity of the findings

Regarding the experiments, it is stated that each experiment was repeated 3 times; the authors could perhaps justify this choice and also provide the standard deviation. These experiments are interesting, and it is good that the authors have tested their approach on large sets of images. It could also be interesting to perform tests on very specific images and to identify the impact of the “software stack” they embed, both in terms of size and diversity.

I would also suggest that the authors provide a link to their data and source code (if open source).

Reviewer 3 ·

Basic reporting

The paper is generally well-written, clear, and well-argued.
There are only a few places where I think this is not the case, and I have the following suggestions:
- Evaluation criteria are discussed for the first time in Section 4. I think that is too late. I strongly suggest the authors add an upfront short overview of the research goals the Smart Brix framework is supposed to achieve, how they intend to measure those achievements, and the basics of their evaluation method and criteria. This provides a better scope for the paper, makes the reading more focused, and facilitates assessing the value of the approach.
- Section 3.2 explains how the Smart Brix Manager works in collaboration with the rest of the framework. The text is dense with description of all the interactions, and it is easy to get lost in them. I would suggest a Figure with a corresponding diagram (e.g. a UML Sequence Diagram) to accompany the text as a visual aid.
- In Section 5 "Related Work", lines 452-454, there is a rather long list of references on works that "propose a novel method for analyzing cloud-based services for certain types of vulnerabilities". Citing references in bulk without differentiating the works in any way from your own work, or among one another, is not considered good form. Also, some of these "cloud-based" papers are not from the Cloud world (or epoch!), including one dated 2000. Please revise this part.
- The main contributions and take-aways claimed by the paper remain somewhat implicit and are not summarized concisely anywhere that I could see.
- It looks like an "Acknowledgements" section may be missing, as per the PeerJ guidelines.

Experimental design

As said above, there is a certain lack of upfront clarity on what the research goals are, and how they are intended to be measured. That means that the paper does not lead with a clear definition of its research questions.
Although that does not hamper the understanding of the approach and method, in Sections 1 to 3, and its potential value, the reader is left with an unanswered question about how the benefits of Smart Brix can be assessed until Section 4 (page 9).
The method and procedure for evaluation in Section 4 make sense and showcase some of the benefits of Smart Brix, in particular its efficiency and performance. However, as far as I can see, they do not shed light on one of the most interesting and potentially valuable characteristics touted in the paper, i.e., the pipeline self-assembly capabilities, which should enable complex workflows for Analysis and Compensation to be composed automatically out of a potentially large set of candidate micro-services. Since the overhead of this kind of automated composition typically grows super-linearly with the number of candidate components, discussing the number of components in the Analysis and Compensation sets vs. the related complexity and costs (e.g. time, or the chance of not synthesizing a suitable or correct pipeline) of the self-assembly approach would be quite interesting, and would speak to a different kind of scalability of the approach.
A more detailed analysis of success vs. failure within the described experiments (e.g. false positives and false negatives for the Analysis pipelines, or correctly completed vs. failed compensation attempts) also seems like a missing, but important, aspect of the evaluation.

Validity of the findings

This paper basically discusses the feasibility and efficiency of the Smart Brix approach towards the managed evolution of Cloud container-based deployments. The paper does quite a good job and is sound in that respect. The experiments are fit for purpose and seem well designed and simple.
The kind of evaluation described, though, is partial and a bit shallow, as I indicated above, and begs the question of a fuller assessment of the benefits of the Smart Brix approach. I provided above some suggestions on how to expand and strengthen the evaluation and better showcase the work.
I also think that a "Threats to validity" and/or "Limits of applicability" section would be appropriate as part of the post-evaluation discussion.

Additional comments

I believe the work has value and that the paper makes a good case for it. The evaluation can be much more convincing if strengthened. My guesstimate is that you do not need to do very substantial extra work for that strengthening. I am assuming this amounts to minor revisions.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.