To increase transparency, PeerJ operates a system of 'optional signed reviews and history'. This takes two forms: (1) peer reviewers are encouraged, but not required, to provide their names (if they do so, then their profile page records the articles they have reviewed), and (2) authors are given the option of reproducing their entire peer review history alongside their published article (in which case the complete peer review process is provided, including revisions, rebuttal letters and editor decision letters).
Thank you for addressing the reviewers' comments and for improving the paper. I think it is ready to be published.
All reviewers find the paper interesting and suitable for publication. All of them, however, agree that there are a few things to fix before final acceptance, most of which concern the experimental design. Please analyse the reviews thoroughly and make sure that you address all reviewers' comments.
The paper is well written and provides insight into novel achievements in container management.
The background is up to date, and the motivation presented in the introduction reflects good knowledge of the current problems in virtualization techniques.
The structure conforms to the standard template for computer science papers, including a critique of the state of the art, the presentation of the proposal, references to the proof-of-concept implementation, and the test results. The figures are relevant to the proposed methodology.
The text is self-contained and can serve as an example of good practice in writing a paper in the field of distributed systems.
The research questions can be identified in the first section of the paper and are related to real problems of the current Cloud computing environments.
The methods are presented at a level of detail that allows them to be reproduced under similar conditions.
The conclusions of the work are relevant for the addressed community.
Valid ideas for future work are also identified.
The paper is well written and presents a valuable contribution to the state-of-the-art in a hot topic.
The related work could be improved by covering the latest achievements concerning large-scale experiments using container technologies.
The scale of the experiments is quite small; the effect of increasing the number of instances should also be studied.
This paper presents the Smart Brix framework, which offers the ability to check sets of containers for vulnerabilities and other specific requirements. It also provides mechanisms for mitigating the identified issues by evolving the containers.
The paper is well written and easy to follow, and the overall objective and challenge are well defined.
I believe the description of the technical objectives and challenges could be improved. Moreover, Smart Brix is presented as a framework for the continuous evolution of container-based deployments, but I felt that the runtime monitoring aspect was not properly presented in the paper, or at least could be detailed further.
The overall architecture of the framework is well presented, but I would suggest that the authors also describe how (technically and methodologically) the framework should or could be used in a production setting (perhaps using the case study). Also, few technical details are provided on how the framework can be used and how easy it is to implement and extend such a “continuous evolution” system.
While reading the formula describing how the overall confidence value is calculated, I wondered whether the authors had considered more complex options. If so, I would suggest discussing them.
Also, regarding escalation in the confidence adaptation model, it would be interesting to evaluate and discuss the current limits of automation and how often escalation reaches the level of human interaction, and to provide an example of what is (or can be) provided to the human to ease their job.
To improve the readability of Section 3.2, I would suggest illustrating the content of the paragraph starting with “If no issues …” with a figure.
The proposed approach seems to be quite tightly coupled to the package managers; I would suggest the authors clarify this.
Regarding the experiments, it is said that each experiment was repeated 3 times; the authors could justify this choice and also provide the standard deviation. These experiments are interesting, and it is good that the authors have tested their approach on large sets of images. It could also be interesting to perform tests on very specific images and to identify the impact of the “software stack” they embed, both in terms of size and diversity.
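To make the suggestion concrete: reporting each measurement as mean ± sample standard deviation over the repeated runs is straightforward, e.g. along these lines (a minimal sketch; the timings are hypothetical and not taken from the paper):

```python
import statistics

# Hypothetical timings (seconds) from three repetitions of one experiment;
# the actual values are not taken from the paper.
runs = [12.4, 13.1, 12.7]

mean = statistics.mean(runs)
# Sample standard deviation (n - 1 denominator), appropriate for a small
# number of repetitions.
stdev = statistics.stdev(runs)

print(f"{mean:.2f} ± {stdev:.2f} s")
```

With only 3 repetitions the standard deviation is a rough indicator, which is another reason to justify (or increase) the number of runs.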
I would also suggest that the authors provide a link to their data and source code (if open source).
The paper is generally well-written, clear, and well-argued.
There are only a few places where I think this is not the case, and I have the following suggestions:
- Evaluation criteria are discussed for the first time in Section 4. I think that is too late. I strongly suggest the authors add, upfront, a short overview of the research goals the Smart Brix framework is supposed to achieve, how they intend to measure those achievements, and the basics of their evaluation method and criteria. This provides a better scope for the paper, makes the reading more focused, and facilitates assessing the value of the approach.
- Section 3.2 explains how the Smart Brix Manager works in collaboration with the rest of the framework. The text is dense with description of all the interactions, and it is easy to get lost in them. I would suggest a Figure with a corresponding diagram (e.g. a UML Sequence Diagram) to accompany the text as a visual aid.
- In Section 5 "Related Work", lines 452-454 there is a rather long list of references on works that "propose a novel method for analyzing cloud-based services for certain types of vulnerabilities". Citing in numbers without differentiating the works in any way from your own work, or among one another, is not considered good form. Also, some of these "cloud-based" papers are not from the Cloud world (or epoch!) including one dated 2000. Please revise this part.
- The main contributions and take-away claimed by the paper remain somewhat implicit and are not summarized anywhere that I could see in a concise way.
- It looks like an "Acknowledgements" section may be missing, as per the PeerJ guidelines.
As said above, there is a certain lack of upfront clarity on what the research goals are, and how they are intended to be measured. That means that the paper does not lead with a clear definition of its research questions.
Although that does not hamper the understanding of the approach and method in Sections 1 to 3, or of its potential value, the reader is left with an unanswered question about how the benefits of Smart Brix can be assessed until Section 4 (page 9).
The method and procedure for evaluation in Section 4 make sense and showcase some of the benefits of Smart Brix, in particular its efficiency and performance. However, as far as I can see, they do not shed light on one of the most interesting and potentially valuable characteristics touted in the paper, i.e., the pipeline self-assembly capabilities, which should enable the automatic composition of complex Analysis and Compensation workflows out of a potentially large set of candidate micro-services. Since the overhead of this kind of automated composition typically grows super-linearly with the number of candidate components, discussing the number of components in the Analysis and Compensation sets vs. the related complexity and costs (e.g. time, or the chance of not synthesizing a suitable or correct pipeline) of the self-assembly approach would be quite interesting, and would speak to a different kind of scalability of the approach.
A more detailed analysis of success vs. failure within the described experiments (e.g. false positives and false negatives for the Analysis pipelines, or correctly completed vs. failed compensation attempts) also seems like a missing but important aspect of the evaluation.
This paper essentially discusses the feasibility and efficiency of the Smart Brix approach towards the managed evolution of Cloud container-based deployments. The paper does quite a good job and is sound in that respect. The experiments are fit for purpose and seem well designed and simple.
The kind of evaluation described, though, is partial and a bit shallow, as I indicated above, and begs the question of a fuller assessment of the benefits of the Smart Brix approach. I provided some suggestions above on how to expand and strengthen the evaluation and better showcase the work.
I also think that a "Threats to validity" and/or "Limits of applicability" section would be appropriate as part of the post-evaluation discussion.
I believe the work has value and that the paper makes a good case for it. The evaluation can be much more convincing if strengthened. My guesstimate is that you do not need to do very substantial extra work for that strengthening. I am assuming this amounts to minor revisions.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.