To increase transparency, PeerJ operates a system of 'optional signed reviews and history'. This takes two forms: (1) peer reviewers are encouraged, but not required, to provide their names (if they do so, then their profile page records the articles they have reviewed), and (2) authors are given the option of reproducing their entire peer review history alongside their published article (in which case the complete peer review process is provided, including revisions, rebuttal letters and editor decision letters).
The authors have made substantial changes to the initial draft. In particular, a new timing experiment has been conducted for the neuroscience and astronomical datasets, where CAVE2 is compared to a personal desktop computer. There are also minor changes in Section 5.3, a comparative analysis with other systems, and many amendments and improvements to the explanations.
Not all the suggestions of Reviewer 2 have been followed. In particular, figures for standard techniques have not been included; however, the justification given by the authors is acceptable. Generally speaking, the explanations in the rebuttal letter are reasonable.
The technical quality of the draft could be further improved. The scientific contributions should be clearly stated and supported by data, so that the reader can perceive the novelty of this work.
The article is clear and well structured. The motivation is duly substantiated, and the related work is extensive and up to date.
Section 3 is somewhat generic in its design description. Section 4, and more specifically Section 4.3, devotes much space to the description of the control software implementation, but describes only a few examples.
This section could explain the appropriate set of tools and functionalities for analysing the two proposed problems: Magnetic Resonance Imaging (MRI) data from IMAGE-HD, and large-scale systematic morphological classification of the kinematic structures of galaxies. Regarding this second problem: it is cited in the article, but nothing further is explained about it.
The developed system, ENCUBE, has some excellent features such as:
It allows comparative visualization and analysis of large amounts of data, actions applied to data cubes in parallel, a large display visualization area, very high processing power, a collaborative workspace, stereoscopic capability, and workflow serialization.
It is a good tool for scientific visualization.
The submission must describe original primary research within the Scope of the journal. -> Ok
The submission should clearly define the research question, which must be relevant and meaningful. The knowledge gap being investigated should be identified, and statements should be made as to how the study contributes to filling that gap. -> There is no research question.
The investigation must have been conducted rigorously and to a high technical standard. -> There is no experiment.
Methods should be described with sufficient information to be reproducible by another investigator. -> Ok
The research must have been conducted in conformity with the prevailing ethical standards in the field. -> Ok
The data should be robust, statistically sound, and controlled. -> There is no data gathering.
The data on which the conclusions are based must be provided or made available in an acceptable discipline-specific repository. -> All conclusions are speculative.
The conclusions should be appropriately stated, should be connected to the original question investigated, and should be limited to those supported by the results. -> This is not satisfied in the current writing.
Speculation is welcomed, but should be identified as such. -> It is not identified as speculative.
Decisions are not made based on any subjective determination of impact, degree of advance, novelty, being of interest to only a niche audience, etc. Replication experiments are encouraged (provided the rationale for the replication, and how it adds value to the literature, is clearly described); however, we do not allow the ‘pointless’ repetition of well known, widely accepted results. -> The conclusions are mostly subjective.
Negative / inconclusive results are acceptable. -> There are no experiments.
The article describes encube, a data visualization system aimed at the exploration of large data sets.
There is no doubt that the authors have put great effort into this work, and it seems to present relevant features. However, the text has some serious flaws. Most of them stem from the excessive description of the system, which leads the paper to miss points that are essential in the scientific literature: evaluation, and identification of the work's contributions using objective data.
These are my comments:
“For a comprehensive review of the variety of standard techniques see, for example, Akenine-Möller et al. (2008), Toriwaki and Yoshida (2009) and Szeliski (2010).”
> It would be very useful for the reader to have some (two? three?) figures with examples of standard techniques and how they work. Also, why are there three reviews on the subject in three consecutive years? It may be that there was a lot of innovation in the field during those years. Could the authors state the contribution of each of these reviews?
“Structured three dimensional (3D) images or data cubes are ubiquitous in scientific research.”
> I disagree with that. There are many fields in science that make no use of data cubes. Again, some figures and a deeper discussion would greatly increase the appeal of this paper.
> Regarding “Related Work”: again, I miss some figures that could highlight the differences between the proposed visualization system and the preceding ones. It would probably be a good idea to select only the two or three that provided inspiration for encube.
In Section 5.3
> Without a user study, all of these discussions are based on anecdotal data or speculation. This article needs a user study. My next note also concerns this question.
> The authors state that there are three important questions (“1) how to integrate comparative visualisation and analysis into a unified system; 2) how to document the discovery process; and 3) how to enable scientists to continue the research process once back at their desktop.”). However:
> 1) I am not convinced that this was discussed throughout the article. Again, the reader needs at least some comparative figures showing how encube differs from previous systems. Also, this only accounts for the proposal of the system: proving that this point was properly addressed requires a user study.
> 2) Again, a user study is necessary to validate the proposed system. How did users interact with the discovery process history?
> 3) By “research”, I am guessing the authors mean “discovery” or “exploration”? The server-based service is a good solution for this. The user study, however, should highlight whether the server-based approach actually improved the data exploration experience.
I believe that the core of the work (the proposed system) is of good quality, and that the authors should carefully address the issues above prior to publication.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.