All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Thanks for your submission and for your work on the revision.
Both reviewers found your study to be important, valid and well presented. The reviewers made quite specific suggestions for "minor revisions" to the text and especially to figures. These revisions will make the paper clearer and more effective.
Please implement all of the suggested revisions or explain why you did not.
I look forward to seeing this paper published soon.
Although lengthy, the paper is well written, with clear language and structure. It provides a thorough overview of the literature and introduces the reader to the subject matter of the experimental work. The paper introduces several hypotheses that are tested, and the results are discussed and summarized at the end of the manuscript. The results are, for the most part, presented in a clear way. Some of the figure captions could be improved with additional context and explanations.
The work addresses an important area of shared virtual experience in the context of sound, which has been by and large neglected by the VR community.
The experimental design section needs to provide more information on the subjects. It is only mentioned in the abstract that 52 users were tested, while no specific information is given on the subjects' background, age, or recruitment process.
Information on the experimental hardware setup should be provided: what VR headset was used, how the user's position was tracked, and how the hands were tracked (input device).
The experiment includes four different conditions that the subjects experienced. The multitude of conditions poses a threat of biasing the subjects; however, the authors address the learning effect through randomization and repeated use. The final analysis is performed on the last two sessions, which seems adequate to address the issue of learning.
The results are discussed in detail, and the authors also provide a summary of the results and several key findings that could be used as guidance in designing collaborative spaces.
Captions for Figures should provide additional information on what results are presented.
Figure 4: it is not fully clear how this was presented to the users. Could examples be added showing what the users saw in each case?
Figures 5 and 6 should include the value of N (number of measurements). The chart labels for questions could perhaps also include the themes of questions Q1-11, as in Table 2 (in parentheses next to the question). Why are questions labeled CQ1-11 there, while the labels in the figures are Q1-11?
Figure 7: the caption should explain what the labels mean, e.g. location (shown as dots), direction (shown as arrows), etc. Why was this particular group selected?
Figure 8 needs more explanation. Is this a top view? How was it measured?
Page 13, last paragraph: "The redder/bluer..." should be rephrased to refer to the color intensity.
The authors present a shared virtual environment system called LeMo, which was developed to support collaborative music making between pairs of people. This study tested four boundary configurations of shared and personal space within the environment, with the goal of determining which configuration is preferred for creative collaboration by users. The authors build on their previous work in understanding personal space, or territory, within the virtual environment. In this paper, the territory is manipulated through an experimental design and tested with 26 pairs of participants.
Basic reporting was overall okay, but needs minor revisions. I would like the authors to address the following items:
Lines 102-104 need more clarification as to how the authors addressed the concern in their project, and how it differs (or not) from this study. Additional context would help here.
Line 106, define 'tabletop'.
Lines 171-172, address how the new system differs from the old one, and briefly why the changes were made. The authors also note that Figure 2 is from the other 2019 study, which is confusing to readers since it was stated that the new system is very different. Did the interface not change?
The results section could use clarifications. Please review and clarify the language in lines 350-414.
Figures 6 and 7 have corresponding colors to indicate conditions, but the distinction is not strong on black/white printouts. Please make it more clear without using color which condition is which.
Line 464: What is Cpi? I think this is a typo.
There were multiple places where what appeared to be research questions were stated (lines 46-47 and 76-78). Please clarify which set of questions you sought to answer, and ensure that they are stated consistently wherever they are repeated in the paper.
The experimental design itself was fine with four independent variables. The authors accounted for ordering effects through counterbalancing. The questionnaire and interviews after seem standard.
Hypotheses were clear.
I do have a concern about the Post Session Questionnaire in Table 1. The questions were either phrased positively or neutrally, with no negative statements (e.g. "The spatial configuration of this virtual world was difficult for me to understand"). Please provide a rationale for using only positive or neutral phrasing in the questionnaire.
The description of how the study was conducted was thorough. The statistics provided were sufficient to understand how each condition performed.
Interview transcription and analysis was clear. The theme on learning effects was especially useful seeing as this is a limitation of such work, so it was important to address this in the results.
Aside from the language clarifications requested above, this study was controlled and methods/procedures were implemented in a standard way.
The participant quotations brought the statistical findings to life and provided a deeper level of analysis to the themes that arose out of the questionnaire. Removal of the LeMo feedback (lines 618-621) kept the paper in focus.
There are minor grammatical errors in the Discussion section. I recommend a grammar check to rectify these errors.
The discussion related back to the results of this study and tied in with the results of the authors' previous study of the same system. It would be helpful if the authors restated their research questions at the beginning of the discussion and used them as a framework for its structure.
Key findings are clearly stated and are based on the results of the study. Any speculation was presented using terms such as 'suggest' or 'seems'. Design implications are also based on the results and the discussion.
Overall the findings and design implications are interesting and of value for researchers working in the creativity field and/or the virtual environment space.
No additional comments.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.