Review History

To increase transparency, PeerJ operates a system of 'optional signed reviews and history'. This takes two forms: (1) peer reviewers are encouraged, but not required, to provide their names (if they do so, then their profile page records the articles they have reviewed), and (2) authors are given the option of reproducing their entire peer review history alongside their published article (in which case the complete peer review process is provided, including revisions, rebuttal letters and editor decision letters).



  • The initial submission of this article was received on August 22nd, 2015 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on September 17th, 2015.
  • The first revision was submitted on October 12th, 2015 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on October 18th, 2015.

Version 0.2 (accepted)

Academic Editor


I thank the authors for very carefully addressing the points made by the reviewers and documenting these very clearly in their letter. I do think these points make the paper considerably stronger as a publication. I am now very happy to accept the paper for publication.

Version 0.1 (original submission)

Academic Editor

Minor Revisions

This is an interesting and generally well-presented paper. The reviewers have raised a small number of issues which need to be addressed before the paper is ready for publication, but I hope that these issues will not be difficult to address. We look forward to receiving a revised version.

Reviewer 1

Basic reporting

• Missing data sets and code for the application. I am a novice at this particular type of review/publication, but careful reading of the review documents seems to say that the authors should make the raw data available, as well as the code or a downloadable version of the application that was used.

• The references are good but are missing this one:
Sveistrup, H., et al. Experimental studies of virtual reality-delivered compared to conventional exercise programs for rehabilitation. Cyberpsychology & Behavior, 2003.

• A mention of a 'Coach' first appears on the 11th page of the paper. There should be a description of the function and role of this Coach. Is it a Wizard-of-Oz type person? Is it driven by rules or by an algorithm of some kind? Does its interaction and content differ from participant to participant?

• More formal definitions would be helpful for:
− Social Learning
− Social support
− Social facilitation
− Normative Influence

Experimental design

• There is a skewed woman-to-man ratio (almost 3:1). However, the research I can find seems to disprove the 'common sense' notion (please don't blame me for pervasive sexist stereotypes) that females are more responsive to social cues and reinforcement than males. So I would take a look at, and possibly add, these references:
o Ryan, R.M., et al., Intrinsic motivation and exercise adherence. International Journal of Sport Psychology, 1997. 28: p. 335-354.
o Koivula, N., Sport Participation: Differences in Motivation and Actual. Journal of Sport Behavior, 1999. 22: p. 360.
and also discuss why the 28-to-9 proportion does not invalidate the generalizability of the results across genders.

• Really confusing second hypothesis. If I read this right, the authors were measuring how many control-group members exercised together when there was no way for group members to know whether they were exercising together, versus the social group, who knew when other people were exercising. This is not a good hypothesis unless the aim was to test the chance rate against the test group. If that was their purpose it makes sense, but it needs to be stated more clearly.

• They might have tested for self-efficacy to remove that variable.

• They might have done some testing of the ease of use of the interface and app to make sure that was not an issue.

Except for the above, the design was excellent, and the consideration of possible confounding factors (i.e. weighting the social groups in H2) was also good.

Validity of the findings

See the discussion on gender imbalance above.

Comments for the author

I was surprised, in doing my literature search, that there has not been more research done on this topic. The subject of virtual presence as motivation or to encourage socialization has been quite prevalent (especially in CSCW and HCI), but this is one of the few studies of it I have read.

My suggestions for improvement are to:
1) Clarify the gender issue
2) Explain the persona and action of the coach
3) Expand the section on the second hypothesis to prevent readers from thinking that it is either just common sense (i.e. if I can see you, it is more probable that I may exercise with you) or trivial (why measure random occurrences, i.e. exercising together purely by chance and with no knowledge that I am doing so?)


Reviewer 2
Basic reporting

Overall I think this is a very good paper, interesting and well laid out for the reader. The article will add to the field of knowledge in this area. The literature review is appropriate and well referenced. The methodology seems sound, but I have some questions about the measurements used which should be answered prior to publication. I was unable to access the raw data as suggested by the criteria for the review. There did not appear to be any files present with raw data.

Besides the Frailty Index, were there any other inclusion/exclusion criteria? Please put this information into your text.

I think it is important to give background information as to whether or not the study participants (both groups) were previously experienced tablet users, and if so, how much experience they had. If not, how did you train them? Also, had they participated previously in exercise programmes, either on- or off-line? What were their motivations to participate in this study? Please expand a little on what the training involved, how long it took, etc. Did the training involve teaching users how to use the tablet, or just how to use the programme?

How much support did users from both groups need during the study? Was there any difference in the support requirement between the groups?

Were the study participants living alone or with others? Did living status have any effect on the results of the exercise participation?

How was participation measured? Was it just by signing in and agreeing to exercise, or did the participant have to engage regularly with the tablet? Were participants' fitness levels measured before and after the programme to see if there was any difference? How do you know that participants actually carried out the exercises and did not just interact with the tablet?

How was completeness measured? The method of measurement should be explained to the reader.

While the issues I have mentioned are minor enough that they could be verified by the editor before publication, I am happy to review the changes if required.

Experimental design

No comments

Validity of the findings

Once the questions on how measurements were made and activity was verified are answered and explained for the reader, the findings should be sound.

Comments for the author

In the attached PDF I have highlighted a few areas to suggest where the English may be improved, but overall it is well written. I have also entered sticky notes in places where I was confused or felt more explanation was needed for the reader. These issues should be addressed before publication. Thank you.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.