Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on November 15th, 2018 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on February 7th, 2019.
  • The first revision was submitted on February 28th, 2019 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on March 22nd, 2019 and was reviewed by the Academic Editor.
  • A further revision was submitted on April 3rd, 2019 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on April 4th, 2019.

Version 0.4 (accepted)

· Apr 4, 2019 · Academic Editor

Accept

Thank you for responding to my comments; I am pleased to recommend your paper for publication in PeerJ. I do have one final comment that you might want to consider in terms of formatting and copy-editing: I wonder if the outline of the intervention structure (i.e., the breakdown by week, lines 213-277) might be better presented in a table?

# PeerJ Staff Note - this decision was reviewed and approved by Stephen Macknik, a PeerJ Section Editor covering this Section #

Version 0.3

· Apr 1, 2019 · Academic Editor

Minor Revisions

Thank you for responding to the Reviewers' comments in revising your manuscript; I believe that this has strengthened the paper. However, I have two final comments that should hopefully be straightforward to address:

1) The final line of the abstract states that the feedback was positive. Given that there were a number of neutral comments, I believe that this should be qualified to state "feedback was generally positive".

2) Apologies for not picking up on this before. The introduction highlights the need to "provide an app together with human coaching". However, the Methods state that the coaching aspect of the app is for the development phase only. If this is the case, what are the implications of this feasibility/pilot study going forward (and how useful are the findings)? Might this aspect of the app actually be needed (and if not, how will you know this)? This needs to be explicitly addressed in the discussion, as going forward the app could be missing an important component.

Thank you for revising the manuscript, and apologies again for not picking up on point 2 earlier.

Version 0.2

· Mar 18, 2019 · Academic Editor

Minor Revisions

Thank you for revising your manuscript. Both reviewers have responded to your revision, and I would like to thank them for their effort. Overall, the reviews noted that the manuscript was improved, but there are a number of areas that still need to be addressed. The reviews are appended below and I won’t reiterate all the comments. Please ensure you address all reviewer comments; however, I believe that the following issues warrant particular attention when revising your manuscript:

1) Reviewer 1 has suggested that the study would be better described as a feasibility study, and I agree with this suggestion. The lack of a control group is a significant limitation of the paper, but this is mitigated somewhat if the focus is more on program development and a feasibility/pilot assessment of the app. Related to this, I believe that the nature of the app/intervention development should be captured in the title, and that, given the limited information on the cost of the app, it might be better to remove the reference to cost in the title. Perhaps something along the lines of "Development and feasibility study of an app (Ladle) for weight loss and behaviour change".

2) I agree with Reviewer 2's comment that, given there is no formal analysis, the comparison information would be better placed in the discussion. The suggestion to keep the results as simply a clear description of the intervention development and a descriptive evaluation of the intervention also maps closely onto the point above about treating the study as more of a feasibility study or pilot.

3) I agree with Reviewer 2's comment that the comparisons of the Ladle app with other programs still need clarification, and need to be reported when they reflect both favourably and unfavourably on Ladle. This is critical for contextualising the findings.

4) Related to my comment about mentioning "low cost" in the title, I would recommend limiting the mention/discussion of cost throughout the paper; as far as I can tell, there is absolutely no information on how the estimate of £20 was derived. Perhaps discussion of cost would be better suited to a more rigorous evaluation of the program, with a more formalised health economics/cost-effectiveness component?

Reviewer 1 ·

Basic reporting

No comment

Experimental design

To make the design of the study and the rationale for not having a comparison group clear to the reader, I would suggest that the authors consider whether the study would be better described as a feasibility study. Under a broad definition, a feasibility study helps investigators prepare for full-scale research leading to intervention. I would suggest referring to the following paper to evaluate this suggestion: Am J Prev Med. 2009 May;36(5):452–457. doi: 10.1016/j.amepre.2009.02.002.

Validity of the findings

No comment

Additional comments

In my opinion, the authors have been able to perform substantial improvement and the paper is much clearer in the present version. However, I still have some comments:

1. To make the design of the study and the rationale for not having a comparison group clear to the reader, I would suggest that the authors consider whether the study would be better described as a feasibility study. Under a broad definition, a feasibility study helps investigators prepare for full-scale research leading to intervention. I would suggest referring to the following paper to evaluate this suggestion: Am J Prev Med. 2009 May;36(5):452–457. doi: 10.1016/j.amepre.2009.02.002.

2. I suggest adding to the limitations that recent evidence suggests the role of breakfast in weight loss is questionable; this evidence was not available at the time the behaviour change intervention was developed (BMJ 2019;364:l42).

3. Lines 390-405: Please insert citations for all the studies that are being compared to the present one.

Reviewer 2 ·

Basic reporting

The article is clearly written using professional English throughout. The following amendments would resolve ambiguities:

It is stated that the completion definition was set to reflect existing research. Please cite the research that supports this definition.

Typo on line 471: should be 'it' rather than 'in'.

Line 494: "completion is higher than a few of the intensive interventions". But this does not hold if you use the same definition of completion; please clarify and cite the comparison studies you are referring to.

Line 499: clarify that this is 52% of completers.

Please provide a reference for lines 501-503.

Table 2 needs a note defining the abbreviation ITT. Also, the term "finishers" should probably be removed from the column headings, as this is not correct for all data in that column, and those that do use completers data should be delineated similarly to those using ITT.

The terms completer and finisher are used interchangeably. While they clearly mean the same thing, consistency would aid the reader. (Personally I prefer completer, but perhaps it is just my misspent youth playing computer games that considers "Finisher" to have aggressive connotations.)

Experimental design

I do wonder if the comparison information should be in the discussion rather than the results. The choice of studies for comparison was not systematic and there is no formal analysis. Perhaps having claims of "lower" or "higher" in the results section adds too much weight to these comparisons, particularly when other characteristics are so unbalanced. Placing the comparisons in the discussion, perhaps in broader terms such as "comparable", would be more appropriate. The results could then simply be a clear description of the intervention development and a descriptive evaluation of the intervention. However, I do not have strong feelings on this latter point, and I do think that Table 2 is helpful.

The role of the "Expert support" should be clarified (p279): "There was also space for participants to ask questions and comment on any aspect of the course. Responses from a Ladle trained professional were given within 24 hours." Apologies that I missed this the first time round. Is this human support element an integral part of the app, and is it included in the estimated costs?

Validity of the findings

The authors have added some clarification to the results section on comparative effectiveness, but they are still not consistently explicit about the differences in definitions of completion, timing of follow up, and analysis method (ITT vs intervention completers). This is absolutely essential for ALL of the comparisons made. Comparisons of mean weight loss are also only described where they reflect favourably on Ladle.

An example is L534: "superior to that achieved by other apps with no human coach." I think you mean "some other apps" as you did not do a systematic search and comparison, and did not do a statistical comparison.

Other problematic phrases include
L33 - just say comparable
L390-393 - clarify differences between this study and comparators
L399-400 - ditto
L404 - what is meant by "similarly measured"
L484 - is this a valid comparison?
L490 - ditto
L522-524 - ditto
L525 - There is insufficient evidence to claim this is "as effective as other approaches"

Version 0.1 (original submission)

· Feb 7, 2019 · Academic Editor

Major Revisions

Our apologies for the delay in handling your article. The Academic Editor who was originally handling it is currently unavailable, and so I was asked to step in and take over the Editorial responsibility for this submission.

Thank you for submitting your article to PeerJ. We have now received two reviews and I would like to thank both reviewers for their thoughtful assessments of the manuscript. The reviews are appended below and I won’t reiterate all the comments. However, both reviewers have highlighted important issues that need to be addressed in any revision.

Please ensure you address all reviewer comments; however, I believe that the following issues warrant particular attention when revising your manuscript:

1) Reviewer 1 has recommended some important clarifications to the methods and analyses, and I agree that including more information on these would strengthen the paper. Some of these clarifications (e.g. “authors should acknowledge that comparisons to other studies' retention rates might not be valid since populations and interventions might differ substantially from the one in the present study”) are also relevant to the point below.

2) The comments from Reviewer 2 regarding the comparisons made between the current data and previous studies, as well as those regarding criteria/outcomes (and the definitions of these across studies) are important and need to be seriously considered in any revision of the paper. A number of useful references have been provided, and I agree with Reviewer 2 that the literature review needs to engage more with recent literature.

Thank you again for submitting your article to PeerJ. I hope the reviewers’ comments are helpful in revising your manuscript, and that the points above are useful in focusing your response.

Reviewer 1 ·

Basic reporting

- Language is appropriate.
- Literature review is sufficient and updated.
- Raw data was shared appropriately. I suggest that results on the repeated measures should be presented as a table rather than in the text (lines 316-318).
- In regards to results, I suggest the authors provide dispersion parameters of the data on weight loss (e.g., standard deviation, 95% CI, interquartile range) - lines 322-324.

Experimental design

- The article is in accordance with the journal's aims and scope.
- The research question was well contextualized by the authors.
- There was no explicit statement about how the authors obtained informed consent from participants.
- I think the Methods need some clarification in regards to design, power to detect the expected outcomes, outcomes assessment, and data analysis (see detailed comments below).

Validity of the findings

- It is not possible to assess the robustness of the data, as the authors have not provided details on sample size, power, or statistical analysis.

- Overall, conclusions are supported by the results. However, the authors should acknowledge that comparisons to other studies' retention rates might not be valid, since populations and interventions might differ substantially from those in the present study (i.e., it was not controlled).

Additional comments

I commend that the article described in detail the rationale and development of an app targeted to people who want to lose weight. However, I think the report of the study itself and its results needs some clarification in regards to the following issues:

- The authors claim that the app is low cost. However, there is no information in the article in regards to production costs.

- In order for readers to be able to assess the power of the study, I suggest the authors report how they managed sample size issues (sample size calculations, for example).

- The authors should clarify the exclusion criteria and whether the criteria for 'starters' and 'finishers' were established prior to or after the intervention.

- It is not clear whether outcomes on weight loss were assessed by measurement or by participants' self-report.

- It is not clear in the Methods (lines 258-262) that the comparison to other weight loss interventions was made against the results of previous studies. I suggest making a clear statement on this issue to avoid misunderstanding about the design of the study.

- The description of the data analysis in the paper is too simplistic. I suggest that the authors inform readers of the statistical tests that were applied, how missing values were handled, and the statistical software that was used.

- In lines 322-324, the authors present the results on absolute weight loss. To allow a more comprehensive understanding of these numbers, I suggest that the authors provide dispersion parameters of the data (e.g., standard deviation, 95% CI, interquartile range).

- The authors should acknowledge that comparisons to other studies' retention rates might not be valid, since populations and interventions might differ substantially from those in the present study.

Reviewer 2 ·

Basic reporting

The article is well written and clear with good use of English.

The literature review is a little superficial, focussing largely on a systematic review from 5 years ago, rather than reviewing more recent studies. The discussion does not describe the limitations of this evaluation and it is important to do this.

I think there is a typo either in the abstract or the method. In the abstract, it says the intervention has 36 hours of audio lessons. In the method it says 36 short audio lessons of 5 minutes. I suspect the abstract is wrong.

Experimental design

This is a really interesting intervention and it is important that more apps are developed by experts in psychology of eating behaviour and obesity and are properly evaluated. However, I have some reservations about the comparisons made in this evaluation. The choice of studies seems to be somewhat arbitrary. This is a research study rather than a service evaluation (i.e. participants are recruited by advert, must meet strict inclusion criteria, consent to participate), but the comparisons made with commercial weight loss programmes use a service evaluation (Ahern, 2011) and a service evaluation non-inferiority analysis that does not give key information used in the comparison (Madigan 2014; see Table 1).

A number of other service evaluations of Slimming World have been conducted which provide the information required (e.g. Stubbs 2015 https://www.ncbi.nlm.nih.gov/pubmed/26359180), and several large trials have been conducted using Weight Watchers (Ahern 2017 https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)30647-5/fulltext) and Slimming World (Aveyard 2016 https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)31893-1/fulltext). What is the rationale for not using the intervention arms from the trials? In addition, I would have expected to see a comparison with the Power + trial (Little 2016 https://www.thelancet.com/journals/landia/article/PIIS2213-8587(16)30099-7/fulltext), which uses a web-based intervention with nurse support.

I would like to see a more comprehensive and appropriate use of comparators, but as a minimum I would like to see a justification of the comparison studies chosen in the methods.

The choice of which criteria are compared should also be justified. Currently the authors compare the % of completers, but completion is defined differently across the studies. For example, in this study completion is defined as 1/3 of the course, whereas in Ahern 2011 it is defined as 100% of the course and in Logue 2014 it is 1/2 of the course. If you apply the Ahern 2011 criteria to this study, the completion rate is 9%; if you apply the Logue 2014 criteria, it is 25%. While this is described in the paper, it is not acknowledged in the discussion of findings. Were the completion criteria defined prior to the data analysis or after examining distributions?

Weight change for all who engaged is not provided for comparison with other studies.
The discussion also focusses on % of completers losing 5%. It ignores the differences in median weight loss, which appear to be more substantial.

Validity of the findings

The findings of the evaluation are reasonably well described. The comparisons with other studies are somewhat haphazard and superficial. I would expect more rigour in terms of study selection, and I would want more discussion of the limitations of comparisons across studies, including definitions of outcomes, demographic make-up, recruitment methods, etc.

Additional comments

I really like the intervention and I think it has a lot of potential. I would like to see this evaluation published. However, the comparisons of effectiveness/completion rates with other studies needs to be revised to be more transparent and to acknowledge limitations.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.