Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on April 1st, 2020 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on May 5th, 2020.
  • The first revision was submitted on June 18th, 2020 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on June 20th, 2020.

Version 0.2 (accepted)

· Jun 20, 2020 · Academic Editor

Accept

I have now read your response to the referee feedback and the revised manuscript and am satisfied that you have addressed the concerns of each. The referees were already enthusiastic about your manuscript, and I find that it has been improved through revision.

I am confident that the referees would be satisfied with the revisions and see no reason to waste time with further review that will only delay this decision. Thus, I am happy to move this revised manuscript forward into production.

I believe that this is an extremely useful paper, and I thank you for choosing PeerJ as the venue for your work.

[# PeerJ Staff Note - this decision was reviewed and approved by Dezene Huber, a PeerJ Section Editor covering this Section #]

Version 0.1 (original submission)

· May 5, 2020 · Academic Editor

Minor Revisions

I now have feedback from 2 referees, who are equally enthusiastic about the value of your manuscript. Each has provided a number of comments to improve the manuscript's clarity. In particular, please consider exactly how pervasive these pitfalls really are, and the somewhat negative tone in which they are communicated. Ultimately, the paper is likely to be better received, and its advice more widely adopted, with a more positive tone, so incorporating this constructive feedback is to your own benefit.

Overall, the recommended changes are minor and should be easy to incorporate into your revision. I look forward to reading your revised manuscript.

Reviewer 1

Basic reporting

The review will be of broad interest to biologists, ecologists and other scientists using mixed models to deal with difficult data sets. I've seen a number of texts and reviews on mixed models, but this manuscript focuses explicitly on 7 common pitfalls, using easily understandable language and examples. The introduction is excellent. There are a few unnecessary adjectives and some overly complex word choices, but these are easy to fix; otherwise, the writing is excellent.

Experimental design

The study focuses on 7 common pitfalls for scientists using mixed-effect models. I would say the selection of pitfalls is broad-ranging and doesn't appear biased. Some of the pitfalls, however, contain little referencing: for example, pitfall 1 has one reference, and pitfall 2 has none. The referencing could be improved, at the very least to show that these are well-known pitfalls in biology. The review is organized logically.

Validity of the findings

I am not a statistician, so please look to the other reviews for advice on the robustness of the statistical pitfalls and their reporting.

Additional comments

I am a regular user of mixed-effect models, and yet I find that I do not know many of these pitfalls. This paper is an excellent resource for readers like me.

I think one detail missing from the paper concerns the "detection" of pitfalls. I realize the authors cover this briefly in the section "Detecting and resolving problems with mixed model estimation". However, I think the paper would benefit from suggesting explicit tests for each of the pitfalls, integrated into Table 1 (as a column) as well as into the supporting R code. Many readers will not fully understand the textual descriptions of the pitfalls or the R code supporting them; they will just want to know how to detect whether they have an issue or not.
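To make this concrete, here is a minimal sketch of the kind of check that could sit in such a Table 1 column (the data and variable names are my own placeholders, not the authors'; isSingular() and VarCorr() are standard lme4 functions):

    # Hypothetical fit: 'mydata', 'response', 'treatment' and 'group'
    # are placeholder names, not taken from the manuscript
    library(lme4)

    fit <- lmer(response ~ treatment + (1 | group), data = mydata)

    # One possible detection check: has a random-effect variance
    # collapsed to zero (a singular, boundary fit)? This often flags
    # too few groups, or a structure the data cannot support.
    isSingular(fit)   # TRUE warns of a boundary fit
    VarCorr(fit)      # inspect the estimated variance components

A column of such one-liners in Table 1 would let readers test their own models directly.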

Minor comments:
Line 22: Awkward sentence, and perhaps "careless" instead of "incautious."
Line 103: remove "extreme" and be careful with unnecessary adjectives.
Line 115: Word choice: "Anathema." It feels like the authors are using unnecessarily complex words in places. Keep the language simple and readable, just like you're keeping the statistical explanations simple. People will thank you for it!
Line 218: "provided"

Reviewer 2

Basic reporting

This manuscript presents seven perils and pitfalls of mixed-effects models. It is very well written, clear and concise, and I therefore expect it to be useful to a broad audience. Nothing in the manuscript is entirely new, but the combination, I hope, will create awareness of these particular issues and thus promote better statistical modelling across fields.

Experimental design

The manuscript is very well presented, and I really like how simple, clear simulations are used to illustrate the main points. This is very well done.

Validity of the findings

The points raised in the manuscript are all reasonably obvious. My only small concern is that the manuscript has a slightly negative overall tone, as if there were many hidden pitfalls. Some of these, however, are rather special cases, such as when the number of groups is very low. The manuscript is very good at highlighting specific problems, so I think the general message could be a little more encouraging: there are pitfalls, but they can be clearly identified and are manageable.

Additional comments

I’ve made one of my main points above (a slightly more optimistic/encouraging tone); this can be addressed with quite minor changes to the manuscript. I’ve got a few more specific comments, which I list below. And I’ve got one pitfall that I think is worth adding:

Random effects do not control for all kinds of non-independence. I’ve frequently seen people fitting random effects in cases where outcomes are not necessarily positively correlated within groups. A common case is choices among alternative outcomes measured in the same trial (e.g. time spent in different parts of an arena in behavioral trials), or the area covered by different plant species that compete with each other. In such cases, outcomes are actually negatively correlated, and a random effect TRIAL or PLOT does not control for this kind of non-independence. There seems to be a rather widespread belief that random effects take care of everything whenever group levels share the same name. This point is related to Peril #2 and Peril #7.
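To illustrate with a toy simulation (the code and all names are mine, purely for illustration): two outcomes per trial that sum to a fixed total, e.g. time spent in two halves of an arena, are negatively correlated by construction, yet a random intercept for TRIAL cannot represent this.

    library(lme4)

    set.seed(1)
    n_trials <- 50
    total    <- 60                        # fixed trial duration
    time_a   <- runif(n_trials, 10, 50)   # time spent in one half
    dat <- data.frame(
      trial = factor(rep(seq_len(n_trials), 2)),
      time  = c(time_a, total - time_a)   # the two halves sum to 'total'
    )

    fit <- lmer(time ~ 1 + (1 | trial), data = dat)
    VarCorr(fit)  # the trial variance is estimated at (or near) zero:
                  # a random intercept can only induce POSITIVE within-
                  # group correlation, so the negative dependence is
                  # silently left unmodelled

The model fits without complaint (apart from a singularity warning), which is exactly what makes this pitfall easy to miss.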

Specific comments

L24: I don’t think “as long … are conservative” is the ultimate goal. If that were the case, we would use lower alpha levels. Something like “as long as statements are sufficiently cautious and well-informed” sounds better.

L92: Update the reference to R. The most recent versions have a reference to 2019 or 2020. Furthermore, the “authors” of R are called “R Core Team”.

L123 (and throughout the section): Since I don’t see why one would want to do model simplification in the first place, I don’t like that it seems to be the default here. I am aware that this is frequently done, but the manuscript could be a bit more nuanced. Forstmeier et al. (2017, Biological Reviews 92: 1941-1968) is a useful reference.

L128: Suggest “it is safe” for style.

L128 and L144+: I think this is overstating the issue. The simulation is quite extreme: 12 datapoints in 6 groups, which is really a paired t-test situation. With 100 datapoints in 12 groups, the issue is almost completely gone (L156+). When people use mixed models, they typically have many datapoints, so the second situation seems much more realistic and the issue overall rather minor.
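For what it is worth, a rough null simulation along these lines (my own illustrative code, not the authors': a group-level predictor with no true effect, 6 groups of 2 observations, naive normal-approximation p-values):

    library(lme4)

    set.seed(1)
    pvals <- replicate(1000, {
      g <- factor(rep(1:6, each = 2))   # 6 groups x 2 observations
      x <- rep(rnorm(6), each = 2)      # group-level predictor, no true effect
      y <- rnorm(6)[g] + rnorm(12)      # group effects plus residual noise
      t_val <- coef(summary(lmer(y ~ x + (1 | g))))["x", "t value"]
      2 * pnorm(-abs(t_val))            # naive z-test on the t value
    })
    mean(pvals < 0.05)  # clearly above the nominal 0.05 with 6 groups;
                        # increasing the number of groups brings it back down

This supports both halves of my point: the anticonservatism is real at 6 groups, and it largely vanishes with more.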

L162: Suggest “do not cure all pseudoreplication”

L174-177: This is a rather heavy sentence for a manuscript that is meant to be accessible at an entry level. Furthermore, I find the issue again a bit overstated, because it applies only to the case of few groups with many observations.

L194: This is one of the few places where the manuscript is too unspecific. What does “must have a clear understanding” mean in practice? Being more specific helps readers to avoid the issues.

L213: “absurd” seems overstated and unnecessary.

L213: “always” seems overstated and unnecessary.

L291: A rather complicated sentence for an entry-level manuscript.

L323: I think the issue is a bit overstated. Often enough, some coarse control for spatial (or temporal) structure is better than none at all. Conversely, absence of random-effect variance at a coarse grouping does not mean absence of spatial structure. Random effects do not take care of all non-independence (my point above).

Reference list:
- Several references lack volume and/or page numbers.
- The reference to lme4 is Bates et al. 2015 J Stat Software.
- First and last names are mixed up in Dormann et al. 2007.
- Fletcher et al. is lacking information on editors.
- Harting 2020 is incomplete.
- Lüdecke “sjPlot”

Table 1:
Some suggestions to be more specific about the actual perils:
#1 Anticonservative significance tests at low sample size.
#2 Pseudoreplication with group-level predictors
#4 Random-intercepts when random-slopes are needed (or: Random-intercepts when groups vary in response to treatment)

Table 1:
The example for 5a is actually not clear. It is not obvious from the example that there is within- and between-group variation, and that the causal relationship might differ between levels. By the way, this is something to clarify in the main text as well: the problem is specific to cases where causal relationships vary across the hierarchy (surely this does occur, but it might not be all that common).
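One way to make the within/between distinction explicit, in the example and in the text, is within-group centring (a standard device, e.g. van de Pol & Wright 2009, Animal Behaviour; the variable names below are placeholders):

    # Placeholder names ('dat', 'x', 'y', 'group'); not the authors' code
    library(lme4)

    dat$x_between <- ave(dat$x, dat$group)   # group means (between-group part)
    dat$x_within  <- dat$x - dat$x_between   # deviations (within-group part)

    fit <- lmer(y ~ x_between + x_within + (1 | group), data = dat)
    # If the two slopes differ, a single pooled slope for x would
    # conflate two different relationships across the hierarchy

If the example in 5a were phrased so that the between-group and within-group slopes visibly differ, the peril would be immediately obvious.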

Table 1:
In example 6, I suggest removing “binary”.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.