To increase transparency, PeerJ operates a system of 'optional signed reviews and history'. This takes two forms: (1) peer reviewers are encouraged, but not required, to provide their names (if they do so, then their profile page records the articles they have reviewed), and (2) authors are given the option of reproducing their entire peer review history alongside their published article (in which case the complete peer review process is provided, including revisions, rebuttal letters and editor decision letters).
The authors adequately addressed the final reviewer comments.
The reviewers raised a few minor additional comments that need to be addressed before the manuscript can be accepted.
I thank the authors for their changes. The grammatical changes make the manuscript much easier to read.
Thank you for clarifying the analyses and for including analyses with the healthy control sample.
The only thing that I would recommend would be to avoid stating that you found a non-significant trend (abstract and conclusion). Otherwise, using the same logic, we could argue that the correlation between PSS and the somatic focus subscale of the TSK (p=0.03) is actually supporting a trend towards non-significance (given that it is quite close to p = 0.05).
I would just state that you had non-significant results. You could then clarify that based on your findings, you could not rule out inadequate power as a reason for this (given the higher variability in PSS than anticipated).
Excellent work on the revisions!
Still very happy with this.
I think this has been greatly improved, particularly with regard to clearly labelling primary and exploratory analyses.
The new version is much more faithful to the results in my view. I appreciate the timing of data collection may have preceded the real push for publishing protocols etc., but I do not think that the rarity of the practice is very good justification for not doing it. I am pleased the authors mention this in limitations. I am also still submitting papers that were started before we adopted this policy and I too see it as a limitation of those papers.
I think the response to all reviewers has been excellent. I think the paper is very interesting and moves the field forward.
I think the authors have done a good job in responding to my and the other reviewers' previous comments. There are a few typos throughout (e.g. in line 40 "my" should be "by" and in line 387 "we the spatial bias" does not make sense) but otherwise I think the paper is suitable for publication.
While the reviewers were generally positive about the clarity of writing and data sharing, they also raised several concerns regarding the reporting and interpretation of the results. The authors followed the detailed and constructive feedback from the reviewers to revise and improve their manuscript.
This paper does an excellent job referring to past literature both in terms of background and referencing relevant past work.
Raw data is shared - one small suggestion would be to also provide a key that identifies what each abbreviation refers to (for example, add an extra tab to the spreadsheet entitled Variable Key that has one column that lists each variable name and one column that provides an explanation of what the variable name refers to).
Overall, professional article structure.
I would recommend an extra read of the paper by a person whose first language is English. Overall, the paper is very well written and clear, but there are some sentences where the grammar used is a bit awkward and could be improved, which would assist in readability. For example, lines 51-52: "Also observed are abnormalities in tactile processing". This could be combined with the first sentence to read: "Temporomandibular disorder (TMD) is typically characterized by chronic pain in the temporomandibular joint that is often accompanied by abnormalities in the processing of tactile input to this area." And lines 60-61: "However, these studies did not include unilateral TMD patients, but also bilateral TMD patients who were "asked about the most painful side"." This could be re-worded to: "However, these studies did not exclusively include patients with unilateral TMD; rather, patients with bilateral TMD were included, using the most painful side (self-reported) as the TMD-affected side in analyses." Also lines 103-104: "The majority of the sample (70%) did not yet receive treatment at the moment of testing". This could be re-worded to "The majority of the sample had not received treatment for their TMD pain at the time of testing". Also line 343 - "To pain further insight...." I am not sure what this is meant to read.
Overall, well done.
Hypotheses: Currently, hypotheses in both directions are given. That is, according to the 'spatial neglect hypothesis', patients will have slower processing of tactile stimuli at the painful vs non-painful joint, whereas based on the 'hypervigilance hypothesis' the opposite is expected. I think it needs to be clearly written that the overarching hypothesis of the paper is that there will be a difference in spatial processing of tactile input in people with unilateral TMD, but based on the literature, the direction of this spatial bias (prioritising painful vs non-painful tactile input) is unclear. At the moment, it is not clear if the authors have gone with the last hypothesis presented as their main hypothesis. Also, it is unclear to the reader why there is no hypothesis specific to comparisons between the TMD patients and the healthy control group (we find out later that this hasn't been compared).
Analyses: I do not understand why a statistical comparison cannot be made between TMD and healthy controls. The current argument is that because people with TMD had pain on different sides (left- vs right-sided jaw pain), the meaning of the PSS is different than in the control group. However, if healthy controls have no difference in spatial processing bias between stimulation on the left and the right, and if TMD patients have no difference in spatial processing bias based on whether they had left-sided TMD pain or right-sided TMD pain, then a between-group comparison should be fine to make. At the least, you could take half of the healthy control sample and switch the test stimuli to the left joint and the reference stimuli to the right joint such that the data is matched to test and reference stimuli of the left-sided TMD patients (n=10 left-sided pain and n=10 right-sided pain). Or perhaps compare the right-sided TMD patients to the whole sample of the healthy controls (and vice versa for the left, switching the healthy control test and reference stimuli). I would suggest including some sort of comparison of PSS between groups in the analysis.
Abstract: I don't think that the first sentence of your abstract's discussion aligns with your results. The results found no difference in spatial processing in patients with unilateral TMD (although I do agree this was close - 0.07 - and may have been underpowered). However, the way it currently reads is that there was a difference in tactile processing even in people without fear-avoidance beliefs. I would argue that this should read: "The results suggest that patients with unilateral TMD show a tactile processing bias toward the painful side of the jaw, but only when high levels of fear-avoidance beliefs are present." Then you could specifically state that this supports the hypervigilance hypothesis.
Similarly, the conclusion of the paper needs to be updated to reflect this.
1. Introduction: Consider adding in the 'why do we care' aspect of spatial processing bias, e.g., understanding the presence and nature of any spatial processing bias may help us determine new treatments for these patients.
2. Methods, Line 115: "attaining the requested performance criteria during the task". At the moment, the reader is unaware of what this refers to. Please reference the appropriate section, e.g., (see TOJ data handling).
3. Cronbach Alpha results for self-report questionnaires: This is commendable that this was evaluated in the study sample. I might suggest providing an overall statement at the start of this section saying that internal consistency was acceptable for all questionnaires and then refer to a supplementary table that has the Cronbach alpha values for all measures. At the moment, this section is quite text/number heavy.
4. Methods, line 210: Perceived intensity of tactile stimuli during testing - if it differed between left and right side of the jaw, was the intensity used updated in real-time during testing? I don't think this is a problem if not, but important to know.
5. Results, TOJ, Line 282: Should this read mean difference?
6. Perceived tactile intensity between painful and non-painful side, lines 295 - 297: Given that tactile stimuli were matched between sides at baseline, is this not just a validity check (i.e., to ensure that any differences in spatial processing were not due to differences in perceived intensity)?
7. Discussion, line 403: contextual factors - please provide an example of what you mean by this.
I think this is very high quality. My only suggestion is that the authors think whether there might be better papers to cite in some instances than their own. This is particularly relevant for the methods and discussion.
I think the study is within the scope of PeerJ.
I think the knowledge gap is defined but should focus on the potential benefit of knowing this stuff in TMD, which is a highly problematic condition.
I think the experimental design has solid fundamentals, but it is not well suited to test the hypotheses the paper suggests it is testing. I actually think that the study is testing the hypothesis that TMD pain patients will have a spatial bias to the affected side. The other hypothesis they mention has not been proposed for TMD and in fact has been discussed as potentially characteristic of just CRPS and perhaps unilateral low back pain. It seems odd, and the rationale is not well presented, to pitch these two hypotheses, which relate to different situations, against each other.
I think any experiment nowadays should build on a protocol and analysis plan that is ideally peer reviewed - if the analysis here was as planned, it would not have got through peer review. These things should also be locked prior to data collection and analysis so the reader can be assured that the study is undertaken in line with the principles of transparency and replicability. That this process was not followed represents a limitation of this work, which should be mentioned in the discussion.
The wider design is in my view highly problematic because there are about 15 analyses and no mention of the risk this presents to findings. Moreover, there is not a clear presentation of what results would be required to support or refute the hypothesis about vigilance. That is, the authors use several measures and several sub-factors but don't stipulate a priori what would be required to refute. This leaves it highly vulnerable to false results and as it stands, I can't interpret the results in light of this substantial threat to validity.
Methods are very well described and could easily be replicated.
The authors do not accept the negative results. Rather, they suggest that, despite the statistical analyses clearly not supporting the hypotheses, a positive result existed. I find this very surprising. This means that the discussion, which focusses to a large extent on the positive result, is inappropriate because the result was negative. In my view, the discussion needs to be completely rewritten to reflect the results, not the predicted results. The authors report that 65% of the patient group did what they predicted they would, but they don't state how many of the control group did. The positive finding on one sub-factor of one outcome for one hypothesis cannot, in my view, be given any weight on the grounds mentioned above.
There is a large literature on spatial biases and their implications and possible interactions with other systems, yet this literature is not really addressed even though the current findings are very relevant to it. That unilateral TMD does not have a spatial bias is important and interesting (I suspect that they may have a smaller one, but your study was not powered to detect that).
I have attached an annotated PDF with a range of specific comments that I hope you find helpful.
The paper is very well-written. It has a clear structure and is easy to follow. I particularly liked the clear explanation of how the data may help us differentiate between two rival hypotheses.
The dataset should have a key to explain what each of the variables are and how they are coded. For example, hand dominance and pain location are both coded as 1 and 2 but I don't know which side is 1 and which side is 2.
The paper meets all of the criteria for this section and I have no suggestions for improvement.
For clarity I would prefer exact p values to be reported rather than "ns" (e.g. lines 118 and 119).
The authors say they used the TOJ analysis guidelines proposed by Spence, Shore and Klein (2001), but they appear to be slightly different. Spence et al said "Participants were excluded for one of three reasons: if any of the eight correlations were less than 0.4, if two or more of the calculated PSSs were greater than +/-250, or if the z score for a particular stimulus pairing at the +250 ms and -250 ms SOAs were, when averaged, less than 1.29 (i.e. less than 90% correct)" (p. 805-806). Could the authors add an explanation for the differences and/or add a note to say whether the results would be any different if using the original exclusion criteria.
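For concreteness, the three exclusion rules quoted from Spence et al. (2001) can be expressed as a simple check. This is a minimal sketch only; the function name and data layout are hypothetical, and the per-participant values (fit correlations, PSS estimates, z scores at the ±250 ms SOAs) are assumed to have been computed already.

```python
def excluded(correlations, pss_values, z_pairs):
    """Return True if a participant meets any of the three exclusion
    rules quoted from Spence, Shore and Klein (2001).

    correlations : the eight r values from the psychometric fits
    pss_values   : the calculated PSS estimates (ms)
    z_pairs      : (z at -250 ms SOA, z at +250 ms SOA) per stimulus pairing
    """
    # Rule 1: any of the eight correlations below 0.4
    if any(r < 0.4 for r in correlations):
        return True
    # Rule 2: two or more PSS estimates outside +/-250 ms
    if sum(abs(p) > 250 for p in pss_values) >= 2:
        return True
    # Rule 3: mean z score across the -250 and +250 ms SOAs below 1.29
    # (i.e. below 90% correct) for any stimulus pairing
    if any((z_neg + z_pos) / 2 < 1.29 for z_neg, z_pos in z_pairs):
        return True
    return False
```

Stating the criteria this explicitly (or noting exactly where the paper's criteria diverge from each rule) would make the requested comparison straightforward.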
I thank the authors for including details of their power analysis. It is a shame that there wasn't sufficient power for the key effect, which is reported as d=0.42 (i.e. medium-sized), to be significant. Future studies should be powered to detect the smallest effect size that would be considered interesting or relevant, rather than that found in previous studies, possibly even with 90% power.
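To illustrate the point about powering for the smallest effect of interest, here is a rough sample-size sketch using the standard normal approximation for a two-sided one-sample (or paired) t-test, n ≈ ((z₁₋α/₂ + z_power) / d)². The function name is mine, the effect sizes are illustrative, and the approximation slightly underestimates n relative to an exact t-based calculation.

```python
import math
from statistics import NormalDist

def required_n(d, alpha=0.05, power=0.90):
    """Approximate sample size for a two-sided one-sample/paired t-test,
    via the normal approximation n ~ ((z_{1-alpha/2} + z_power) / d)^2."""
    z = NormalDist().inv_cdf
    return math.ceil(((z(1 - alpha / 2) + z(power)) / d) ** 2)
```

For example, detecting d=0.42 with 90% power needs roughly 60 participants under this approximation, whereas a smaller "smallest interesting" effect such as d=0.2 pushes the requirement into the hundreds, which is why the choice of target effect size matters so much at the design stage.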
At several points in the results I wondered whether the control was really necessary since the key hypotheses being tested didn't require one and the PSS was not compared between groups. Perhaps the authors could make the need for the control group clearer in the manuscript.
I think the authors could say more about their analyses of the perceived intensity of the tactile stimuli in the introduction and discussion. The intensity on both sides and in both groups was tailored to be a 3 on a 1-5 scale (lines 132-141) so was the analysis comparing sides (lines 295-298) and groups (lines 293-295) intended as a manipulation check or a hypothesis test? What are the implications of the difference between groups? Does this mean that the TMJ group must have experienced increasing pain over time?
On line 340 I think it would be better to say "did not reach significance" rather than "just failed to reach significance". Also I think the authors could be more nuanced on line 338 by saying something to the effect of "which, had it been significant, would suggest enhanced...".
On line 354 the authors note that the correlation should not be overstated given the small sample size, but equally important is that it was an exploratory analysis rather than a hypothesis test, so I think this should be noted here too.
On line 364 I think it is incorrect to say that the results contradict previous findings, since they were not significant. It may be more correct to say that the results failed to support previous findings.
I enjoyed reading the paper and I think it should be published. Although I have written quite a bit in the 'Validity of findings' section, these concerns should be fairly easy to address.
I appreciate that the authors have stated whether most analyses were hypothesis tests or exploratory (which many authors do not do) but I hope that pre-registration of analysis plans will become more common in this field.
A few minor points:
Line 82: "Rivalry hypotheses" should be "Rival hypotheses".
Line 127: "procedure is highly similar as" should be "procedure was highly similar to".
Line 168: is "...your relation..." correct? It would make more sense without the "your".
Line 226: Should this be "(virtual) interval", as with the PSS description on line 227?
Line 375: "and is this feature" should read "and this feature is".
Line 377: "More specific" should be "More specifically".
Line 380 "Anyway" sounds quite informal and "Nevertheless" may be better.
Figure 1 would benefit from a note in the description to say what the scale of the y axis was (-200 to +200?).
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.