Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on November 27th, 2019 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on January 14th, 2020.
  • The first revision was submitted on June 9th, 2020 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on September 18th, 2020 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on October 31st, 2020 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on November 6th, 2020.

Version 0.4 (accepted)

· Nov 6, 2020 · Academic Editor

Accept

Dear Kate and fellow authors,

Thank you for your latest round of revisions. I have looked at your rebuttal, as well as your marked up PDF, and all looks good to me. If I have missed any minor issues, I am sure you and the typesetters will pick them up in the next phases of processing prior to publication.

Congratulations on all your hard work. I look forward to seeing this in print. Oh, and it is great to know that at least two people in the world (i.e., you and I) will now giggle whenever they read details about posterior distributions. We all need a laugh right now.

With the best of wishes to you and your team,

Genevieve

[# PeerJ Staff Note - this decision was reviewed and approved by Bob Patton, a PeerJ Section Editor covering this Section #]

Version 0.3

· Oct 2, 2020 · Academic Editor

Minor Revisions

Dear Kate and fellow authors,

Thank you for putting so much effort into revising your manuscript in line with reviewers' last set of suggestions. These changes were rather extensive, and so I asked Reviewer 2 to have another look at your manuscript to make sure they addressed her suggestions. She was very happy with your work.

I have proof-read the manuscript - as a non-expert in the field - and have identified a few minor issues with readability that I outline below. All easy to fix.

1. There are a few sentences that require some minor adjustments in wording to fix the grammar. I have highlighted these sentences in the attached PDF.

2. Throughout the manuscript, there is inconsistent hyphenation between "particle" and "verb" (e.g., both "particle verb" and "particle-verb" appear). Choose one convention and use it consistently.

3. I am pretty sure PeerJ will want references in parentheses in alphabetical order. Please go through and amend throughout the manuscript. Also, please double-check the formatting for references in the text, and make sure you use PeerJ conventions in terms of the use of "and" or "&" etc. [** PeerJ Staff Note - as long as the references are complete, the formatting will be done during typesetting **]

4. I am not sure that the term "surprisal" on its own is grammatically natural in English; "surprisal account" is perfectly OK. I suggest you identify all sentences that include the word "surprisal" on its own, and revise them to accommodate "surprisal account" in a grammatically appropriate way.

5. Your manuscript is very "dense" in terms of terminology. Please avoid the use of acronyms in the text (e.g., NP, BF) because it makes it just that much harder for the reader to follow your meaning.

6. I am not sure why the distributions throughout the manuscript are called "posteriors". I suggest you just call them distributions (google "posteriors" and you will see why - actually, don't do that! Just look up the meaning of "posteriors").

7. Table 8 and similar: please provide the full versions of acronyms either in the table (you may have room in column 1) or in the title or the notes.

8. When putting things in lists, make sure you use the PeerJ formatting for numbers (e.g., i, ii, iii or (1), (2), (3), etc.).

9. If a number is less than 10, then write it in full (e.g., nine).

Once you have made these changes, I will move the manuscript forward in the PeerJ process towards proof editing.

Congratulations on all your hard work, and I look forward to seeing this in press.

Best wishes,

Genevieve

·

Basic reporting

NA

Experimental design

NA

Validity of the findings

NA

Additional comments

I thank the authors for the very thorough revision of the Introduction! I find the Introduction extremely clear now. The discussion around example (1) in particular is very helpful and accessible (and I believe it will benefit other readers too). I'm looking forward to teaching this paper in my classes!

One tiny comment, on line 119: please clarify what PCFGs stands for.

Version 0.2

· Jul 3, 2020 · Academic Editor

Minor Revisions

Dear Dr Stone,

I sent your revised manuscript to the two reviewers who suggested major changes in the first round of reviews. As you will see, one reviewer is satisfied with those changes, while the second reviewer still requires a number of minor changes - most relating to improving the clarity of the manuscript. To maximise the impact of your work, it is important that your manuscript can be understood by as many people as possible - regardless of background - and hence I strongly suggest you consider all of Reviewer 2's suggested changes. Once this has been done, I will revise the manuscript myself for readability, since it is quite a "dense" piece of work, and we want to make the content as accessible as possible. So, we may have a couple of revisions ahead of us, but if you are able to address all points of concern, I am hopeful for acceptance.

With best wishes,

Genevieve

[# PeerJ Staff Note: Reviewers 1 & 2 declared a potential Conflict of Interest, and the Editor was aware of this when making their decision #]

·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

I thank the authors for their serious and thorough consideration of my comments from the previous round of review. I find the manuscript much improved, clearer, more focused and easier to read and understand.
I still have some remaining relatively minor comments, mostly with regard to the presentation in the Introduction. If these concerns are addressed, the manuscript can be published in PeerJ.
Specific comments:
Abstract:
"Locality effects induced by interference and working memory have been…": this would probably be clearer if it said "induced by interference and working memory load have been…" (as in the previous sentence).
Also - and this is not crucial at all for the paper, I'm just wondering about it – I'm not sure what the authors mean by "effects of working memory load". The way I see it, interference can come about as an effect of high working memory load (more items that are similar to one another); decay can also come about as an effect of high working memory load (not enough resources to keep the item active while keeping other items active too). So I wonder if in saying "working memory load" the authors mean some other effect, possibly displacement, i.e. forgetting some material to make room for other material?

Introduction:
I think the presentation of surprisal and related ideas is still somewhat confusing, mainly because, I believe, there are two distinct, important predictions made by surprisal, but the presentation sort of mixes the two:
1. First, surprisal says "more predictable is easier". This is by no means something that was first claimed by surprisal theory. It has been shown in ERP studies since Kutas & Hillyard (1980) and in reading times since Ehrlich & Rayner (1981) (and maybe before?). Surprisal is just one way to model this observation. This generalization is the basic tenet in the predictions of both theories contrasted in Figure 1 (in both, reading times on the left are shorter than on the right).
So I think in the subsection discussing word predictability, the discussion shouldn't really start with or focus exclusively on surprisal. In fact, on the next page the authors offer an explanation for the effect of predictability based on decay (lines 116-118). The discussion in the subsection discussing word predictability can therefore outline the main observation (predictable is easier) and findings, and then mention the surprisal account, and also the decay account.

2. The second thing, which is more specific to surprisal, is the prediction for antilocality effects. Antilocality is briefly explained in the abstract and then is sort of assumed, but never really presented methodically.
So I would suggest including an explanation of this effect in a dedicated subsection. The order would then be: the "word predictability" section (the predictions of which are identical for the two hypotheses); the "antilocality" section (surprisal); and then the "decay" section.
Also, the two paragraphs on the interaction of predictability with distance (p. 2 line 65 onward) are very confusing. They sort of go back and forth between discussing the interaction of predictability with distance and discussing the interaction of predictability with working memory load without explicitly explaining why or whether the two (distance/working memory load) are interchangeable.
Finally, the bottom line of these two paragraphs is that "facilitation in the reading times of a distant word … may only occur when that word is highly predictable" (this is also stated in the predictions section, and in the conclusion) – but this interaction is not represented in Figure 1, where the effect of distance is identical for more predictable and less predictable words.
One general suggestion: it would perhaps be helpful to have one example sentence in the introduction (perhaps even with a verb-particle dependency) to accompany the discussion, so the different predictions can be exemplified with regard to that sentence, to make them concrete and easier to understand.
Experiment 1 Methods:
I think it would be helpful to state explicitly, around line 307, that the set size manipulation therefore did not result in a difference in the predictability of the particle.
The entropy formula on line 320 should be explained (for reference, a standard formulation is sketched just below).
One last thing – just a thought, no need to do this – I wonder whether in the eyetracking there would be effects on rates of skipping the particle altogether (since we know that more predictable words are skipped more often). The authors say that the particle was not always fixated – I wonder, for future studies, if there could be something interesting there.
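
For reference, the entropy referred to here is presumably the standard Shannon entropy over the cloze completion distribution before the particle; assuming cloze probabilities p_1, ..., p_n for the n attested completions, it would read

H = -\sum_{i=1}^{n} p_i \log_2 p_i

so that higher values indicate greater uncertainty about the upcoming particle. This is an illustrative formulation only, not a quotation of the manuscript's equation on line 320.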

Typos and very minor comments:
Abstract, 8th line from bottom: should be "decay, predictability or their interaction".
Abstract, 5th line from bottom: perhaps instead of "facilitate or hinder reading times", change to "facilitate or hinder processing"?
Line 210: parentheses missing around Lewis and Vasishth, 2005.

Version 0.1 (original submission)

· Jan 14, 2020 · Academic Editor

Major Revisions

Dear Dr/Profs Stone, von der Malsburg, and Vasishth,

Thank you for submitting your article "The effect of decay and lexical uncertainty on processing long-distance dependencies in reading" to PeerJ. At the outset, I must apologise for my delayed response to you. All three reviewers provided their reviews in good time. However, Australia has its summer holidays in the late December/early January period and hence I was on leave when the reviews arrived. I returned to work yesterday, and have prioritised your paper as quickly as possible.

As mentioned, three reviewers have had a close look at your manuscript, and all provide favourable responses. One reviewer suggests accepting the manuscript as is; two reviewers have asked for further clarification about various aspects of the manuscript to improve understanding. Since it is important for both yourselves and the journal to make the content of your research and manuscript accessible to as many readers as possible, I suggest that you address all the comments provided by Reviewers 1 and 2 - either in the manuscript itself (ideally) or in a response to the appropriate reviewer (if the clarification/suggestion cannot be accommodated in the manuscript). Reviewer 3 also makes some useful suggestions for improvement. While the reviewer indicates that these suggestions are not mandatory, for the sake of accessibility, I suggest you address as many of these in the manuscript as possible, and again provide a direct response to the reviewer if this is not possible.

I believe that the suggested clarifications are important in light of my own review of the manuscript. Like the reviewers, I believe the article is well written - particularly for a first submission. However, the research is specialised and quite "dense", and hence would benefit from being made easier to understand for non-specialists. To this end, I would suggest the following (minor) changes in addition to those outlined by the three reviewers:

(1) Predictions section (page 3). I found this section confusing. It seemed to "come out of the blue" - partly due to unclear wording, I believe, and partly because not much background had been provided about the two models that were being pitted against each other. I believe Reviewer 2 had a similar concern, and has offered some specific suggestions for how this might be addressed in the Introduction. In addition to those suggestions, please ensure that the Predictions section is preceded by a clear explanation of the two theories, and that the logic behind each prediction is described as clearly and simply as possible.

(2) Participants section (page 4). Please clarify if "language" disorders include reading disorders.

(3) Materials section (page 4). I was a bit confused by the presentation of the stimuli. Would it be possible to reformat the examples to improve clarity by adding a blank line between the two lines of the German/English stimuli, and also provide the meaning of the text prior to the stimuli? For example, something like:

Small set/short distance (perhaps in bold)

Meaning: With the newly bought rag, she scrubbed the plates in the kitchen to create space for cooking

German: Mit dem ....
English: With the ....

German: Platz zum ....
English: Place for ....

(I hope that makes sense).

(4) I understand why you might decide to outline the history of the development of the stimuli under Materials (pages 4-7). However, the length of this history distracts the reader from the flow of information for Experiment 1. I wonder if this extra information might be included in a supplementary file OR described as a separate experiment prior to Experiments 1 and 2.

(5) At some point, there appeared to be an abrupt switch from the use of the term "predictability" to "entropy". I think I worked out that they were related concepts, but I could not tell if they were the same thing, given this area of research is not my area of expertise. If they are the same thing, it would help the reader to use the term "predictability" throughout the manuscript, since it is a less specialised word. However, if a switch to entropy is required, this needs to be explained clearly at the appropriate point in the narrative.

I hope you find the suggestions of the reviewers, plus my own minor comments, useful in the further development of your work.

Genevieve

·

Basic reporting

Excellent reporting, although minor improvements are possible (see General Comments)

Experimental design

The only thing that remains unclear is if/how spillover was taken into account (see General Comments)

Validity of the findings

No comment

Additional comments

This paper deals with a topic that is very timely and relevant to the study of human sentence processing. The experiments are well designed, the analyses are state of the art, and the writing is very clear. Nevertheless, there are two issues that need to be resolved before I can recommend publication.
(1) Does the verb in every stimulus sentence require a particle? If not, how was the “no particle” option incorporated in the cloze test and data analyses?
(2) There is no mention of how spillover was taken into account, even though this phenomenon is prevalent in reading, in particular self-paced reading. Were reading times on words directly following the particle also considered? If not, could this be why the expected effects were not found?

Minor comments:
- line 47: what does it mean for something to be “anecdotally assumed”?
- When introducing German particle verbs, it would be good to mention that moving the particle to after the object NP is required in German.
- line 115: the Dutch prefix “ver” in “verdelen” is not a particle (i.e., it is not split: “hij deelt het ver” is not possible)
- line 127-128: “self-paced reading and eye tracking modalities” and “reading modalities” -> shouldn’t this be “paradigm” instead of “modality”? In both cases, the modality is written/visual.
- Table 1 shows 95% CI instead of the standard error mentioned on line 221. Also, the caption is not quite accurate because the table presents cloze statistics but not the cloze test results.
- It would be helpful if the goal of the cloze test data analysis were explained before the technical details (starting on line 226)
- line 232-234: “the probability of the target particle was lower … for the interaction” -> for which combination of factor levels was the probability lower?
- The violin plots of Fig. 4 show probability mass for negative values of entropy, even though entropy is by definition non-negative
- line 414: what did the preprocessing of eye-tracking data entail?
- line 417: the citation to R is “R Core Team”, not just “Team”.
- line 448-449: the problem of evaluating multiple dependent measures is not a “limitation of the BF analysis” in particular, is it?
- line 474: “The statistical analysis” should probably be “The outcome of the statistical analysis”

·

Basic reporting

No comment (see section 4 below for all comments).

Experimental design

No comment

Validity of the findings

No comment

Additional comments

The paper reports the results of one self-paced reading experiment and one eyetracking-while-reading experiment, aimed at investigating the effect of decay, and its interaction with predictability, on the processing of verb particles which are dependent on the verb but appear downstream from it. This is an interesting question, the experiments are overall well thought-out, the analysis is rigorous (with data and code provided) and the discussion is careful and responsible.
In my view, the main weakness of the manuscript is the presentation of the hypotheses of the two frameworks, and in particular that of the LV05 model. Personally, the reasons for the predictions of the LV05 model were not clear to me, which made it hard to understand certain aspects of the experiment and interpret the results. Below I elaborate some more on this concern, and offer some other, more minor, comments.

Clarity of opposing hypotheses, particularly the predictions of the LV05 model:

To me, the Introduction (particularly the first page) was very confusing.
If I understand correctly, the experiments set out to test the predictions of Surprisal vs. LV05. First, I think this should be stated clearly and consistently throughout the paper, starting from the abstract (where now only Surprisal is mentioned, in contrast to "other theories") and then explicitly in the Introduction. (And also in the discussions - the Discussion of the SPR results now starts with "we hypothesized" – is "we" = LV05? and the hypothesis of Surprisal is not spelled out at all there; the authors only state that the results were not consistent with it).
Once this is established, I think the authors should take MUCH more time to introduce these frameworks. Currently, both Surprisal and the LV05 model are simply mentioned (along with their relevant predictions for the current research, but with no other explanation). I think the authors should present these theories for readers who are unfamiliar with them. What is Surprisal theory, what are its main tenets? What is the LV05 model? What is it modelling, what are its assumptions?
Then, the authors should explain both frameworks' hypotheses about decay and its interaction with predictability. For Surprisal, line 40 "interference and working memory constraints may negatively impact…": in "working memory constraints", do the authors mean "the limited capacity of working memory"? Why would interference and limited capacity interact with predictability in this way, according to Surprisal? More explanation is needed, along with the relevant results (from German? Hindi? Persian?)
Even more so, for the LV05 model, it was hard to understand the proposed reason for the interaction between predictability and decay. The crucial sentence is this: "If an upcoming lexical item is highly predictable, it can be pre-integrated into the pursued parse, facilitating its retrieval once encountered. However, if there is uncertainty about the lexical identity of a word, this will increase the likelihood that the parser either pursues a parse with a different lexical item to the one yet to be encountered, or makes no lexical prediction at all".
This raised a lot of questions for me:
- Line 149 onwards "in the absence of interference, decay over distance … will make the long condition more sensitive to predictability". Why? I understand that these are the results of a simulation, but can the authors provide an intuition as to why this is so? Do the authors claim that when a lexical item is highly predictable, it is integrated (prior to its occurrence in the input) and it is therefore amenable to decay? If so, it should be stated clearly.
- What's "highly predictable"? Consider for example a verb from the small set size group which takes five possible particles. If one of them appears in 80% of cases, and each of the other four – in 5% of cases, is the most probable one highly predictable, therefore integrated and amenable to decay? If this is the case, shouldn't that also happen for a hypothetical verb in the large set size group which takes 15 particles, with the most probable one appearing in 80% of cases, and each of the other ones in ~1.5% of cases? And what about a "small set" verb with 60%-10%-10%-10%-10% distribution of particles and a "large set" verb with 60%-4%-4%... distribution? Would the most probable particle be integrated?
- What happens when there's no one highly predictable completion? What's the role of decay in these cases? What's the predicted difference between a small set verb with 5 possible completions each appearing in 20% of the time, and a large set verb with 15 completions, one appearing in 20% of the time and each of the others in ~6%?
- The upshot from the last two questions is: shouldn't we look at constraint (cloze probabilities) *at the verb* in order to know what was preactivated/integrated there? Or perhaps at entropy, if it is assumed to modulate preactivation/integration (e.g. integration only happens when there are no strong competitors, i.e. low entropy), but again, *at the verb*? As the authors say in line 263, the study wanted to test "whether the number of potential particles pre-activated at the verb would affect reading times". But to know whether they are preactivated, don't we need cloze data from that point? (even though possibly subsequent material, i.e. the object, can prove our prediction wrong, leading to reanalysis? Since I'm not sure what the assumptions of LV05 are, I don't know what it will predict).
- The manuscript does discuss entropy, but measured right before the particle. In the pre-test, it turns out that there's no difference between the two groups, but this is only discussed in the Results section, before carrying out the alternative analysis. I think it would be much better to acknowledge the potential problem, namely that the two verb types have similar entropies (before the particle), and why this may undermine the verb type manipulation, when the pretest is presented. Otherwise the reader is left very confused.
- This is related to another minor point that was not clear to me: how were the verbs selected? Based on the cloze pretest, namely based on their preference after the object, before the particle? Or based on their particle selection options regardless of the specific object?

Other comments:
Line 36, after "there are accounts modeling the effect of intervening material…": I think it is natural to start the Introduction with the discussion of decay (which now appears in the second paragraph), as these are the more traditional approaches to distance effects. Then, Surprisal and anti-locality can be presented.
The manipulation of decay was introduced by adding a very short constituent – a two-word phrase. Could that be the reason why no effect of decay was found? Does the LV05 predict an effect of decay with such a minimal manipulation? Related to this, line 526, "it would have been difficult to construct longer sentences without reintroducing these factors (interference), which supports the idea that they are the source of processing difficulty": why does it support this idea? I think it only means that it's very hard (perhaps impossible?) to test the influence of decay by itself.
When entropy is first discussed, the concept should be explained – not only with a mathematical formula, but also with the intuition as to what it means.
Minor comments
Line 47 "activation decay is anecdotally assumed…": another relevant reference here is Chow & Zhou (2019), which is a replication of Wagers & Phillips (2014) (though the original authors do not frame their study as investigating decay).
Line 52 "decay is not a useful predictor": perhaps also cite Van Dyke and Johns' (2012) review which argues against a role for decay in sentence processing.
Materials section: Do all the experimental verbs necessarily take particles at all? I assume this is the case, but I think this should be stated explicitly.
Line 217 "24 items that suited the experimental design" – meaning what? That they selected 6 or less, or 15 or more, particles?
Online norming study (line 249 onwards): Why is this pretest necessary? In the experiment, the verb is several words upstream from the particle, so why are reading times of the verb+particle relevant?
Line 382, "a second possibility is that locality and antilocality effects simply cancelled each other out": how is this relevant to the effect of predictability, which is the topic of discussion? I would think that it is relevant to the (lack of) effect of decay, not predictability.
Line 484 "speed up at the verb": this sounded to me like the authors were referring to a speed up at the verb relative to preceding material; it took me some time to understand that it means lower reading times in the large set verbs compared to the small set verbs.
Line 544, "a potential explanation for the lack of speed-up… more preactivated particles may have led to slower reading". I'm not sure I would predict this. I would think activations are not usually viewed as costly. Perhaps the source of increased reading times here is that the verbs are more ambiguous/vague, i.e. have more possible meanings?
Typos etc.
line 34: length > amount
line 166: items > item
line 128: delete second 'also'
line 319: delete second 'the'
line 341 and caption for Figure 5: I initially thought the RTs in the table are reading times for the particle (and wondered why they were so high). The text and caption should say that these are RTs for answering the comprehension questions. Same for line 440 and table 9.
Line 389: the number "1" is missing
Line 457, "the results of the statistical analysis": in all the reading time measures? If so, maybe "analyses"?

References:
Chow, W. Y., & Zhou, Y. (2019). Eye-tracking evidence for active gap-filling regardless of dependency length. Quarterly Journal of Experimental Psychology, 72(6), 1297-1307.
Van Dyke, J. A., & Johns, C. L. (2012). Memory interference as a determinant of language comprehension. Language and Linguistics Compass, 6(4), 193-211.
Wagers, M. W., & Phillips, C. (2014). Going the distance: Memory and control processes in active dependency construction. The Quarterly Journal of Experimental Psychology, 67(7), 1274-1304.

·

Basic reporting

The paper is well-written and clear. References and contextualization are appropriate, and the paper is self-contained. Raw data is available.

Comments:
Supplemental material is referenced (e.g., page 8, last sentence of the Analysis section, states “...can be found in the supplementary material”; similarly page 13, in Data analysis), but I couldn’t find the supplementary material, either at the end of the PDF, in the PeerJ review materials, or in the OSF repository. This was not a big problem for me because all the information is contained in the source code, but this should be fixed (if it was not my oversight).

Experimental design

no comment (everything is satisfactory)

Validity of the findings

The statistical analysis methods are appropriate (e.g., using a full random-effects structure and an appropriately transformed dependent variable). While I did not check all the details in the source code, the code ensures reproducibility of the reported results. Planned and exploratory analyses are cleanly separated. Conclusions are well stated and supported by the analysis results.

Minor comment (change encouraged but not mandatory):
The authors note that surprisal generally predicts that long dependencies reduce processing effort. Whether this actually applies to the verb-particle dependencies studied here is, however, not easy to establish due to data sparsity (as the authors explain convincingly). Intuitions about surprisal can be misleading, and simulations with a probabilistic model are needed to really understand what the predictions are. Given that the stated surprisal predictions are not supported by simulations, I suggest the authors temper their claim about the predictions of surprisal. This is already acknowledged in the "Predictions" section, but I'd suggest the authors also acknowledge it in the conclusion. E.g., in the conclusion: “the surprisal account would predict…” is an overstatement given that there are no simulation results or explicit claims about these particles in the cited surprisal studies to support this. After all, it might well be that the absence of evidence for a distance effect is exactly what surprisal would predict. The authors already acknowledge this possibility in the Conclusion, where they state that previous observed antilocality effects might have been due to stronger lexical constraints created by intervening material than in the stimuli here.
For what it’s worth, I ran five replicates of a high-quality neural language model (trained on 800M words of German Wikipedia text, at about the level of the models in Gulordava et al. 2018 (https://www.aclweb.org/anthology/N18-1108/)) on the stimuli to obtain surprisal estimates on the particle, predicting model surprisal from distance (long/short) and the entropy from the cloze study, with full random effects for items and model replicates. There was a clear effect of entropy (beta=1.2, t=3.1), at most marginal evidence for an effect of distance (beta=-0.17, t=-1.6), and no evidence for an interaction (t=-0.8). (Also, no evidence for any effects was observed when predicting from the discrete 2x2 contrasts of the experiment.) So, while these surprisal models predict an effect of entropy, they predict at most a small effect of distance.
Given that there are no established neural language models for German that have been used in prior psycholinguistic research, there is no need at all to run such models for the purposes of this paper. However, these simulation results suggest being careful with evaluating surprisal predictions and emphasizing that the predictions described here are intuitive, not based on an actual probabilistic model. Alternatively, the authors could also acknowledge that it is simply not clear whether or not surprisal predicts an antilocality effect for these data.
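
To make the kind of analysis described above concrete, here is a minimal sketch of how per-particle surprisal estimates could in principle be obtained from a pretrained German causal language model. This is not the reviewer's actual setup (they used their own replicate models trained on German Wikipedia and then regressed surprisal on distance and cloze entropy with full random effects); the Hugging Face checkpoint name, the helper function, and the toy sentences below are illustrative assumptions only.

# Minimal sketch (not the reviewer's code): estimating the surprisal of a target
# word, e.g. a verb particle, from a pretrained German causal language model.
# The checkpoint "dbmdz/german-gpt2" and the example sentences are assumptions.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "dbmdz/german-gpt2"  # assumed publicly available German GPT-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def surprisal_bits(context: str, target: str) -> float:
    """Surprisal (in bits) of `target` given `context`, summed over its subword tokens."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + target, add_special_tokens=False).input_ids  # leading space for BPE
    total = 0.0
    with torch.no_grad():
        for tid in target_ids:
            logits = model(ids).logits[0, -1]               # next-token logits
            log_probs = torch.log_softmax(logits, dim=-1)
            total += -log_probs[tid].item() / math.log(2)   # nats -> bits
            ids = torch.cat([ids, torch.tensor([[tid]])], dim=1)
    return total

# Toy illustration (invented sentences, not the experimental stimuli):
# compare the particle's surprisal in a shorter vs. a longer condition.
print(surprisal_bits("Sie schrubbte die Teller", "ab"))
print(surprisal_bits("Sie schrubbte die Teller in der Küche", "ab"))

The per-item estimates produced this way could then be entered into a mixed-effects regression on distance and cloze entropy, as the reviewer describes; that modelling step is not shown here.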

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.