Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on February 27th, 2025 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on May 15th, 2025.
  • The first revision was submitted on September 18th, 2025 and was reviewed by 3 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 27th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

The authors have addressed the reviewers' comments, and the article is now ready for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]

Reviewer 1 ·

Basic reporting

The manuscript has improved significantly: the authors clarified the points that needed attention and improved the overall structure and logical flow.

Experimental design

While the dataset is still rather small, the level of detail provided supports reproducibility and points towards an interesting method of detecting piglet crushing.

Validity of the findings

While the dataset is still rather small, the level of detail provided supports reproducibility and points towards an interesting method of detecting piglet crushing.

Reviewer 2 ·

Basic reporting

The overall structure and language of the article have been significantly improved. The introduction and abstract have been simplified and refined to focus on information relevant to the article's objectives, making the paper much easier to read and understand.

Experimental design

The design limitations (e.g., low diversity in the training data) are now more clearly stated in the Materials and Methods section and addressed in the Discussion.

Validity of the findings

The article is sound and the conclusions are well supported.


Reviewer 3 ·

Basic reporting

The writing has improved significantly throughout the revision. It was a pleasure to see the effort the authors invested in the manuscript.

Experimental design

The study has a proper design, although the number of annotated images is still quite low. Nevertheless, the authors' highly pragmatic approach is well suited to making the best of the dataset.

Validity of the findings

Validity is good, and the authors did their best, also in the revision, to demonstrate the robustness and limitations of their data via sensitivity and specificity. All of these issues are properly addressed in the discussion.


Version 0.1 (original submission)

Academic Editor

Major Revisions

Dear authors,

All three reviewers acknowledge the potential behind your work.
However, the paper should be thoroughly revised to improve the presentation, properly refer to the relevant literature, and better detail and justify the problem considered and the experimental setting.

**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Reviewer 1 ·

Basic reporting

Generally speaking, the paper lacks the structure and details that would allow for FAIR re-creation of the research methodology. The authors need to consider adding the physiological background of crushing (why, how often, potential individual variation) and using it as the backbone of the analytical approach to predict the onset of the event. That will also dictate the choice of the appropriate time window to bring real value to the farmer.

General comments:
L 37 "Infectious disease" would be more advisable here, if that is what the authors mean
L 38 What do authors mean by external sources?
L 41 Among… this sentence more or less repeats the previous one.
L 45 Please provide references for this statement: “crushing sows have more…”
L 52 “Avoiding contraction…” The sentence might be a bit unclear
L 59 IoT system? Solution? Something else?
L 59 – 67 If the authors focus on computer vision, an extensive description of the sound-based method could be misleading. A comparison with alternative solutions (sound-based, accelerometer-based) could be beneficial, but it needs to be written differently.
The introduction needs serious rework to be clearer and more consistent. The main aim and hypothesis should also be extended.
Generally, the manuscript will benefit from proofreading to improve fluidity, grammar, and word choice.

Experimental design

Warning solution: What is the potential for such a solution to be integrated into on-farm SOPs (Standard Operating Procedures)? Does it need to work in real or near-real time? What is the window of prediction for the solution to bring the most value to farmers and minimise production loss? This could partially be lifted in the Intro section.

Authors need to provide information about the number of animals, housing and management conditions, ethical permits, camera setup, etc.

How was the data collected, framerate, etc?

What is the number of frames used for annotation? Out of a total number of videos recorded?

2.2.1 does not provide enough details about the dataset – total number of images, how they were selected, etc.

L 99 Avoid using "sufficient" or similar words for measurable management-related parameters (e.g. lighting, ventilation, etc.). Either specify the values or write "variable natural lighting".

Table 1 The ethogram needs a slight rework to be clear and consistent, showing how each posture was described so it could be transferred into the annotation process. For example, it is not easy to understand what ST, SI, etc., mean; readers should not have to guess. The table should read without additional text explanation (which comes later in this case). "Sternum" means chest in Latin; check the terminology, including the Latin names used for indicating sides/directions.
L 104 How was the labelling performed, the tools used, the format, etc?

2.2.2 More specifics on the model are needed (even if brief). Is there any specific reason why all versions of YOLOv8 were used? In such a case, a table comparing the models within the v8 iteration should be provided.

Hardware used for training?

Validity of the findings

Since the M&M section lacks clarity and structure, it is not easy to set the results in a proper context.

Reviewer 2 ·

Basic reporting

The English language needs review throughout the whole manuscript. The abstract and introduction are very difficult to understand - the paper cannot be published until this major issue is fixed.
Beyond grammar, the introduction shows a relatively poor understanding by the authors of the issue of piglet crushing. Biosecurity, sickness and infections are mentioned throughout the introduction and discussion, though they have little to do with crushing. In contrast, all the important aspects of what crushing is and what behavioural changes are involved are covered in a few lines with no references.
The more technical part of the introduction (line 55 onwards) is very hard to understand given the grammar.

Experimental design

The aim of the paper is extremely relevant and the methods are extensively discussed, but the English again hampers understanding.

The results are interesting but there are flaws in the design and interpretation.
First, it should be clarified how many sows were actually used for creating the training dataset, and how many days per sow were used. The amount of data included in the training is relatively low, so the reliability of the model is a concern.
How the videos were categorized as 'crushing day' or not is also quite confusing. It all seems to depend on the farmer's report, but that does not seem very precise; if crushing happened at night and the farmer sees it in the morning, is that day still a crushing day? There is also a large difference between crushing that happens early in the morning and late in the evening. It would have been much more precise to look for crushing on the videos and label them based on the specific hour of crushing.

Validity of the findings

The aim of the study is again very relevant and novel, but the robustness of the methods is limited and the outcomes are therefore quite limited.


Reviewer 3 ·

Basic reporting

The overall reporting of the study is quite good and written in a good language style. One shortcoming is that some detailed information should be added to the methods; these specific points are listed under "Additional comments". The overall research question is interesting and important: crushing of piglets is a severe problem in pig farming, and warning systems such as the one developed in the present study are very interesting. The video-based monitoring approach is also appealing, as it is easily applicable to small farms as well.

Experimental design

The study design is appropriate and well suited to examining the research question of the study. A point I raised under "Additional comments" is the relatively low number of labelled pictures used for training. Perhaps the authors could explain why this number is sufficient or, if they cannot (which I do not assume), increase that number for the revision.

Validity of the findings

The validity of the findings is good, also based on the proper methodological approach used. However, as stated for the experimental design, the number of labelled images included may still be a bit too low.

Additional comments

- Lines 17-18: This sentence is a bit unclear to me; how do you mean that disease outbreaks prevent crushing? Because the piglets have already died from the disease and not from crushing? Please clarify this sentence.
- Line 20: It should read “YOLOv8” instead of “YOLOv8n”
- Line 22: please mention the meaning of the abbreviation “mAP50”
- Lines 24-26: Are the numbers of postural changes provided means or medians? Please clarify.
- Line 27: please mention the meaning of the abbreviation “SVM”
- Line 35: Why is this only true for “in the country”?
- Lines 44-45: This is an important statement especially with regard to the aim of the paper, thus please provide a reference for this.
- Lines 55-56: Isn’t this part of the research question of the present paper? I suggest rephrasing this sentence or moving it towards the end of the introduction.
- Lines 66-68: I agree in general with that sentence; however, could you maybe specify the minimal requirements for these simple surveillance cameras to be used for the suggested purpose? I.e., with respect to the device itself and also with respect to the environmental settings (light, computer or internet required, etc.)?
- Line 85: On several occasions you refer to “smallholder farmers”; could you provide typical dimensions of the livestock holdings of such farmers?
- Line 98: Could you please indicate how many sows were considered in your study and of which genetics?
- Line 99: How many days were used for analysis?
- Table 1 legend: Please make the legend a bit more informative; were only non-locomotory postures coded?
- Line 100: I do not fully understand what you intend to say with this sentence; why should posture information only be available during the day?
- Lines 105-106: How many images were labelled in total and how many per posture?
- Lines 120-121: Were these 224 images different between the datasets?
- Line 177: The number of 280 labelled images seems quite low; is there a threshold for YOLOv8 networks with respect to the minimal number of labelled pictures for training?
- Lines 177-178: Thus, more than one posture can be coded on one picture? How is this possible, as in your example pictures only a single sow is visible, and one sow can only have one posture at a time? I presume that I just do not get it right; maybe you can describe the details in a bit more depth.
- Table 2: What size categories do the different sizes in the table refer to? What are the definitions and thresholds for the different sizes?
- Table 3: Please provide the explanations of the Posture abbreviations also in the table legend.
- Figures 5 and 6: Please provide proper x and y axes with respective labels. Number the panels in each figure and provide a heading for each panel in the figure legends rather than in the figures themselves.
- Lines 261-262: A good critical summary, but the conclusion derived from it may also be that other parameters should be included or used for crushing warnings.


All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.