Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on April 23rd, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on August 12th, 2025.
  • The first revision was submitted on September 25th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on November 11th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

Reviewer 2 has not re-reviewed this work. I have carefully checked the revisions and the authors' responses against the reviewers' reports and can confirm that the authors have addressed all the reviewers' suggestions.

Reviewer 1 mentioned a lack of novelty and suggested a more advanced methodology, e.g. a multi-modal approach. However, novelty is not a requirement of the journal; the work must instead show a contribution to the existing field.
The authors' revisions have acknowledged this as a limitation. I commend the authors for ensuring reproducibility by implementing this work in Jupyter notebooks and providing provenance via Zenodo. This manuscript is now ready for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Shawn Gomez, a PeerJ Section Editor covering this Section #]

Reviewer 1

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

no comment

Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.

Reviewer 1

Basic reporting

The paper is overall clearly written with a logical flow. The introduction/background provides sufficient context on the importance of predicting IV fluid utilization in EDs and on the potential of integrating unstructured patient narratives with structured clinical data, and it adequately states the motivation for the study. The manuscript is well structured. However, the methodology is not presented clearly: there is no overall pipeline/framework illustration figure, and the method design, though sound, has limited novelty (multi-modality encoding followed by late fusion).

Experimental design

As noted under basic reporting, there is no overall pipeline/framework illustration figure; including one in the manuscript is highly recommended. The provided code implementation is fine, and the evaluation methods, assessment metrics (AUC, accuracy, sensitivity, specificity, precision), and model selection methods (Logistic Regression, Gradient Boosting Classifier, late fusion for the integrated models, 5-fold cross-validation) are adequately described. However, no adequate baseline methods appear to be compared, which limits the paper's claims of validity/superiority relative to more advanced multimodal learning on structured and unstructured EHR data.
E.g., an ICML paper: https://dl.acm.org/doi/10.5555/3618408.3620139 Improving medical predictions by irregular multimodal electronic health records modeling
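
For illustration only, the late-fusion evaluation described above might be sketched as follows. This is a minimal sketch assuming scikit-learn-style NumPy arrays; the names X_struct, X_text, and y are hypothetical placeholders, not the authors' actual variables.

```python
# Minimal sketch of a late-fusion evaluation with 5-fold CV and AUC.
# Assumes X_struct (structured clinical features), X_text (GPT-2 narrative
# embeddings), and y (binary IV-fluid label) are NumPy arrays of equal length.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def late_fusion_cv_auc(X_struct, X_text, y, n_splits=5, seed=0):
    """Train one model per modality, average their probabilities, report AUC."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train, test in cv.split(X_struct, y):
        m_struct = GradientBoostingClassifier().fit(X_struct[train], y[train])
        m_text = LogisticRegression(max_iter=1000).fit(X_text[train], y[train])
        # Late fusion: combine the unimodal probability outputs after training.
        p = 0.5 * (m_struct.predict_proba(X_struct[test])[:, 1]
                   + m_text.predict_proba(X_text[test])[:, 1])
        aucs.append(roc_auc_score(y[test], p))
    return float(np.mean(aucs)), float(np.std(aucs))
```

Unimodal baselines would simply use one of the two probability terms on its own.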

**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.

Validity of the findings

The conclusions drawn are well stated and appear to be directly supported by the presented results. The primary conclusion, that integrated models (structured data + GPT-2 embeddings from narratives) outperform models using only structured or only unstructured data, is evident from the reported AUC values and other metrics. However, as suggested above, the paper could be considerably stronger with a more advanced methodological design that tackles multi-modal learning challenges (e.g., sparsity, irregularity, unpaired data) and with more methods to compare against.
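
For context on the text side of such an integrated model, fixed-length GPT-2 embeddings can be obtained roughly as follows. This is an illustrative sketch using the Hugging Face transformers library; the mean pooling and the variable names are assumptions, not necessarily the authors' exact procedure.

```python
# Illustrative extraction of fixed-length GPT-2 embeddings from free-text
# narratives (mean-pooled last hidden states); the pooling choice is assumed.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 has no built-in pad token
model = GPT2Model.from_pretrained("gpt2").eval()

@torch.no_grad()
def embed_narratives(texts, max_length=256):
    """Return one 768-dimensional vector per narrative."""
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=max_length, return_tensors="pt")
    hidden = model(**enc).last_hidden_state        # (batch, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)     # exclude padding tokens
    return ((hidden * mask).sum(dim=1) / mask.sum(dim=1)).numpy()
```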

Reviewer 2

Basic reporting

The article focuses on the advantages of using GPT-2 to improve the results of the supervised analysis conducted.

However, it does not provide an adequate comparison with similar studies in the literature.

In particular, a comparison should be made with similar studies that address the issue of textual data, providing the reader with an overview of the possible solutions. Indeed, there are several methods based on generative artificial intelligence that effectively tackle this type of problem. The authors should therefore provide the necessary tools to help readers fully understand the advantages of using generative AI.

Experimental design

The methods are described, but more detail should be provided regarding the rationale behind the choice of machine learning techniques used, as well as the specific advantages offered by the use of generative artificial intelligence.

Validity of the findings

To better justify their methodological choices and assess their level of innovation, the authors should also discuss similar studies in greater detail and present a comparative analysis of the results obtained using GPT-2 versus more established approaches, such as CountVectorizer, Word2Vec, etc. This would allow for a clearer understanding of the actual benefits brought by the use of GPT-2.

The authors should at least compare their results with those obtained without using GPT-2.
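
As a sketch of the requested comparison, assuming the narratives are available as a list of strings, X_gpt2 holds precomputed GPT-2 embeddings, and y holds the labels (all names here are hypothetical):

```python
# Minimal sketch comparing a bag-of-words baseline against GPT-2 embeddings
# under the same classifier and cross-validation; variable names are assumed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def compare_text_representations(narratives, X_gpt2, y, cv=5):
    """Report mean 5-fold AUC for CountVectorizer features vs. GPT-2 embeddings."""
    bow = make_pipeline(CountVectorizer(min_df=2),
                        LogisticRegression(max_iter=1000))
    auc_bow = cross_val_score(bow, narratives, y, cv=cv, scoring="roc_auc").mean()
    auc_gpt2 = cross_val_score(LogisticRegression(max_iter=1000),
                               X_gpt2, y, cv=cv, scoring="roc_auc").mean()
    return {"CountVectorizer": auc_bow, "GPT-2 embeddings": auc_gpt2}
```

A Word2Vec baseline (e.g., averaged gensim word vectors) could be added in the same way.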

Additional comments

Table 1 could be converted into an equivalent graphical representation to enhance clarity and visualize the data structure.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.