Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on February 14th, 2024 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on April 1st, 2024.
  • The first revision was submitted on April 17th, 2024 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on May 20th, 2024 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on May 21st, 2024.

Version 0.3 (accepted)

· May 21, 2024 · Academic Editor

Accept

Thank you for addressing the reviewers' comments and suggestions. I believe that the manuscript is ready for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Section Editor covering this Section #]

Version 0.2

· May 19, 2024 · Academic Editor

Minor Revisions

The reviewers suggested a couple of minor additions/edits that could improve the manuscript. They are not critical to the acceptance of the manuscript, but I'd encourage the authors to incorporate the suggestions, as they would improve the readability of the manuscript and clarify some of the results.

Reviewer 1 ·

Basic reporting

a. The abstract and introduction are very clear.

b. The literature review and references are comprehensive.

c. No issues with the structure of the paper.

d. The figures in the method section now have better flow.

Experimental design

a. The research is within the scope of the journal.

b. The research question is well-defined in the abstract and introduction.

c. The proposed method is rigorously compared against many other methods. However, it is difficult to understand how they specifically differ from the proposed method without a brief introduction to them.

d. The experimental methodology is simple, as it relies on the DocRED benchmark.

Validity of the findings

a. The claim ‘all sentences are not required for document relation extraction’ in the abstract is supported in the ablation study.

b. The claim that the model improves inter-sentence relation extraction is strongly supported by Figure 4.

c. Overall, ECRG performs strongly compared to other methods.

Additional comments

Lines 273 - 277 can still be improved in the final version to help readers understand how the distance between entities is shortened. The sentences are too long. One of the sentences starts with 'And', further increasing the length of the previous sentence. I would highly recommend using chat-based LLMs to edit and rephrase paragraphs and sentences.

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

The proposed method performs worse than ATLOP on DocRED but outperforms ATLOP on Re-DocRED. An explanation would be beneficial.

Version 0.1 (original submission)

· Apr 1, 2024 · Academic Editor

Major Revisions

As the validity of the study is the most critical criterion, please address all comments regarding validity, especially those about missing baselines and proper support for all claims made in the manuscript. Although one of the reviewers recommended rejection, I think the comments are potentially addressable, and I therefore recommend a major revision.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** PeerJ staff have identified that the English language needs to be improved. When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at [email protected] for pricing (be sure to provide your manuscript number and title). – PeerJ Staff

Reviewer 1 ·

Basic reporting

1. The abstract and introduction are clear, but the method section contains incomplete, grammatically incorrect, or ambiguous sentences. Examples include the sentences starting at line 168 (“We can…”), line 171 (“Capture more…”), line 208 (“By connecting…”), line 219 (“Connecting the same…”), and line 250 (“Shorten the distance between the two…”). There is also a typo in line 346: "reasonging graph."

2. The literature review and references are comprehensive, but descriptions of some relevant methods, such as graph-based methods, are missing. Line 280 introduces graph-based models without describing them. Also, line 190 introduces the methods "HESRE" and "RE-EA-PATH" without explaining the acronyms.

3. The structure of the paper is sound.

4. The method section contains unclear figures presented in a nonlinear sequence. For example, Figure 2's groups are unclear and may be related to the rules in Figure 3, suggesting that the figures' order should be flipped. Additionally, Figure 2's legend and caption need improvement. Specifically, the legend in Figure 2 differentiates ‘mention’ and ‘entity’ nodes, but the ‘mention’ node is the same color as the node for “National Statuary Hall Collection”.

Experimental design

1. The research falls within the journal's scope.

2. The research question is well-defined in the abstract and introduction.

3. The proposed method is rigorously compared against other methods, but their specific differences from the proposed method are difficult to understand without a brief introduction to them.

4. The experimental methodology is straightforward, relying on the DocRED benchmark.

Validity of the findings

1. The main claim in the abstract is that not all sentences are required for document relation extraction. The ablation study description is unclear, so it is uncertain whether the 0.55-point performance drop for "w/o center-sentence rule" in Figure 4 confirms this. If it does, readers should be explicitly directed to that specific ablation as evidence of the paper's main claim.

2. The second main claim is that the model specifically improves inter-sentence relation extraction. It would be valuable to see how the model performs on DocRED relation pairs spanning two sentences, as such an analysis would further validate the results and performance improvement.

3. It's challenging to assess the novelty, as the center-sentence rule and evidence graph construction are unclear to me. Even for the entity-level graph, except for entity aggregation, the description is unclear. Specifically, both the motivation for shortening the distance between entities and how the proposed method achieves this are unclear.

Additional comments

1. More motivation for Document Relation Extraction in the introduction would be beneficial.

2. The paper exhibits rigor in its experimental design and ablation study.

Reviewer 2 ·

Basic reporting

In lines 190-192, HESRE and RE-EA-PATH appear for the first time in this paper, but the paper does not explain which work each represents. The "DRE-EA-path" that appears in Table 1 seems to be a typo; it should be "RE-EA-path".

Experimental design

no comment.

Validity of the findings

1. There is a lack of comparison with important and highly relevant baselines. This paper has two contributions: one is the evidence-based method for finding the context related to two entities, and the other is the use of graph networks for inference. For the former, ATLOP [2] uses an attention mechanism to filter information related to the two target entities and achieves an F1 of 59.05% on the DocRED test set using BERT-Base as the backbone. For graph-network inference, GAIN [1] adopted an inference pattern very similar to the method proposed in this paper, achieving an F1 of 61.24% on DocRED. This paper needs to clarify the differences from these previous works and explain why they are not compared against.

2. The contribution is overstated. This paper asserts that the proposed method enhances entity RE performance compared to existing work. However, numerous existing works outperform the proposed method on DocRED, such as GAIN [1].

Additional comments

no comment

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.