Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on June 4th, 2024 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on June 26th, 2024.
  • The first revision was submitted on August 27th, 2024 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on October 1st, 2024 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on October 4th, 2024.

Version 0.3 (accepted)

· Oct 4, 2024 · Academic Editor

Accept

Dear author,

Thank you for resubmitting your article after incorporating the comments. I am pleased to inform you that your manuscript is now acceptable, and we are therefore recommending it for publication. Thank you.

[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a 'PeerJ Computer Science' Section Editor covering this Section #]

Version 0.2

· Sep 9, 2024 · Academic Editor

Minor Revisions

Dear authors,

Thank you for re-submitting your manuscript with the technical comments incorporated. I am pleased to inform you that the reviewers are now satisfied with the technical contribution of your manuscript. To further improve the quality of the paper, we suggest that you add future research directions and polish the language of the paper in this revision round, as this is a good time to do so.

Thank you

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services if you wish - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Your revision deadline is always extended while you undergo language editing. #]

Reviewer 2 ·

Basic reporting

I am satisfied with the revisions and contributions; however, I recommend checking for English/grammatical improvements, along with the figures and tables (if any).

Experimental design

Satisfied.

Validity of the findings

Satisfied.

Version 0.1 (original submission)

· Jun 26, 2024 · Academic Editor

Major Revisions

Based on the input received from the reviewers, I would like to inform you that your manuscript is not acceptable in its current condition. We therefore request that you update your manuscript in light of the experts' comments.

Please also justify the novelty and validity of your approach. Moreover, you need to have your manuscript professionally proofread by a native speaker.

Please resubmit after comprehensive updates and a detailed response.

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services if you wish - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Your revision deadline is always extended while you undergo language editing. #]

Reviewer 1 ·

Basic reporting

This paper proposes a novel image reconstruction model based on a non-local feature fusion network. The proposed method is potentially very interesting, but in my opinion it would benefit from a clearer presentation of the methodology, a more detailed experimental setup, and a thorough quantitative analysis of the results. Please act in this direction.

Experimental design

The literature review should include a deeper discussion of existing methods in compressed sensing and image reconstruction, and should highlight the gaps in current research that this paper aims to fill.

The algorithm for the co-reconstruction group should be described step-by-step, possibly with pseudocode, to enhance reproducibility.

Explain how the non-local feature fusion network differs from traditional CS models. Provide a deeper insight into its architecture and the innovation it brings.

Detail the preprocessing steps applied to the training datasets. Explain any data augmentation techniques used and their impact on the training process.

Validity of the findings

The findings are interesting and genuinely new, but the suggestions above need to be implemented in the paper.

Additional comments

None

Reviewer 2 ·

Basic reporting

The manuscript presents a novel and promising approach to visual emotion recognition in the context of intelligent user-interior interaction design. The innovative use of a deep learning-based multimodal weighting network model and the incorporation of a self-attention mechanism are commendable. However, the manuscript would benefit from a more detailed literature review, a thorough presentation of the experimental setup and data, and a deeper discussion of the implications and limitations of the findings. Addressing these aspects will enhance the manuscript's clarity, rigor, and overall impact.

1. The study addresses a significant challenge in the field of intelligent user-interior interaction by improving visual emotion recognition through a novel deep learning-based multimodal weighting network model. The introduction of a self-attention mechanism within a convolutional neural network (CNN) and the development of a weight network classifier to optimize weights during training are innovative contributions. The importance of this research is evident as it aims to enhance human-centric and intelligent indoor interaction design, a crucial aspect of modern interior interaction design.

Experimental design

2. The research methodology appears well-conceived and scientifically sound. The use of a convolutional attention module employing a self-attention mechanism within a CNN is appropriate for handling extensive input data and enhancing model comprehension. The integration of a multimodal weighting network model to optimize weights during training and the subsequent development of a weight network classifier are logical and methodologically robust steps. However, more details on the training process, dataset, and specific implementation of the attention mechanism would enhance the understanding and reproducibility of the study.

3. The manuscript reports an impressive correctness rate of 77.057% and an accuracy rate of 74.75% in visual emotion recognition. While these figures are promising, the manuscript would benefit from a more detailed presentation of the experimental setup, including the size and characteristics of the dataset used, the preprocessing steps taken, and any potential biases in the data. Ensuring that the data is comprehensive and representative of diverse visual emotions is crucial for validating the findings.

Validity of the findings

4. The results are presented clearly, and the discussion logically follows from the experimental outcomes. The comparative analysis against existing models effectively demonstrates the superiority of the proposed multimodal weight network model. However, the discussion could be enriched by exploring the implications of the findings in practical applications of interior interaction design and by providing a deeper analysis of potential limitations and areas for future research.

5. The manuscript briefly mentions the limitations of current visual emotion recognition methods that rely solely on singular features. However, a more comprehensive literature review is necessary to contextualize the proposed model within the broader field of visual emotion recognition. This should include a discussion of related works, recent advancements, and the specific gaps that the current study aims to fill.

Additional comments

6. The manuscript does not mention any ethical considerations. If the study involves human subjects or data derived from human participants, it is essential to include a statement on ethical approval and consent. Additionally, addressing potential biases in the data and ensuring the privacy and confidentiality of the participants' information are crucial ethical aspects.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.