All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear author,
Thank you for resubmitting your article after incorporating the reviewers' comments. I am pleased to inform you that your manuscript is now acceptable, and we are therefore recommending it for publication. Thank you.
[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a 'PeerJ Computer Science' Section Editor covering this Section #]
Dear authors,
Thank you for re-submitting your manuscript with the technical comments incorporated. I am pleased to inform you that the reviewers are now satisfied with the technical contribution of your manuscript. To further improve its quality, we suggest that you add future research directions and polish the language of the paper in this second round, as now is a good time to do so.
Thank you
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services if you wish - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Your revision deadline is always extended while you undergo language editing. #]
I am satisfied with the revisions and contributions; however, I recommend checking for English/grammatical improvements, along with the figures and tables (if any).
Satisfied.
Satisfied.
Based on the input received from the reviewers, I would like to inform you that your manuscript is not acceptable in its current form. We therefore request that you revise your manuscript in light of the experts' comments.
Please also justify the novelty and validity of your approach. Moreover, please have your manuscript professionally proofread by a native English speaker.
Please resubmit after comprehensive updates and a detailed response.
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services if you wish - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Your revision deadline is always extended while you undergo language editing. #]
This paper proposes a novel image reconstruction model based on a non-local feature fusion network. The proposed method is potentially interesting, but in my opinion it would benefit from a clearer presentation of the methodology, a more detailed experimental setup, and a thorough quantitative analysis of the results. Please revise in this direction.
The literature review should include a deep discussion of existing methods in compressed sensing and image reconstruction. Highlight gaps in current research that this paper aims to fill.
The algorithm for the co-reconstruction group should be described step-by-step, possibly with pseudocode, to enhance reproducibility.
Explain how the non-local feature fusion network differs from traditional CS models. Provide deeper insight into its architecture and the innovation it brings; an illustrative sketch of a generic non-local fusion block is given after these comments.
Detail the preprocessing steps applied to the training datasets. Explain any data augmentation techniques used and their impact on the training process.
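To make the requests above concrete, here is a minimal PyTorch sketch of a generic non-local (self-attention) feature fusion block. This is an illustration only: the class name NonLocalFusionBlock, the theta/phi/g projections, and the layer sizes are assumptions for exposition, not the authors' actual co-reconstruction group, whose step-by-step algorithm the paper should still spell out. Unlike a traditional block-wise CS reconstruction that processes local patches independently, such a block computes an affinity between every pair of spatial positions and fuses features across the whole image.

```python
import torch
import torch.nn as nn

class NonLocalFusionBlock(nn.Module):
    """Generic non-local (self-attention) fusion over all spatial positions.

    Illustrative sketch only: names and sizes are assumed, not taken from the paper.
    """
    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # query projection
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # key projection
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # value projection
        self.out = nn.Conv2d(inner, channels, kernel_size=1)    # restore channel count

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, h*w, inner)
        k = self.phi(x).flatten(2)                      # (b, inner, h*w)
        v = self.g(x).flatten(2).transpose(1, 2)        # (b, h*w, inner)
        attn = torch.softmax(q @ k, dim=-1)             # affinity between every pair of positions
        fused = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(fused)                      # residual: local features are kept

# Example: fuse a 64-channel feature map produced by an earlier convolutional stage.
features = torch.randn(1, 64, 32, 32)
fused = NonLocalFusionBlock(64)(features)               # same shape as the input
```

A pseudocode or code listing of this kind, specialised to the actual co-reconstruction group, would address the reproducibility concern raised above.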
The findings are interesting and genuinely new, but the revisions requested above have to be implemented in the paper.
None
The manuscript presents a novel and promising approach to visual emotion recognition in the context of intelligent user-interior interaction design. The innovative use of a deep learning-based multimodal weighting network model and the incorporation of a self-attention mechanism are commendable. However, the manuscript would benefit from a more detailed literature review, a thorough presentation of the experimental setup and data, and a deeper discussion of the implications and limitations of the findings. Addressing these aspects will enhance the manuscript's clarity, rigor, and overall impact.
1. The study addresses a significant challenge in the field of intelligent user-interior interaction by improving visual emotion recognition through a novel deep learning-based multimodal weighting network model. The introduction of a self-attention mechanism within a convolutional neural network (CNN) and the development of a weight network classifier to optimize weights during training are innovative contributions. The importance of this research is evident as it aims to enhance human-centric and intelligent indoor interaction design, a crucial aspect of modern interior interaction design.
2. The research methodology appears well-conceived and scientifically sound. The use of a convolutional attention module employing a self-attention mechanism within a CNN is appropriate for handling extensive input data and enhancing model comprehension. The integration of a multimodal weighting network model to optimize weights during training and the subsequent development of a weight network classifier are logical and methodologically robust steps. However, more details on the training process, dataset, and specific implementation of the attention mechanism would enhance the understanding and reproducibility of the study. (A minimal illustrative sketch of this kind of convolutional self-attention module and weight network classifier is given after these points.)
3. The manuscript reports an impressive correctness rate of 77.057% and an accuracy rate of 74.75% in visual emotion recognition. While these figures are promising, the manuscript would benefit from a more detailed presentation of the experimental setup, including the size and characteristics of the dataset used, the preprocessing steps taken, and any potential biases in the data. Ensuring that the data is comprehensive and representative of diverse visual emotions is crucial for validating the findings.
4. The results are presented clearly, and the discussion logically follows from the experimental outcomes. The comparative analysis against existing models effectively demonstrates the superiority of the proposed multimodal weight network model. However, the discussion could be enriched by exploring the implications of the findings in practical applications of interior interaction design and by providing a deeper analysis of potential limitations and areas for future research.
5. The manuscript briefly mentions the limitations of current visual emotion recognition methods that rely solely on singular features. However, a more comprehensive literature review is necessary to contextualize the proposed model within the broader field of visual emotion recognition. This should include a discussion of related works, recent advancements, and the specific gaps that the current study aims to fill.
6. The manuscript does not mention any ethical considerations. If the study involves human subjects or data derived from human participants, it is essential to include a statement on ethical approval and consent. Additionally, addressing potential biases in the data and ensuring the privacy and confidentiality of the participants' information are crucial ethical aspects.
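As a concrete reference point for the reproducibility concerns in points 2 and 3, the sketch below shows one common way to place a self-attention module inside a CNN and to combine per-modality features with a small weight network whose fusion weights are learned during training. The class and parameter names (ConvSelfAttention, WeightedFusionClassifier, modality_logits) and the softmax weighting scheme are assumptions for illustration; the manuscript's actual architecture may differ and should be specified by the authors.

```python
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    """Self-attention over the spatial positions of a CNN feature map (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)     # (b, h*w, c//8)
        k = self.key(x).flatten(2)                        # (b, c//8, h*w)
        v = self.value(x).flatten(2).transpose(1, 2)      # (b, h*w, c)
        attn = torch.softmax(q @ k, dim=-1)               # attention over all positions
        return x + (attn @ v).transpose(1, 2).reshape(b, c, h, w)

class WeightedFusionClassifier(nn.Module):
    """Learns softmax-normalised weights that blend per-modality features before classification."""
    def __init__(self, feat_dim, num_modalities, num_classes):
        super().__init__()
        self.modality_logits = nn.Parameter(torch.zeros(num_modalities))  # learned fusion weights
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, features):                          # features: (b, num_modalities, feat_dim)
        weights = torch.softmax(self.modality_logits, dim=0)
        fused = (features * weights.view(1, -1, 1)).sum(dim=1)
        return self.classifier(fused)

# Example: classify emotions from three modality feature vectors of dimension 256.
per_modality = torch.randn(4, 3, 256)
logits = WeightedFusionClassifier(256, num_modalities=3, num_classes=7)(per_modality)
```

Documenting the actual module at this level of detail, together with the dataset and training settings, would substantially improve reproducibility.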
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.