All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Both reviewers have confirmed that the authors have addressed their comments.
[# PeerJ Staff Note - this decision was reviewed and approved by Mehmet Cunkas, a PeerJ Section Editor covering this Section #]
'no comment'
'no comment'
'no comment'
The paper addresses an important topic with clarity and originality, making a valuable contribution to the field. It is well prepared and suitable for publication.
This review article investigates how best to design artistic brand patterns by combining theories of visual perception with multi-model decision-making approaches. It emphasizes the influence that visual perception has on consumer recall, affect, and brand association. The article synthesizes studies from aesthetic design principles, cognitive psychology, and computational decision-making methods to give an orderly account of branding strategies. In addition, it examines models that strike a balance between creativity, functionality, and consumer preference. By uniting perceptual knowledge with decision-making frameworks, the review offers an extensive basis for creating artistic brand patterns that may increase visual aesthetic quality and improve market competitiveness.
The study is experimentally designed to unite visual perception analysis with multi-model decision-making methods. Colour psychology, eye tracking, and pattern-matching techniques are used to gauge consumer interaction with artistic brand designs. Quantitative measures such as decision matrices and preference modelling are combined with qualitative perspectives from user surveys and focus groups. This framework allows systematic comparison of aesthetic appeal, cognitive effect, and consumer preference, since several alternative designs can be tested under controlled conditions to identify the best brand pattern.
The validity of the results rests on the hybridization of visual perception laws with multi-model decision-making standards, which grounds both the scientific and the practical applications. Cross-validation against empirical studies, consumer feedback, and comparable models makes the findings more reliable. By combining psychological understanding with computational assessment, the results can be transferred to many other branding situations, suggesting that the proposed methods will deliver consistent and precise outcomes when used to optimize artistic brand pattern designs.
This review draws attention to the increasing importance of artistic brand pattern design for improving consumer interaction and market identity. It balances creativity and functionality by bringing together visual perception theories and multi-model decision-making solutions. The research draws on interdisciplinary input from psychology, design, and the computational sciences. Future studies could extend this work with AI-based models, analyses of cultural influence, and sustainable branding to enhance the feasibility of design optimization.
Please see the detailed comments from both reviewers. The reviewers noted the lack of clarity on feature importance, the missing theoretical justification for using GRNN, and the absence of comparisons with advanced or learnable representations such as CNNs. While the experimental design is robust, the reliance on non-branding datasets (LIVE, CSIQ) limits generalizability, and key details (such as Log-Gabor parameters, computational complexity, and the statistical significance of results) are omitted. The absence of benchmarking against modern deep learning metrics (e.g., LPIPS) and the insufficient analysis of cross-domain performance further weaken the claims of scalability and generalizability.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
While the paper introduces an 11-dimensional feature vector combining phase coherence, edge, color, and texture attributes, it does not clarify the relative contribution or importance of each feature type in the final regression.
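To make this concrete, the requested analysis could be as simple as permutation importance on the trained regressor. A minimal sketch, assuming an 11-D feature matrix `X` and MOS targets `y`; the synthetic data and the RandomForest stand-in are placeholders, not the paper's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))                        # placeholder for the 11-D feature matrix
y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=200)  # placeholder MOS targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)   # swap in the paper's GRNN here

# Permutation importance: drop in any regressor that exposes .predict().
result = permutation_importance(model, X_te, y_te, n_repeats=30, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i:2d}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

Reporting these scores grouped by feature family (phase coherence, edge, color, texture) would directly answer the question of relative contribution.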
Although the empirical results show GRNN outperforms SVM and Random Forest, the manuscript does not sufficiently explain why GRNN is more suitable from a theoretical standpoint.
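One line of theoretical argument could start from the fact that a GRNN is essentially Nadaraya-Watson kernel regression: non-parametric, trained in a single pass, and governed by one smoothing parameter, which is attractive for small feature-based IQA datasets. A minimal sketch of that view (not the authors' implementation; names are illustrative):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network (Specht, 1991) viewed as Nadaraya-Watson
    kernel regression: a Gaussian-kernel-weighted average of training targets.
    The smoothing parameter sigma is the only hyperparameter."""
    # Squared Euclidean distances between each query and each training sample.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)     # summation layer: weighted average
```

Articulating why this bias (local averaging with a single bandwidth) fits the 11-D quality features better than SVM margins or tree ensembles would strengthen the claim.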
The paper primarily employs Sobel, HSV, and Log-Gabor for feature extraction, yet more advanced or learnable representations (e.g., CNN-based filters, pretrained vision encoders) are not considered. This limits the scalability of the method to more complex branding imagery.
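For reference, a pretrained encoder could be evaluated as an additional or alternative feature source with only a few lines. The sketch below assumes a torchvision ResNet-18 backbone and a PIL image `img`; both are illustrative choices, not part of the paper's method:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained ResNet-18 used as a fixed feature extractor (512-D penultimate embedding).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    feats = backbone(preprocess(img).unsqueeze(0))   # img: a PIL image of the brand pattern
# feats (1, 512) could be concatenated with, or compared against, the 11 hand-crafted features.
```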
The use of the SCI metric for texture analysis is implemented without comparison to standard alternatives (e.g., GLCM, LBP, Gabor features). Benchmarking against these could enhance the credibility of the chosen texture descriptor.
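Such baselines are available off the shelf; a sketch using scikit-image, assuming a 2-D uint8 grayscale image `gray` (a placeholder variable):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

# GLCM statistics averaged over four orientations at distance 1.
glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)
glcm_feats = [graycoprops(glcm, p).mean()
              for p in ("contrast", "homogeneity", "energy", "correlation")]

# Uniform LBP histogram (P=8 neighbours, radius=1) as a second texture baseline.
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
```

Reporting the SCI descriptor alongside these features under the same regression protocol would make the choice persuasive.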
The manuscript assumes all 11 features are essential, yet does not evaluate multicollinearity or redundancy.
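A quick redundancy check could report variance inflation factors and pairwise correlations over the 11 features; a sketch assuming the feature matrix `X` (placeholder name):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

Xc = pd.DataFrame(X).assign(const=1.0)            # add an intercept column for VIF
vif = [variance_inflation_factor(Xc.values, i) for i in range(11)]
print("VIF per feature:", np.round(vif, 2))        # VIF > 5-10 usually signals redundancy

corr = np.corrcoef(X, rowvar=False)                # pairwise Pearson correlations
print("max |r| off-diagonal:", np.abs(corr - np.eye(11)).max())
```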
Although the Log-Gabor filter is used for frequency feature extraction, the paper omits the detailed parameters used (e.g., number of orientations, scales). These hyperparameters critically affect performance and reproducibility.
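For reproducibility the paper should state the number of scales and orientations, the centre frequencies, and the bandwidth ratio. As a reference point, the radial component of a log-Gabor bank is fully specified by two numbers per scale; the defaults below are illustrative, not the paper's settings:

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function G(f) = exp(-(ln(f/f0))^2 / (2 ln(sigma_ratio)^2)).
    f0 is the centre frequency (cycles/pixel); sigma_ratio = sigma_f / f0 sets the bandwidth
    (0.55 corresponds roughly to a two-octave filter)."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                                  # avoid log(0) at DC
    G = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                                  # zero DC response
    return G

# Usage: response = np.fft.ifft2(np.fft.fft2(image) * log_gabor_radial(image.shape))
```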
The model claims generalizability and low-latency deployment, but computational complexity (e.g., runtime per image, FLOPs) is not quantified. Including these would better support claims of efficiency and scalability.
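Even a coarse wall-clock benchmark would help; the sketch below assumes hypothetical `extract_features` and `model` objects standing in for the paper's own pipeline:

```python
import time
import numpy as np

# Per-image latency for the full pipeline (feature extraction + regression).
images = [np.random.rand(512, 512, 3) for _ in range(50)]   # synthetic stand-ins

t0 = time.perf_counter()
for img in images:
    score = model.predict(extract_features(img).reshape(1, -1))
elapsed = time.perf_counter() - t0
print(f"mean latency: {1000 * elapsed / len(images):.1f} ms/image, "
      f"throughput: {len(images) / elapsed:.1f} images/s")
```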
The experimental design is well structured and clearly articulated.
The findings appear to be valid given the robustness of the design.
The LIVE and CSIQ datasets are not specifically curated for artistic or commercial branding images. The domain gap between these datasets and real-world art brand patterns limits the external validity of the conclusions.
Although cross-validation within datasets is performed, the model's ability to generalize across domains is not fully explored. A cross-dataset evaluation (e.g., train on LIVE, test on CSIQ) is mentioned but not extensively analyzed for overfitting or domain drift.
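The requested protocol is straightforward to report; a sketch assuming placeholder arrays `X_live, y_live, X_csiq, y_csiq` and the paper's fitted `model`. (In practice a monotonic logistic mapping is usually fitted before computing PLCC when the two datasets use different MOS scales.)

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Cross-dataset protocol: fit on all of LIVE, evaluate on all of CSIQ (and vice versa).
model.fit(X_live, y_live)
pred = model.predict(X_csiq)

plcc, _ = pearsonr(pred, y_csiq)
srocc, _ = spearmanr(pred, y_csiq)
rmse = np.sqrt(np.mean((pred - y_csiq) ** 2))
print(f"LIVE -> CSIQ: PLCC={plcc:.3f}, SROCC={srocc:.3f}, RMSE={rmse:.3f}")
```

A symmetric CSIQ -> LIVE run, with both directions tabulated, would expose any domain drift.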
The paper only compares its performance with FSIM, PSNR, and VIF. Inclusion of recent deep learning-based full-reference IQA methods (e.g., LPIPS, DISTS, PieAPP) would better contextualize the performance of the proposed scheme.
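For example, LPIPS can be computed with the reference implementation in a few lines; the sketch assumes `ref` and `dist` are float tensors of shape (1, 3, H, W) scaled to [-1, 1] (placeholder variables):

```python
import torch
import lpips   # pip install lpips

# LPIPS full-reference baseline (Zhang et al., 2018); the AlexNet backbone is the common default.
loss_fn = lpips.LPIPS(net='alex')

with torch.no_grad():
    d = loss_fn(ref, dist)          # lower distance = higher predicted perceptual similarity
print(float(d))
# Correlating -d against MOS (PLCC/SROCC) would place the proposed scheme among modern baselines.
```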
While performance metrics such as PLCC and RMSE are reported, the manuscript does not conduct a paired t-test or a Wilcoxon signed-rank test to validate whether the improvements over the baselines are statistically meaningful.
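A typical choice is a paired test on per-image absolute prediction errors; the sketch assumes placeholder arrays `err_proposed` and `err_baseline` of equal length (one error per test image):

```python
from scipy.stats import wilcoxon, ttest_rel

# Paired comparison of per-image absolute errors: proposed scheme vs. a baseline (e.g. FSIM).
stat, p = wilcoxon(err_proposed, err_baseline, alternative='less')
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4g}")

t, p_t = ttest_rel(err_proposed, err_baseline)
print(f"Paired t-test: t={t:.2f}, p={p_t:.4g}")
```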
The experimental section lacks clarity on how the datasets were split. Was k-fold cross-validation used, or fixed training/testing partitions?
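Stating the protocol explicitly, with a fixed seed, would resolve this; a sketch of one such protocol, assuming placeholder arrays `X` and `y` and the paper's `model`. (In IQA studies the split is usually made by reference image, e.g. with GroupKFold, so that no content appears in both partitions.)

```python
import numpy as np
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

kf = KFold(n_splits=5, shuffle=True, random_state=42)
plcc_per_fold = []
for train_idx, test_idx in kf.split(X):
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    plcc_per_fold.append(pearsonr(pred, y[test_idx])[0])
print(f"PLCC: {np.mean(plcc_per_fold):.4f} +/- {np.std(plcc_per_fold):.4f} over 5 folds")
```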
Different image distortions (blur, noise, compression) may affect model behavior differently.
A breakdown of model performance across various distortion types would help understand its robustness.
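Such a breakdown is a short loop once distortion labels are available; the sketch assumes placeholder arrays `pred`, `y_test`, and `distortion` (per-image labels such as "blur", "noise", "jpeg"):

```python
import numpy as np
from scipy.stats import spearmanr

# Per-distortion-type performance breakdown.
for dtype in np.unique(distortion):
    mask = distortion == dtype
    srocc, _ = spearmanr(pred[mask], y_test[mask])
    print(f"{dtype:>12s}: n={mask.sum():4d}  SROCC={srocc:.3f}")
```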
The paper only reports mean values (e.g., PLCC = 0.9725) without standard deviation or confidence intervals.
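Confidence intervals can be obtained without rerunning experiments via a percentile bootstrap over test images; a sketch with placeholder arrays `pred` and `mos`:

```python
import numpy as np
from scipy.stats import pearsonr

def bootstrap_plcc_ci(pred, mos, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for PLCC."""
    rng = np.random.default_rng(seed)
    n = len(pred)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample image indices with replacement
        stats.append(pearsonr(pred[idx], mos[idx])[0])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

Reporting, e.g., PLCC = 0.9725 with a 95% bootstrap interval would substantiate the headline numbers.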
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.