All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear Authors,
The reviewers have concluded that all issues have been addressed and recommend that the manuscript be published.
Best wishes,
[# PeerJ Staff Note - this decision was reviewed and approved by Massimiliano Fasi, a PeerJ Section Editor covering this Section #]
All my comments have been thoroughly addressed. The manuscript is acceptable in its present form.
The authors were requested to correct reporting errors, ensure consistency in terminology and units, improve mathematical notation, and enhance overall clarity. These issues have been addressed through careful re-typesetting of equations, correction of physical constants and units, standardized terminology throughout the manuscript, expanded figure captions, and improved language quality. The reporting is now clear and consistent.
The review requested clearer explanation of the modeling framework, justification of algorithm components, and transparency in parameter settings and data usage. The authors responded by revising the methodological sections, clarifying the parameter decomposition strategy, explicitly detailing the memory and adaptive update mechanisms, justifying hyperparameter choices, and expanding the experimental setup. The experimental design is now well defined and technically sound.
Concerns regarding robustness, statistical reliability, and generalizability were addressed by extending experiments to an additional PV module, running all algorithms over multiple independent trials, adding statistical tests, and providing stability and convergence analyses. These revisions adequately support the validity of the reported findings.
All reviewer comments have been carefully addressed as documented in the response letter and reflected in the revised manuscript. The paper has improved substantially in clarity, rigor, and completeness, and it is suitable for publication.
• Language: The revised manuscript uses clear and professional English.
• Background: The literature review provides sufficient background and relevant references.
• Structure: The article is well structured with clear figures, tables, and documented data sources.
• Completeness: The results are self-contained and adequately support the study’s hypotheses.
• Formal clarity: Key terms and formulations are clearly defined, ensuring reproducibility.
• Originality and scope: The study presents original primary research that fits well within the aims and scope of the journal.
• Research question: The research question is clearly defined, relevant, and addresses an explicitly identified gap in PV parameter identification.
• Rigor: The investigation is conducted with appropriate technical rigor and follows accepted ethical standards.
• Reproducibility: The methods are described in sufficient detail to allow replication by other researchers.
• Impact and novelty: The study focuses on methodological validity rather than claims of impact, and provides a clear rationale that supports meaningful replication.
• Data robustness: The underlying data are clearly described, controlled, and sufficient to support the reported results.
• Conclusions: The conclusions are clearly stated, directly linked to the research question, and limited to what is supported by the results.
Dear Authors,
Thank you for your submission. The reviewers’ comments are now available. Publication of your article in its current form is not recommended. However, we advise you to revise the paper in light of the reviewers’ comments and concerns before resubmitting it.
Best wishes,
**PeerJ Staff Note**: Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
The manuscript entitled “A memory and update strategy-based social group optimization for unknown parameter identification of photovoltaic modules” proposes MUS-SGO, a Social Group Optimization variant augmented with (i) a historical memory repository with decaying weights to guide exploitation and (ii) an adaptive population update that periodically replaces low-fitness individuals, to identify single-diode PV model parameters. A parameter-decomposition step solves the linear parameters from a nonlinear subset. Experiments on the KC200GT module across irradiance/temperature conditions compare MUS-SGO against seven algorithms and include an ablation on CEC’17 functions. Reported RMSE values are markedly lower, with r² ≈ 1.0 in most settings.
The PV equations list fundamental constants with impossible magnitudes (e.g., Boltzmann’s constant is shown as “~10²²³ J/K” and the electron charge as “~10²¹⁹ C”), which appear to be typesetting/exponent errors. Such errors cast doubt on all downstream numeric results. The constants and units must be corrected, double-checked, and re-derived; otherwise the identification is neither reproducible nor credible.
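To make the expected magnitudes concrete, here is a minimal sketch of the single-diode relation using the correct CODATA constants. The parameter values and the 54-cell series count for the KC200GT are illustrative assumptions, and the function name is generic, not the paper's notation.

```python
import numpy as np

# Correct CODATA values (the manuscript's typeset constants should reduce to these):
K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def single_diode_residual(v, i, i_ph, i_o, a, r_s, r_sh, t_cell=298.15, n_s=54):
    """Implicit single-diode model residual f(V, I) = 0.
    n_s = 54 series cells is an assumption for the KC200GT; parameter
    names are generic placeholders, not necessarily the paper's symbols."""
    v_t = n_s * a * K_B * t_cell / Q_E  # thermal voltage of the series string
    return i_ph - i_o * (np.exp((v + i * r_s) / v_t) - 1.0) - (v + i * r_s) / r_sh - i

# At V = 0, I = 0 the diode and shunt terms vanish and the residual equals I_ph:
r0 = single_diode_residual(0.0, 0.0, 8.21, 1e-9, 1.3, 0.2, 600.0)
```

With correct constants, the thermal voltage comes out near 25.85 mV per cell at 298.15 K; values derived from the garbled exponents would be off by hundreds of orders of magnitude.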
Units for Rs and Rsh are inconsistent across tables (e.g., headers marked mΩ while values look like Ω, and MUS-SGO yields Rsh ≈ 12–18 Ω where other methods produce kΩ–range, a major physical discrepancy). Provide a unit audit (axis labels, table headings, text) and sanity checks against manufacturer curves.
r² values of exactly 1.0000 and identical Mean = Min = Max RMSE across 30 runs for MUS-SGO (several settings) are implausible for noisy I–V data unless the simulated curve is effectively over-parameterized or post-fit to the same points used to evaluate. Explain the data source, noise model, and whether you are fitting manufacturer digitized points (noise-free) rather than independent measurements. If you use the same points for both tuning and evaluation, results overstate performance. Provide train/test split or cross-validation and report uncertainty.
The decomposition from X = [a, R_s, R_sh, I_ph, I_o] into X_1 = [a, R_s] and X_2 = [I_ph, I_o, R_sh], with the matrix system N(X_1) X_2 = M(X_1), is presented, but the dimensions, invertibility conditions, and conditioning of N are not established. You must prove existence/uniqueness of X_2 (or apply regularization) and provide numerical stability guidance (conditioning, scaling).
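As an illustration of the requested stability guidance, a small sketch with a placeholder well-conditioned 3×3 system; the entries are invented and the tolerance is illustrative, not the paper's actual N(X_1) and M(X_1).

```python
import numpy as np

def solve_linear_subset(N, M, cond_tol=1e12):
    """Solve N(X1) @ X2 = M(X1) for X2, falling back to least squares
    when N is ill-conditioned (cond_tol is an illustrative threshold)."""
    kappa = np.linalg.cond(N)
    if not np.isfinite(kappa) or kappa > cond_tol:
        x2, *_ = np.linalg.lstsq(N, M, rcond=None)  # regularized fallback
    else:
        x2 = np.linalg.solve(N, M)
    return x2, kappa

# Placeholder 3x3 system standing in for N(X1), M(X1):
N = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
M = np.array([1.0, 2.0, 3.0])
x2, kappa = solve_linear_subset(N, M)
```

Reporting the condition number κ alongside the identified X_2, as this sketch does, would directly address the conditioning concern.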
Several equations contain misprints (e.g., “N_21(X_1) is the inverse matrix of N(X_1)”, where the subscript notation does not match; unclear definitions of P, Q, and E; ambiguous use of NFE/T). Please re-typeset the math, define all symbols once, and verify every equation with consistent indexing.
A dynamic memory weight with λ-power decay (w_i = λ^(m−i)-style) is claimed, but the typeset formula is garbled (rendered as “λ m_i^2” / “λm^2i” formatting artifacts). Provide the exact implemented formula, its normalization, and how many memories m are stored (the text says both m = 5 and 10% of NP; these coincide only for NP = 50, so clarify the general rule).
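One plausible reading of the decaying-weight scheme, with explicit normalization; this is an interpretation of the garbled formula, not the authors' confirmed implementation.

```python
import numpy as np

def memory_weights(m, lam=0.9):
    """Normalized exponentially decaying weights w_i proportional to
    lam**(m - i), i = 1..m, so the most recent memory (i = m) carries
    the largest weight. One reading of the garbled formula, not the
    authors' confirmed code."""
    w = lam ** (m - np.arange(1, m + 1, dtype=float))
    return w / w.sum()

def memory_size(np_pop, m_fixed=5):
    """The text gives both m = 5 and m = 10% of NP; the two rules agree
    only at NP = 50. This helper just exposes the ambiguity."""
    return m_fixed, max(1, round(0.1 * np_pop))

w = memory_weights(5)
```

Stating which of the two size rules is implemented, together with a formula at this level of precision, would resolve the ambiguity.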
The adaptive population update uses an exponential decay to choose the elimination ratio, but n = max(1, ⌊λ(t)·NP⌋) and the definition of R(t) are not consistent. Give the actual code-level rule and a figure reporting diversity metrics (e.g., average Hamming/L2 spread) over iterations.
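A hedged sketch of what such a code-level rule and diversity metric might look like; λ₀ and the decay shape are placeholders, not the paper's values.

```python
import numpy as np

def elimination_count(t, t_max, np_pop, lam0=0.5):
    """One plausible reading of the adaptive rule: an exponentially
    decaying elimination ratio lam(t), with n = max(1, floor(lam(t) * NP)).
    lam0 and the decay shape are assumptions for illustration."""
    lam_t = lam0 * np.exp(-t / t_max)
    return max(1, int(np.floor(lam_t * np_pop)))

def l2_spread(population):
    """Average L2 distance of individuals from the population centroid,
    a simple diversity metric to plot over iterations."""
    centroid = population.mean(axis=0)
    return float(np.mean(np.linalg.norm(population - centroid, axis=1)))
```

Plotting `l2_spread` per iteration would show whether the update preserves diversity while the elimination ratio decays.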
Max function evaluations for PV identification are fixed at 5,000 for all methods, yet the CEC’17 ablation allows 500,000 evaluations; this makes the ablation conclusions non-transferable. Keep identical FE budgets or justify the difference.
Provide proper tuning for each baseline (learning factors, CR/F, NP), ≥30 independent runs, and report mean±SD with statistical tests (e.g., Wilcoxon/Bonferroni). Current tables lack CIs and p-values, and many baselines appear to use default hyperparameters (potentially biased).
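For concreteness, the suggested paired test with Bonferroni correction could look like the following; the RMSE values are synthetic placeholders, and the seven-baseline count is taken from the comparison setup, not from any reported data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Synthetic paired RMSE values over 30 independent runs (placeholders,
# not the paper's results): "proposed" constructed to be slightly lower.
baseline = rng.normal(1.0e-3, 1.0e-4, size=30)
proposed = baseline - rng.normal(2.0e-4, 5.0e-5, size=30)

# One-sided Wilcoxon signed-rank test: is "proposed" RMSE lower?
stat, p = wilcoxon(proposed, baseline, alternative="less")

# Bonferroni correction over the k = 7 baseline comparisons:
alpha, k = 0.05, 7
significant = p < alpha / k
```

Reporting mean ± SD together with such a p-value per baseline would make the superiority claim statistically grounded.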
A “Discussion” section should be added in a more prominent, argumentative style. The authors should analyze the reasons why the reported results are achieved.
The authors should clearly emphasize the contribution of the study. Please note that up-to-date references will help keep the manuscript current. The studies “A robust chaos-inspired artificial intelligence model for dealing with nonlinear dynamics in wind speed forecasting” and “Solving the dynamic stability challenge of automatic gain-controlled phase-locked loops: A robust alternative via enhanced SRF-PLL design and implementation” could be used to explain the methodology and optimization process or to indicate the contribution in the “Introduction” section.
MUS-SGO1 (memory only) and MUS-SGO2 (update only) improve over SGO in different function classes, but the PV result superiority could also stem from parameter decomposition rather than the metaheuristic tweaks. Add a fourth ablation: SGO + decomposition without memory/update, and MUS-SGO without decomposition, to isolate each factor’s contribution on the PV task.
MUS-SGO routinely finds very small Rs and very small Rsh (dozens of ohms) while achieving r²≈1.000. Such parameters can still fit the curve but may be physically implausible for this module. Add physical bounds (from datasheet/engineering constraints) and report parameter confidence intervals and identifiability (profile likelihood or bootstrapping). Temper claims until parameter realism is demonstrated.
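A minimal sketch of the suggested bootstrap confidence intervals; the `identify` callable is a hypothetical stand-in for one MUS-SGO identification run, and the demo "identifier" below is deliberately trivial.

```python
import numpy as np

def bootstrap_ci(identify, v, i_meas, n_boot=200, alpha=0.05, seed=0):
    """Nonparametric bootstrap over I-V points: refit on resampled data
    and report percentile confidence intervals per identified parameter.
    `identify` is a hypothetical callable (V, I) -> parameter vector."""
    rng = np.random.default_rng(seed)
    n = len(v)
    n_par = len(identify(v, i_meas))
    estimates = np.empty((n_boot, n_par))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample points with replacement
        estimates[b] = identify(v[idx], i_meas[idx])
    lo = np.percentile(estimates, 100 * alpha / 2, axis=0)
    hi = np.percentile(estimates, 100 * (1 - alpha / 2), axis=0)
    return lo, hi

# Trivial stand-in "identifier" returning one parameter (the mean current):
v = np.linspace(0.1, 1.0, 20)
i_meas = np.full(20, 2.0)
lo, hi = bootstrap_ci(lambda vv, ii: np.array([ii.mean()]), v, i_meas, n_boot=50)
```

Wide intervals on R_s or R_sh from such a procedure would directly flag the identifiability problem raised above.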
The article provides sufficient background on the PV parameter-identification literature, but it is somewhat inadequate in placing the work in a broader scientific and applied context (energy optimization, real-system integration, current metaheuristic trends). This section should be strengthened by discussing the importance of the topic in the field and by stating the unique contribution of MUS-SGO more clearly.
1- The source of the data is only specified as "KC200GT dataset (Düzenli et al., 2022)". How the data was obtained (e.g., experimental measurement, manufacturer data, simulation) is not clearly explained. Details such as the data collection scheme, measurement devices, margin of error, and sampling frequency are not provided. Only equation (22) is provided to show how the "Isc(G, T)" relationship is calculated, i.e., empirical modeling is performed, and the actual experimental setup is not defined.
2- The results of the study are shown only for a single PV module (KC200GT, polycrystalline), so the findings cannot be assumed to transfer to different modules and technologies. The authors should explain in detail why they conducted the study on only one PV module.
3- How hyperparameters such as population size, memory capacity, and adaptive coefficients were selected is not sufficiently explained.
4- The results are only given in the form of "Mean, Min, Max, Std." The claim of "superior performance" is therefore not statistically reliable.
This study proposes a novel social group optimization algorithm (MUS-SGO) based on a memory-and-update strategy for determining photovoltaic (PV) module parameters. The study is based on the single-diode PV model and provides comprehensive numerical comparisons on KC200GT module data. The method has yielded noteworthy results, particularly in terms of convergence speed and accuracy. However, while the article generally succeeds in demonstrating the method's potential, it has significant shortcomings in experimental validation, statistical significance, and generalizability.
Strengths
• The manuscript follows a clear structure (Abstract, Introduction, Related Work, Method, Experiments, Conclusion).
• The topic, improving parameter identification of PV modules via a modified Social Group Optimization (SGO), is timely and relevant to renewable-energy modelling.
• Figures and tables are generally well formatted and correspond to the text.
Weaknesses
• Language and clarity: Although readable, the English needs substantial polishing. Sentences are often long and repetitive (e.g., multiple uses of “thereby” and “which”). Several grammatical errors and awkward phrasings occur (e.g., “individuals primarily rely on information from the current best individual and random individuals”).
• Terminology: Inconsistent naming occurs: sometimes “Memory and Update Strategy based SGO,” sometimes “MUS-SGO.” Standardize throughout.
• Literature review: The review in the Introduction and Related Work is extensive but descriptive. It lists algorithms (GA, PSO, ABC, DE, TLBO, etc.) sequentially without synthesizing them thematically or analytically. The research gap (that current algorithms fail to balance exploration/exploitation and lack memory mechanisms) appears only implicitly.
• Citations: Recent 2024–2025 works are included, but key references on PV parameter identification via hybrid metaheuristics (e.g., Harris Hawks, Whale, Salp Swarm, Bio-geography-based Optimization) are missing.
• Figures/tables: Some captions (Figures 3 and 6) are too brief and do not explain variables or parameter ranges. Tables 1–8 lack units for several quantities (e.g., Rs, Rsh).
Recommendations
• Engage a professional English editor for grammar, concision, and technical tone.
• Reorganize the literature review thematically (e.g., original heuristics, improved algorithms, hybrids, limitations). End with an explicit statement of the research gap.
• Strengthen figure/table captions for stand-alone comprehension and specify units.
• Harmonize notation and ensure consistent use of “MUS-SGO.”
• Briefly explain how this work advances beyond recent 2024 PV-parameter optimization papers.
Strengths
• The paper clearly defines its research objective: improving PV parameter identification via an enhanced SGO incorporating memory and adaptive update strategies.
• The algorithmic design (dynamic memory guidance + adaptive population update) is conceptually sound.
• Benchmarking on both synthetic functions (CEC17) and a real PV dataset (KC200GT) provides a solid evaluation framework.
Weaknesses
• Novelty framing: MUS-SGO builds directly on SGO; the innovation (adding historical memory and an adaptive population update) should be positioned as an incremental yet practical enhancement, not as a fundamentally new paradigm.
• Dataset limitation: Only one PV module (KC200GT) from a single source is used. No validation on multi-type or field-measured data, limiting generalizability.
• Preprocessing details: The manuscript omits critical data-handling information: how I–V curves were sampled, how noise was removed, how normalization was applied, and how measurement errors were addressed.
• Hyperparameters: While default constants (C = 0.2, λ = 0.9, etc.) are listed in Algorithm 1, other meta-parameters (population size sensitivity, Max NFE justification) are not discussed.
• Comparison fairness: Competing algorithms (ITLBO, EJADE, L-SHADE, etc.) use heterogeneous population sizes (Np = 20–50). Equal population sizes or equivalent function-evaluation budgets should be ensured for fair comparison.
• Ablation study: Although both MUS-SGO1 and MUS-SGO2 are compared, statistical tests (e.g., Wilcoxon rank-sum) are absent.
Recommendations
• Explicitly state all algorithmic hyperparameters in a summary table (learning rates, population, termination criteria).
• Provide full preprocessing and I–V data-acquisition details.
• Add experiments on at least one additional PV dataset (e.g., mono-crystalline PV cell or commercial data) to demonstrate generalization.
• Apply statistical significance tests for comparisons across algorithms.
• Frame novelty carefully: emphasize efficiency and robustness rather than revolutionary innovation.
Strengths
• Results show consistent superiority of MUS-SGO in RMSE and r² across irradiance and temperature conditions.
• The inclusion of multiple irradiance and temperature levels reflects realistic PV variability.
• Ablation experiments demonstrate each strategy’s contribution.
Weaknesses
• Error analysis: No discussion of error distribution or parameter sensitivity. Which parameters (Rs, Rsh, a) contribute most to residuals?
• Confidence measures: No standard deviations/error bars on RMSE plots or r² values; yet small numerical differences are claimed as significant.
• Overfitting risk: The same dataset is used for both development and evaluation; no external validation or k-fold procedure is employed.
• Computational cost: The paper lacks runtime comparison (CPU seconds or iterations) with baselines.
• Limitations section: The conclusion only briefly states that MUS-SGO is accurate and robust; limitations (single dataset, no field validation, potential complexity) are underdeveloped.
Recommendations
• Add parameter-sensitivity or contribution analysis (e.g., Sobol indices).
• Include error bars or confidence intervals in all performance plots.
• Report computation time and convergence iteration counts.
• Split dataset into training / validation subsets or use cross-validation to verify robustness.
• Expand the limitations and future-work section to include:
o validation on other PV technologies,
o hybridization with deep learning-based I–V models,
o parameter interpretability and physical constraints.
General Comments and Recommendations
1. Language and structure: Perform a thorough English and stylistic edit for clarity, conciseness, and grammatical accuracy.
2. Contextualization: Reframe the contribution modestly; emphasize practical robustness improvements rather than conceptual novelty.
3. Figures & Tables: Ensure all units, symbols, and abbreviations are defined; expand captions for stand-alone understanding.
4. Comparative baselines: Consider adding modern optimizers such as Harris Hawks Optimization, Whale Optimization, and Grey Wolf variants for stronger benchmarking.
5. Statistical robustness: Apply non-parametric significance tests and include p-values.
6. Discussion depth: Add a physical interpretation of the identified parameters; for example, how much deviation in Rs or Rsh is acceptable in PV diagnostics?
7. Future scope: Mention possible integration with hybrid metaheuristic-ML frameworks or adaptive PV-monitoring systems.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.