Dear Authors,
Your revised paper has been accepted for publication in PeerJ Computer Science. Thank you for your fine contribution.
[# PeerJ Staff Note - this decision was reviewed and approved by Massimiliano Fasi, a PeerJ Section Editor covering this Section #]
In reviewing both revised versions of the paper and the response, I am happy that there is now sufficient background and context provided for the reader. This includes clarity in the new sectioning, explicit references to other papers by the team where things can be found, and improved figures. The clarity of the text has been improved as well.
I believe the work to be original and within the Aims and Scope of the journal. The improvements have now made the research question clearer, and the description of the methods has been improved.
The limitations of the results are better expressed in this version. While I would personally prefer a paper to be standalone in terms of replication, the additional references to previous work do make it possible for a determined reader to recreate the experiment.
Overall, I am satisfied that the authors have responded to my comments and those of my esteemed co-reviewers.
I thank the authors for addressing all of my comments. The manuscript has been revised and is now clearer, particularly in presenting the authors’ work and contributions. Although a few typographical errors remain, I am overall satisfied with the authors’ responses.
The response is clear and well-explained.
The response is adequate.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
The authors have addressed some of my questions and focused their responses specifically on the reviewers’ concerns. However, in my opinion, some key points, such as the authors’ main findings and future perspectives, should be presented more clearly to ensure the audience can easily understand them.
In the “Materials & Methods” section, in the paragraph starting with “To ensure real-time feasibility, a WCET Estimation Module is embedded within the framework…”, you describe profiling T₁ and T₂ execution times in simulation while noting that hardware-related delays (such as memory contention and I/O latency) are not included. I am curious: how might omitting these delays affect the accuracy of your WCET estimates when moving from simulation to real hardware?
In the “Results” section, where you present RMSE and SNR comparisons for different noise reduction methods, the evaluation is based on a single biosensor model using simulation data. How might the results vary if the framework were tested on multiple biosensor designs or under different simulated environmental conditions, and do you see value in incorporating such variability tests in future experiments?
In the “Discussion” section, you show a large improvement in RMSE and SNR, and only a small 4% error in execution time. Since these results come only from simulations rather than real hardware tests, how confident can we be that they will hold in practice? What would you do in future work to make these results more reliable for real-world use?
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff
This paper describes a co-simulation for a biosensor that includes WCET calculations. Overall, the paper is broadly clear, but it lacks the specifics and details that would build confidence in the results and allow replication.
The end of the introduction refers only to Sections 2, 3, and 6; it also refers to "this proposal" (line 162); and Figure 1 is introduced twice. This suggests the paper would benefit from even a basic level of further proofreading to check for such errors and inconsistencies.
The introduction is long, around six pages, with no subsections or other structure. It contains material that might be expected in a Background and Related Work section. Scanning the paragraphs shows where, for example, several tools and techniques are each described; this is where one would expect the section break to fall. I would strongly encourage the authors to rework this material into a clear Introduction and Background / Literature Review.
More specifically, on background material, the motivation for the adopted approach could be placed in context more clearly. For example, FMI is mentioned twice, which is appropriate given its relevance, but the approach in the paper is not compared to it - why was FMI as-is, or an augmentation of FMI, not appropriate? Similar points can be made for the alternative modelling approaches; much of the material describing them is positive and does not explain why they were not adopted here.
While there is quite a lot of background material, given the general scope of this journal, we cannot expect readers to be experts. From the paper, I get no real sense of what COMSOL is or looks like. Similarly, CODIS+ is not well described - CODIS+ is a co-simulation environment which supports discrete and continuous models, so statements that the "proposed co-simulation framework integrates CODIS+ and COMSOL" (l.424) do not make it clear exactly what was done. What is the "framework" specifically? Are the elements, such as the WCET calculations, now available to anyone using CODIS+ themselves? I encourage the authors to make this much clearer in both the background and the method.
The paper provides Figure 1 to augment the discussion of the framework; however, the diagram is small and hard to read. Defining a framework through a sequence diagram seems too low-level. I would encourage the authors to think about a conceptual diagram that clearly shows the framework and its elements. Figure 2 may provide this, but it is not framed in this way.
How we define an experiment in this case will necessarily affect the judgment of the experiment design. The paper is not framed with a research question directly (e.g., can the integration of COMSOL into CODIS+ produce better simulations of biosensors?), but more like a working hypothesis. In that regard, it is fairly clear. The "background" section broadly supports the research gap.
It is also not clear how this is a "framework" rather than simply an integration of a COMSOL model into CODIS+. Can any COMSOL model now be plugged in? Are the WCET features added to CODIS+ and released for others to use, for example?
It is less clear that the methods have sufficient detail to permit replication. There is zero information on the models that were actually plugged in during the testing. Given that it is a co-simulation, one would expect at least some details about the models involved. For example, "The biosensor’s electrical properties are modeled to generate raw sensor data" would require a lot more detail and explanation. I would expect at least a "Case Study" section, with screenshots and diagrams, showing the models that were used.
An Excel spreadsheet is provided with the SNR data for Figure 3. As I am not a biosensor expert, I may be wrong, but I was expecting to see some (at least indicative) outputs from the co-simulation so the reader can see what it does. The RMSE / SNR results seem focused on performance rather than data validity. I would also expect more than one case study to meet the criteria of robust, statistically sound, and controlled.
Moreover, for a paper such as this, I think we would say that the framework and models are also data - if someone wished to replicate the results, then they need these plus instructions on running them. As presented, there is insufficient "data" for any meaningful replication.
Overall, I believe there are interesting results here. The paper should be revised to meet professional style, to provide more detail on the "framework", and to provide more information on the results and how to replicate them (by providing models, sources, etc.).
The manuscript is generally well-written, well-structured, and supported by clear figures and a comprehensive literature review. However, minor grammatical and formatting issues, including encoding errors and inconsistent technical terminology, need correction. Additionally, the manuscript would benefit from more comparative discussion of related works to better highlight the novelty and contribution of the proposed approach.
The manuscript presents a methodological approach with a well-integrated co-simulation framework using COMSOL and CODIS+ and appropriate use of WCET and ETE metrics. The simulation and profiling methods are clearly explained, enhancing the technical rigor. However, the lack of experimental validation and the limited detail on the CNN architecture are notable weaknesses. Acknowledging the simulation-based validation and providing more specifics on the CNN implementation and FPGA performance would improve the study's clarity, reproducibility, and support for its real-time feasibility claims.
The findings are supported by clearly presented RMSE and SNR metrics and a low 4% execution time error (ETE), indicating strong robustness. The CNN-based filtering method is shown to outperform traditional techniques, reinforcing its effectiveness. However, the absence of experimental validation limits the generalizability of the results; including real-world testing and statistical measures such as confidence intervals would improve the reliability and applicability of the study’s conclusions.
The innovative combination of AI-based signal processing using a CNN alongside WCET profiling is both timely and relevant, showcasing advancement in simulation-driven biosensor design.
However, several revisions are recommended to improve clarity and accuracy:
1) Formatting and encoding issues, such as the incorrect display of characters (e.g., “û” instead of “fi”), should be addressed throughout the manuscript.
2) The authors should explicitly clarify in the abstract and the limitations section that the validation is based solely on simulation data, setting appropriate expectations for readers.
3) Providing more detailed information about the CNN architecture, including the number of layers, types of filters, and details on the training dataset, would enhance the technical rigor and reproducibility of the work (see the illustrative sketch after this list).
4) The results and discussion sections would benefit from refinement to improve clarity and conciseness, particularly by removing redundant statements.
5) The conclusion should be reworded to moderate claims regarding real-time readiness unless supported by further empirical validation, such as results from FPGA implementation or other hardware testing.
1. The characters in Figure 1 are too small, and the resolution is too low to be easily readable.
2. Line 154: CODS+ -> CODIS+
1. Line 471: The framework is evaluated entirely via simulations (COMSOL is used as the “high-fidelity” reference). No experimental or empirical biosensor data are provided, and the authors acknowledge the lack of real measurements. This limits confidence in the real-world performance claims. It is recommended to validate the approach on actual hardware or synthetic test data, or at least clarify the limitations: the manuscript itself notes that WCET estimation ignores hardware factors like memory contention and I/O delays, so the true real-time feasibility remains unproven.
2. The manuscript gives no details on the 1D CNN model's architecture, training data, or how it was validated. The authors should describe the CNN design (layers, training process, dataset) to justify the reported noise-reduction gains.
1. Line 307: The authors state that the framework removes iterative control feedback to COMSOL. It is not explained why this is acceptable; in real sensors, control actions often affect the physical state (e.g., via bias adjustments). If feedback is truly negligible for this application, the authors should explicitly justify that assumption. Otherwise, they should discuss how omitting feedback might limit the model’s fidelity. Clarifying this point will improve the argument.
2. Line 529: While WCET is profiled, the simulation ignores real system delays. The text admits that the safety margin “does not account for real-world hardware constraints such as memory contention, scheduling overhead, and I/O delays”. This means the 4% ETE figure may be overly optimistic. The authors should either include some modeling of these delays or explicitly note that the WCET result is idealized. At minimum, phrases like “real-time feasibility” should be qualified as “in simulation”.