All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Thank you for your valuable contribution to oncology.
[# PeerJ Staff Note - this decision was reviewed and approved by Vladimir Uversky, a PeerJ Section Editor covering this Section #]
1. The author ensured consistent terminology and improved reproducibility by providing detailed configuration and parameter descriptions.
2. The author expanded the Introduction to include challenges in AI-driven drug discovery, creating a more balanced narrative.
1. The author added quantitative results and clearer structural comparisons to strengthen the Abstract and Results sections.
1. The author clarified the rationale behind the use of REINVENT4, Chemprop, and AMBER99SB, with adequate justification for each methodological choice.
2. The author also enhanced the discussion of novelty, diversity analysis, docking validation, and ADMET considerations.
1. The author added a dedicated Limitations and Future Directions section, improving the manuscript’s transparency and translational perspective.
Please address the criticisms and comments thoroughly before resubmitting your manuscript.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
Comment 1: The manuscript presents a comprehensive AI-driven approach for BRAF inhibitor design; however, the abstract would benefit from more quantitative results to enhance clarity for the reader.
Comment 2: The manuscript would benefit from a clearer explanation of how the generated inhibitors compare structurally to existing BRAF inhibitors, highlighting novelty beyond predicted activity.
Comment 3: The integration of Chemprop and reinforcement learning is commendable, but details regarding model validation and performance metrics (e.g., RMSE, R²) should be added to assess model robustness.
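To make Comment 3 concrete, a minimal sketch of the kind of held-out validation report meant here, assuming the authors have a test split of experimental and predicted pIC50 values (the arrays below are hypothetical placeholders, not the paper's data):

```python
# Hypothetical held-out validation of a pIC50 regressor (values are placeholders).
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([7.5, 8.1, 6.9, 7.8, 8.4])  # experimental pIC50 (placeholder)
y_pred = np.array([7.3, 8.4, 7.1, 7.6, 8.0])  # Chemprop predictions (placeholder)

rmse = mean_squared_error(y_true, y_pred) ** 0.5  # root-mean-square error
r2 = r2_score(y_true, y_pred)                     # coefficient of determination
print(f"RMSE = {rmse:.2f} pIC50 units, R^2 = {r2:.2f}")
```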
Comment 4: While the docking and MD simulations are thorough, adding a brief comparison with experimental binding affinities would improve the context for computational results.
Comment 5: The PCA analysis provides valuable insights into chemical space exploration, yet the manuscript could be strengthened by discussing the potential implications of reduced internal diversity.
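One way to quantify the "reduced internal diversity" point in Comment 5 is mean pairwise Tanimoto similarity over Morgan fingerprints, with a PCA projection for the chemical-space plot; a minimal sketch (the SMILES are placeholders, not molecules from the paper):

```python
# Internal diversity and 2-D PCA of Morgan fingerprints (placeholder SMILES).
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.decomposition import PCA

smiles = ["CCOc1ccccc1", "c1ccc2ccccc2c1", "CC(=O)Nc1ccc(O)cc1"]
fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
       for s in smiles]

# Internal diversity = 1 - mean pairwise Tanimoto similarity.
sims = [DataStructs.TanimotoSimilarity(fps[i], fps[j])
        for i in range(len(fps)) for j in range(i + 1, len(fps))]
print("internal diversity:", 1.0 - float(np.mean(sims)))

# PCA projection of the bit vectors for a chemical-space plot.
X = []
for fp in fps:
    arr = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    X.append(arr)
coords = PCA(n_components=2).fit_transform(np.array(X))
```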
Comment 6: The conclusion is too brief; it should be expanded to highlight the study's significance. A separate discussion with clear limitations would further improve the manuscript.
Comment 7: Please ensure consistent use of terminology throughout the manuscript (e.g., "BRAF" vs. "B-Raf") to avoid confusion.
Comment 8: The manuscript demonstrates novelty through low structural similarity to existing inhibitors; highlighting potential risks of off-target effects or ADMET concerns would make the study more robust.
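For Comment 8, a cheap first pass at the off-target/ADMET point is structural-alert screening; a sketch using RDKit's FilterCatalog with PAINS and Brenk filters (the SMILES is a placeholder candidate):

```python
# Flag structural alerts (PAINS, Brenk) as a proxy for promiscuity/ADMET risk.
from rdkit import Chem
from rdkit.Chem import FilterCatalog

params = FilterCatalog.FilterCatalogParams()
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.PAINS)
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.BRENK)
catalog = FilterCatalog.FilterCatalog(params)

mol = Chem.MolFromSmiles("O=C(O)c1ccccc1O")  # hypothetical candidate
if catalog.HasMatch(mol):
    for entry in catalog.GetMatches(mol):
        print("alert:", entry.GetDescription())
```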
Summary: a solid computational pipeline that runs REINVENT4 + Chemprop against BRAF V600E, filters with simple property rules, docks to one crystal structure, and runs MD with MM/PBSA. The story is coherent and reproducible. The biggest concern is selectivity: I could not find any actual selectivity analysis across WT BRAF or the wider kinome, even though the text implies "selectivity" in places, especially in the Introduction, where it is framed as the problem the paper sets out to solve. So I land on "good and useful for showing the workflow," but I am not yet convinced about real-world novelty, selectivity, or robustness once the RL objectives for de novo generation become more complex.
My conclusion remains somewhat undecided. I like this paper for what it is: a clear demonstration that modern generative RL tied to a learned potency scorer can produce BRAF V600E candidates with improved property profiles, reasonable docking results, and stable MD behavior in a single-structure setup. It is publishable once the claims are tempered and reproducibility is analyzed and reported in more detail, because the paper does not introduce a novel workflow; the pipeline is relatively standard by now. The biggest value to the scientific community would be a clear report of the challenges encountered in setting up the workflow and how they were solved. As evidence for selective and truly novel inhibitors, however, the study is not yet convincing.
- Prior model tuned with RL to push drug-likeness via QED, alerts, and stereocenters.
- Target focus via transfer learning (TL) on 1,671 BRAF V600E actives from ChEMBL with pIC50 > 7.2, then integration of a Chemprop pIC50 predictor back into the RL scoring.
- Top candidates selected by Chemprop and QED thresholds, docked to PDB 8C7X using Vina after removing the co-crystal ligand and waters, followed by duplicate 200 ns MD runs and MM/PBSA (a sketch of the selection step follows this list).
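A minimal sketch of that selection step, assuming a list of generated SMILES paired with Chemprop predictions; the pIC50 and QED cutoffs below are illustrative placeholders, not the paper's actual thresholds:

```python
# Filter generated candidates by predicted pIC50 and QED (placeholder data/cutoffs).
from rdkit import Chem
from rdkit.Chem import QED

candidates = [("CCOc1ccccc1", 7.4), ("c1ccc2ccccc2c1", 6.1)]  # (smiles, pred_pic50)
selected = []
for smi, pic50 in candidates:
    mol = Chem.MolFromSmiles(smi)
    if mol is not None and pic50 >= 7.0 and QED.qed(mol) >= 0.6:
        selected.append(smi)
print(selected)
```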
Strengths
- Clear, reproducible toolchain: REINVENT4 with TL + RL, Chemprop in the loop, RDKit for properties, Vina for docking, GROMACS for MD, MM/PBSA for affinity.
- Some novelty vs approved BRAF drugs: Tanimoto similarities of 0.26–0.45 to five approved inhibitors suggest the candidates are not trivial analogs of marketed scaffolds (a check of this kind is sketched after this list).
- Drug-like property profiles generally move in the right direction.
- Diversity is not ignored, PCA suggests exploration into a new region, not just copying the prior.
- MD performed in duplicate; it is good to see two independent velocity seeds and a discussion of replicate differences.
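The similarity check mentioned above could be reported along these lines, assuming Morgan fingerprints (all SMILES are placeholders, not the actual approved inhibitors):

```python
# Max Tanimoto similarity of a candidate to a set of approved inhibitors.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

approved = ["CC(C)c1ccccc1", "c1ccc2ccccc2c1"]  # placeholder SMILES
candidate = "CCOc1ccc(N)cc1"                    # placeholder SMILES

fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(candidate), 2, nBits=2048)
ref_fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in approved]
print("max similarity to approved set:",
      max(DataStructs.TanimotoSimilarity(fp, r) for r in ref_fps))
```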
Weaknesses
- No real selectivity analysis: The pipeline evaluates only BRAF V600E with one structure (8C7X).
- The docking setup is minimal, with no evidence that it was validated for robustness. It is also important to show docking or other computational validation that speaks to selectivity (even if this is not carried forward to lab testing); a redocking control is sketched after this list.
- Novelty vs the training set is not established: no scaffold comparison to the training inhibitors is shown, so the generated molecules may be close analogs of existing ChEMBL compounds rather than genuinely novel.
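A standard sanity check for the docking setup is to re-dock the co-crystal ligand and report the symmetry-aware RMSD to the crystal pose; a minimal sketch with RDKit (file names are placeholders):

```python
# Redocking control: RMSD between the re-docked pose and the crystal pose.
from rdkit import Chem
from rdkit.Chem import rdMolAlign

ref = Chem.MolFromMolFile("xtal_ligand.sdf")     # crystal pose (placeholder path)
pose = Chem.MolFromMolFile("redocked_pose.sdf")  # top Vina pose (placeholder path)
rmsd = rdMolAlign.CalcRMS(pose, ref)             # symmetry-aware, no re-alignment
print(f"redocking RMSD: {rmsd:.2f} Å (< 2 Å is the usual success criterion)")
```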
Questions for the authors
- Was any docking or ML scoring performed against WT BRAF, ARAF, CRAF, or a kinome subset?
- What exact weights and transforms were used in the RL reward design?
- Can the authors give details on the diversity-filter parameters and any signs of mode collapse?
- Docking controls: were the co-crystal ligand and known inhibitors re-docked and compared against decoys?
- Novelty vs training: show scaffold distances to the training set (a sketch follows this list).
- ADMET interpretation: how will liabilities be handled in the RL reward?
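For the scaffold-distance point above, a minimal sketch using Bemis-Murcko scaffolds (all SMILES are placeholders for the training and generated sets):

```python
# Compare Bemis-Murcko scaffolds of generated molecules against training actives.
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

train = ["CCOc1ccccc1", "CC(=O)Nc1ccc(O)cc1"]  # training-set SMILES (placeholder)
generated = ["CCOc1ccc(F)cc1"]                 # generated SMILES (placeholder)

def scaffold(smi):
    return Chem.MolToSmiles(MurckoScaffold.GetScaffoldForMol(Chem.MolFromSmiles(smi)))

train_scaffolds = {scaffold(s) for s in train}
novel = [s for s in generated if scaffold(s) not in train_scaffolds]
print(f"{len(novel)}/{len(generated)} generated molecules have unseen scaffolds")
```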
1. The shift from general AI applications to the specific REINVENT4 framework feels abrupt. Explaining why REINVENT4 is especially well-suited for BRAF inhibitor design, compared with the other methods mentioned earlier, would make the Introduction more informative and supportive.
2. In the Introduction, the authors emphasize AI's success stories but omit the challenges and failures of AI-driven drug discovery (e.g., reproducibility, interpretability, overfitting, data biases).
3. The rationale for using Chemprop for pIC50 prediction should be better explained. Was it benchmarked against other QSAR models, or chosen for convenience?
4. The choice of the AMBER99SB force field is not justified. More recent force fields (e.g., AMBER14SB, CHARMM36m) may provide improved accuracy for protein-ligand systems.
5. Details about ligand parameterization in ACPYPE are brief. Did the authors use GAFF or GAFF2? Were charges derived via AM1-BCC, RESP, or another scheme?
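For point 5, the parameterization can be stated unambiguously by quoting the exact ACPYPE invocation; a sketch assuming GAFF2 atom types and AM1-BCC charges (the input file name is a placeholder):

```python
# Invoke ACPYPE with GAFF2 atom types and AM1-BCC charges (placeholder input file).
import subprocess

subprocess.run(
    ["acpype", "-i", "ligand.mol2",  # prepared ligand structure (placeholder)
     "-a", "gaff2",                  # atom types: GAFF2
     "-c", "bcc"],                   # partial charges: AM1-BCC
    check=True,
)
```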
6. The docking protocol (AutoDock Vina) lacks mention of exhaustiveness and the number of docking runs, which strongly affect results.
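For point 6, reporting the run parameters explicitly would resolve this; a sketch with the AutoDock Vina Python bindings (vina >= 1.2), where the file names, box center, and settings are placeholders:

```python
# Docking with explicit exhaustiveness and pose count (all values are placeholders).
from vina import Vina

v = Vina(sf_name="vina")
v.set_receptor("braf_8c7x.pdbqt")              # prepared receptor (placeholder)
v.set_ligand_from_file("candidate.pdbqt")      # prepared ligand (placeholder)
v.compute_vina_maps(center=[10.0, 12.0, 8.0],  # ATP-site box center (placeholder)
                    box_size=[20.0, 20.0, 20.0])
v.dock(exhaustiveness=32, n_poses=20)          # the settings worth reporting
v.write_poses("candidate_docked.pdbqt", n_poses=5, overwrite=True)
```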
7. For MD, the duration is reported (200 ns), but it is unclear whether replicates beyond the two velocity seeds were performed (which would strengthen robustness).
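For point 7, independent replicates in GROMACS come down to distinct velocity-generation seeds; a sketch that writes per-replicate .mdp fragments (seed values and file layout are hypothetical):

```python
# Write per-replicate GROMACS .mdp fragments with distinct velocity seeds.
seeds = [1001, 2002, 3003]  # one seed per independent replicate (placeholder values)
for i, seed in enumerate(seeds, start=1):
    with open(f"md_rep{i}.mdp", "w") as fh:
        fh.write("gen_vel  = yes\n")      # draw fresh Maxwell-Boltzmann velocities
        fh.write(f"gen_seed = {seed}\n")  # distinct seed -> independent trajectory
```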
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.