All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The reviewers are satisfied with the recent modifications and therefore I can recommend this article for acceptance.
[# PeerJ Staff Note - this decision was reviewed and approved by Shawn Gomez, a PeerJ Section Editor covering this Section #]
Thank you for addressing my comments. The paper is now in good shape, and I have no further comments.
The experimental design is well-explained and understandable to the readers.
The paper contains sufficient results about key findings and data validation.
All comments have been responded to, and the paper can be considered for acceptance.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.
**Language Note:** PeerJ staff have identified that the English language needs to be improved. When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at [email protected] for pricing (be sure to provide your manuscript number and title). – PeerJ Staff
This paper introduces a deep learning-based cursive text prediction model integrated into a mobile application, built from three stages: data collection, image preprocessing, and prediction via transfer learning from CNNs such as ResNet and DenseNet. The authors evaluated their model on historical cursive datasets; the best-performing model, DenseNet-201, achieved high performance metrics, and the application enables accurate CC input and recognition. Overall, the paper is well-written; however, I found some gaps during the review process (a minimal sketch of the kind of transfer-learning pipeline described is given after this list of comments):
1. There are many cursive scripts globally, such as English, Urdu, and Pashto. It is highly recommended that the authors state in the Abstract which cursive language they targeted in their research; this will improve the readability of the article.
2. It is recommended that the authors state the main contributions of their work in bulleted form in the Introduction section.
3. The authors cite many references in the Introduction and Literature Review sections, but there is no clear comparison between their work and the cited literature. Please provide a clear comparison so that readers can see how your work differs from and extends these studies.
4. It would be helpful to briefly describe your experimental design and dataset in the Introduction section, and to give concise information about your experimental results there as well.
5. Significant work has already been reported on cursive language recognition (https://doi.org/10.1155/2021/5558373), and those models work well even on small datasets, so why did you choose ResNet and other dense models?
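For context, here is a minimal sketch of the kind of transfer-learning pipeline the paper describes. PyTorch/torchvision is assumed for illustration only; the framework, class count, and hyper-parameters below are placeholders, not the authors' actual settings:

```python
import torch
import torch.nn as nn
from torchvision import models

def build_densenet201(num_classes):
    # Load an ImageNet-pretrained DenseNet-201 backbone (torchvision >= 0.13 API).
    model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    # Freeze the pretrained feature extractor so only the new head is trained.
    for p in model.features.parameters():
        p.requires_grad = False
    # Replace the 1000-way ImageNet classifier with one sized for the CC classes.
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_densenet201(num_classes=100)  # 100 is a placeholder class count
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Whether the backbone was frozen, partially fine-tuned, or fully fine-tuned is exactly the kind of detail the Methods section should state explicitly.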
The experimental design is very straightforward, but I have a few concerns about this section:
1. In the data accumulation stage, how many participants contributed?
(a) What was their gender distribution?
(b) What were their ages and education levels?
2. I suggest avoiding detailed explanations of VGG16, ResNet, DenseNet, and other architectures; these are very well known to the research community, and such descriptions make the paper read more like a review article. It would be better to focus on the simulation results.
3. The visibility of Figure 1 and Figure 5 should be improved; they are too blurry.
4. Figure 5 shows considerable overfitting between the training and validation curves, yet the authors report performance values above 90% for their simulations. Can the authors explain how these values were achieved despite such pronounced overfitting?
5. Figure 1 should be redesigned to improve understandability. For reference, see http://dx.doi.org/10.32604/cmc.2021.015054, which presents a step-wise diagram for cursive text recognition.
6. The script that the authors chose to explore contains diacritics (dots). How did the authors deal with this key problem during image preprocessing?
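To make point 6 concrete, here is a minimal preprocessing sketch (OpenCV; the filename and area threshold are illustrative, and this is not necessarily the authors' pipeline). A common noise-cleaning step, removing small connected components, will silently erase diacritic dots unless the size threshold is tuned below the smallest dot:

```python
import cv2
import numpy as np

img = cv2.imread("sample_cc.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Otsu binarisation, inverted so ink strokes become white foreground.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Typical speckle-noise filter: keep only connected components above a minimum area.
# If min_area exceeds the area of a diacritic dot, the dot is discarded as "noise".
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
min_area = 15  # illustrative value; must stay below the smallest diacritic area
cleaned = np.zeros_like(binary)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= min_area:
        cleaned[labels == i] = 255
```

The manuscript should state whether any such filtering or morphological operation is applied and how diacritics survive it.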
The authors provide sufficient information for validating their work; however, I am confused about the following:
1. There is considerable overfitting in the training and validation curves, yet the authors report high performance values. I do not see how this is possible.
(a) Perhaps the test set was mixed into the training set.
(b) Or the test set was exposed during the training process.
(c) I ask the authors to investigate this key problem; a minimal split-hygiene sketch is given after this list.
2. The authors provided training and validation charts but did not provide training versus test charts. I strongly recommend providing these as well, so the loss gap between the two sets can be seen.
3. Also, report results on the test set for different epoch values.
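Regarding points 1 and 2, a minimal split-hygiene and loss-tracking sketch follows (Python; the variable names, helper functions, and history dictionary are placeholders, since the authors' data organisation is not visible from this review):

```python
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

def split_without_leakage(paths, labels, test_size=0.2, seed=42):
    """Stratified split plus an explicit check that no image appears in both sets."""
    train_p, test_p, train_y, test_y = train_test_split(
        paths, labels, test_size=test_size, stratify=labels, random_state=seed
    )
    assert set(train_p).isdisjoint(test_p), "train/test leakage detected"
    return train_p, test_p, train_y, test_y

def plot_train_vs_test(history):
    """history: dict with per-epoch 'train_loss' and 'test_loss' lists (placeholder)."""
    plt.plot(history["train_loss"], label="train")
    plt.plot(history["test_loss"], label="test")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()
```

A persistent gap between the two loss curves across epochs would confirm the overfitting; near-identical curves alongside near-identical accuracies would instead point to leakage.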
I would highly recommend that the authors work on this section.
I did not find a limitations section in the paper explaining the key challenges the authors faced during this research.
Abstract
The abstract is generally well-structured and clearly outlines the problem, methodology, and results. It appropriately summarizes the three-stage pipeline (data collection, preprocessing, prediction), includes the main result (DenseNet-201 achieving high accuracy), and mentions the mobile app implementation. However, it could be improved by stating the size of the dataset, the number of classes, and the novelty compared to previous work for added clarity.
Introduction
The introduction provides a good background on cursive handwriting recognition and its historical importance (especially Korean CCs). It transitions into the use of CNNs and sets the motivation well. However, I suggest providing a sharper definition of the research gap. The need for mobile-based solutions and issues in generalizing across cursive styles could be more explicitly emphasized.
Literature Review
The related works section references several important studies and networks. However, there is limited critical discussion comparing these studies to the current work in terms of limitations or performance gaps. Clarify why DenseNet-201 is more suitable than other models in this context.
1. The methods section is well-structured.
2. The authors dropped classes with insufficient data. While this addresses the imbalance, it may bias the model toward frequent classes, so please justify the threshold used and discuss alternative methods such as synthetic oversampling (one such re-balancing alternative is sketched after this list).
3. There is no clear statement of whether the test set contains unseen classes or instances with similar forms; this affects generalizability.
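On point 2, one alternative to dropping under-represented classes is to re-balance the sampling instead. A minimal sketch (PyTorch assumed; the integer label encoding is a placeholder):

```python
from collections import Counter
from torch.utils.data import WeightedRandomSampler

def make_balanced_sampler(labels):
    """labels: one integer class id per training image (placeholder encoding)."""
    counts = Counter(labels)
    # Draw each sample with probability inversely proportional to its class size,
    # so rare classes appear as often as frequent ones instead of being discarded.
    weights = [1.0 / counts[y] for y in labels]
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
```

The sampler would be passed to the training DataLoader via its sampler argument; augmentation-based oversampling is another option worth discussing alongside the chosen threshold.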
1. Consider including confusion matrices or examples of common misclassifications to identify limitations in the model’s recognition ability (a minimal error-analysis sketch is given after this list).
2. While DenseNet-201 is reported as superior, no deep analysis is offered as to why this architecture worked better (e.g., feature reuse, gradient flow).
3. The mobile app is introduced, but no usability study, latency measurements, or real-world performance stats are reported. Only a few screenshots and single predictions are shown.
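For point 1, a sketch of the kind of error analysis that would help (scikit-learn; y_true and y_pred stand for the held-out test labels and the model's predictions, which are not available to this reviewer):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def report_confusions(y_true, y_pred, top_k=10):
    """Plot the confusion matrix and list the most frequently confused class pairs."""
    cm = confusion_matrix(y_true, y_pred)
    ConfusionMatrixDisplay(cm).plot(xticks_rotation="vertical")
    plt.title("Per-class confusion on the held-out test set")
    off_diag = cm.copy()
    np.fill_diagonal(off_diag, 0)  # ignore correct predictions
    idx = np.argsort(off_diag, axis=None)[::-1][:top_k]
    pairs = np.column_stack(np.unravel_index(idx, cm.shape))
    print("Most confused (true, predicted) pairs:", pairs.tolist())
    plt.show()
```

Reporting the top confused pairs would show whether errors concentrate on visually similar cursive forms or diacritic-only distinctions.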
In the Discussion Section: The comparison with prior works is primarily metric-based. A deeper discussion on why your model performed better and what architectural/design choices contributed to this would strengthen the paper.