Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on January 16th, 2025 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on April 11th, 2025.
  • The first revision was submitted on May 10th, 2025 and was reviewed by the Academic Editor.
  • A further revision was submitted on July 16th, 2025 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on July 31st, 2025.

Version 0.3 (accepted)

· Jul 31, 2025 · Academic Editor

Accept

Happy to see that you have addressed all my comments.

[# PeerJ Staff Note - this decision was reviewed and approved by Arkaitz Zubiaga, a PeerJ Section Editor covering this Section #]

Version 0.2

· Jul 4, 2025 · Academic Editor

Major Revisions

Thank you for your submission. Kindly make three further improvements: (a) Improve the literature review (as also noted by Reviewer 1). It needs to be more detailed, must include the missing references, and should be as rigorous as possible. (b) Add an explicit Discussion section. Currently, results are presented under a combined Results and Discussion heading. Go beyond reporting your results and develop a proper discussion, including comparison with the existing literature; it would also be good to include practical implications. (c) Kindly read your paper once more to address language-related issues.

**PeerJ Staff Note**: Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note**: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Version 0.1 (original submission)

· Apr 11, 2025 · Academic Editor

Minor Revisions

Please address the remaining comments from reviewers 1 and 3.

**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Reviewer 1 ·

Basic reporting

The manuscript is well-written and structured, following a logical flow from the introduction to the conclusion. However, some sections contain long, complex sentences that could be simplified for better readability.
There are minor grammatical errors and inconsistencies in phrasing. A thorough proofreading is recommended to enhance clarity and coherence.
The figures and tables are referenced in the text but are not included in the document. Ensure that all figures and tables are properly uploaded and formatted.
The literature review is comprehensive and provides a solid background on ICT solutions for non-literate populations. However, it would benefit from a critical analysis of existing gaps rather than just summarizing previous works.
The use of terminology (e.g., "non-literate consumers," "illiterate people," and "semi-literate") is inconsistent. Standardizing terminology throughout the paper will improve clarity.

Experimental design

The study follows a well-defined methodology, using the ISO 9241-11 standard for usability evaluation, which strengthens the research framework.
The justification for selecting 40 participants should be expanded. How was this number determined, and does it adequately represent the target population?
The participant demographics section is detailed, but presenting this information in a table format would improve readability.
The observational study and icon assessment procedures are well-explained, but a flowchart or diagram illustrating the process would enhance clarity.
The study mentions ethical approval, which is good practice. However, it would be beneficial to elaborate on measures taken to protect participants' privacy and ensure informed consent.
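The question of whether 40 participants is a defensible sample size can be grounded in an a-priori power calculation. The sketch below is illustrative only: it uses the standard normal-approximation formula for a two-sided test, and the effect size (Cohen's d = 0.5) and power target are hypothetical placeholders, not values taken from the manuscript.

```python
import math
from statistics import NormalDist

# Hypothetical a-priori sample-size check: participants needed to detect
# a medium effect (Cohen's d = 0.5) at alpha = 0.05 with 80% power.
# All inputs are illustrative; the authors would substitute their own
# expected effect size and power target.
alpha, power, d = 0.05, 0.80, 0.5
z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 (two-sided)
z_beta = NormalDist().inv_cdf(power)            # ~0.84
n_raw = ((z_alpha + z_beta) / d) ** 2           # normal-approximation formula
n = math.ceil(n_raw)                            # round up to whole participants
print(f"required sample size: n = {n}")         # -> 32 for these inputs
```

Under these (assumed) inputs, roughly 32 participants would suffice, which would support the study's choice of 40; reporting such a calculation in the methodology section would answer the reviewer's question directly.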

Validity of the findings

The results are systematically presented, and the use of quantitative metrics (task completion rates, time efficiency, error rates) supports the validity of the findings.
The study demonstrates that the designed e-governance solution is usable by non-literate consumers, but the findings should be discussed in comparison to similar studies for better contextualization.
The usability evaluation is robust, but additional qualitative insights from user interviews (e.g., participant experiences and challenges faced) could strengthen the interpretation of results.
The sample is geographically limited to one region. The discussion should acknowledge whether the findings can be generalized to other populations with different cultural and technological backgrounds.
The conclusion is supported by the data, but it would be beneficial to provide specific recommendations for improving the application based on user feedback.

Additional comments

The paper makes a valuable contribution to the field of ICT solutions for digital inclusion and provides a well-documented approach to e-governance accessibility for non-literate populations.
The authors should consider incorporating user feedback on interface design challenges and how future iterations of the application could be improved.
Formatting issues (such as the placement of figures and tables) should be resolved before final submission.
The discussion on future work should be expanded to outline potential enhancements, such as increasing the database of recognized fruits/vegetables and testing the app in a larger, more diverse sample.
The disclosure statement regarding the use of ChatGPT for grammar correction is transparent, but a professional language editing service might further improve the manuscript’s quality.

Annotated reviews are not available for download in order to protect the identity of reviewers who chose to remain anonymous.

Reviewer 2 ·

Basic reporting

The article addresses a critical issue of digital inclusion by proposing an e-Governance Information and Communication Technology (ICT) solution tailored for non-literate populations. This research is significant as digital governance is crucial in providing equitable access to public services, particularly for marginalized communities. The study attempts to bridge the digital divide by designing and evaluating an ICT framework that ensures accessibility and usability for non-literate users.
This research fulfils its three aims:
(1) Designing an ICT solution that facilitates access to the price list of fruits and vegetables for non-literate and semi-literate consumers.
(2) Empowering consumers to report discrepancies in fruit/vegetable prices to governmental authorities through complaints.
(3) Creating interfaces for tracking complaints and disseminating information about actions taken by the government.
This is commendable work by the researchers.
The literature references are satisfactory; data, figures, and tables are provided; and the hypotheses are sound.

Experimental design

Well-defined research problem and objectives.

Comprehensive methodology incorporating user-centered design principles.

Practical implications for policymakers and developers of e-Governance platforms.

Strong empirical backing with usability testing and feedback analysis.

The study presents a well-researched and practical approach to addressing digital inclusion for non-literate populations. Despite minor limitations, the findings and recommendations provide a strong foundation for further research and implementation in the domain of e-Governance.

Validity of the findings

The research presents insightful findings, including:

Non-literate users prefer voice-based navigation and simplified icon-based interfaces.

Language and cultural considerations are crucial in designing effective e-Governance solutions.

Digital literacy training enhances user adoption and engagement.

The proposed ICT solution improves accessibility and reduces dependency on intermediaries to access government services.

The study contributes to the field of e-governance by providing empirical evidence on the effectiveness of multimodal interfaces for non-literate users and proposing best practices for ICT development in similar contexts.

Additional comments

The following limitations were identified in the research:
1. Only 15 recognizable fruit/vegetable items were included.
2. In the future, images of all available fruits and vegetables could be added.
There is scope for future research work.

Reviewer 3 ·

Basic reporting

The article is written in plain English, making it accessible to an international audience.
The introduction provides a reasonable context for the study, highlighting the challenge of access to e-government services by uneducated populations globally, using Pakistan as a case study.
The figures are generally relevant, well-labeled, and translated into English, providing visual support for the described interfaces (e.g., Figures 1-11).
Raw data has been provided, which is in line with PeerJ's policy, promoting transparency.

===========
Criticisms and suggestions:

The literature review is rather weak and limited; it lacks both breadth and depth.
Consequently, it is not possible to deduce from it what the current work develops beyond what has been published previously. Table 1 is mentioned, but it is not sufficient on its own.
The structure of the paper is clear, but the transition between the previous studies and the design does not show the significance of the table or its impact on the design.

Terms such as “consumer” and “user” are used interchangeably throughout the manuscript (e.g., “non-literate consumers” vs. “users” in lines 636-637). Standardize on “user” for consistency with the HCI literature.


Experimental design

The main issue that struck me was the sample size and representativeness: the sample of participants (35 in Phase I and 30 in Phase II for the icon evaluation; 40 for the usability evaluation) is small and geographically restricted to a specific location in Pakistan (lines 237-247, 437-446). This limits generalizability, especially given the cultural and demographic diversity mentioned in the “Related Work” section (lines 154-171). The authors acknowledge this limitation (line 672), but the experimental design should justify why this sample size is sufficient or propose a plan to verify the results with a larger population. Ideally, a statistical sample should have more than 100 representatives.

Task selection bias: the five tasks selected for the usability assessment (lines 449-458) cover key functions but lack justification for their selection. For example, why was “accessing the built-in gallery” (task 3) prioritized over other potential interactions (e.g., re-registering a complaint)? Provide justification in lines 447-460 that links the tasks to the objectives of the study.
A second important point: since we are discussing non-literate users, how can they be asked to log in, and how can they be expected to choose between the login and admin panels?
This is hard to believe: the app should be image-based and should not require login or registration at all.

Lack of control group: The study compares the performance of uneducated users to the first author's “optimal values” (line 465), but lacks a control group of educated users, despite the claim of comparison in the abstract (lines 27-28). This undermines the claim that the differences with educated users “were not significant”. Either include an educated control group or amend the abstract to reflect the actual comparison.

Validity of the findings

The results are sound and statistically grounded, but they need further review. For example:
- The lack of deeper statistical analysis impairs precision (e.g., the SUS score of 68.0625 is reported without confidence intervals).
- There are inconsistencies (excessive click counts; Appendix B does not contain the SUS questions).
- There is overgeneralization without evidence of scalability.
The results are partially acceptable, but they need deeper analysis and clarification to be fully reliable.
The appendices need further verification, especially Appendix B.
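The missing confidence interval the reviewer asks for is straightforward to compute from the raw per-participant SUS scores. The sketch below assumes such raw scores are available; the sixteen values here are hypothetical placeholders (the study's actual data would be substituted), and the t-critical value is hardcoded to keep the example dependency-free.

```python
import math
import statistics

# Hypothetical per-participant SUS scores (0-100 scale); substitute the
# study's raw data. The manuscript reports a mean of 68.0625.
sus_scores = [72.5, 65.0, 70.0, 60.0, 75.0, 67.5, 62.5, 70.0,
              72.5, 65.0, 57.5, 77.5, 70.0, 62.5, 67.5, 72.5]

n = len(sus_scores)
mean = statistics.mean(sus_scores)
se = statistics.stdev(sus_scores) / math.sqrt(n)

# 95% CI via the t-distribution; t-critical for df = 15 is ~2.131
# (hardcoded here rather than pulling in SciPy).
t_crit = 2.131
ci_low, ci_high = mean - t_crit * se, mean + t_crit * se
print(f"SUS mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

Reporting the interval alongside the point estimate would let readers judge whether the score reliably clears the conventional SUS benchmark of 68.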

Additional comments


I would like to offer some general observations that may help improve the manuscript and enhance its clarity and credibility, beyond the three main points previously reviewed:

1. Standardization of terminology and titles:
- There are minor variations in the title of the paper across documents (e.g., “Design and Evaluation...” in the Ethical Declaration versus “...A Case Study of Pakistan” in the Certificate of Compliance). It is recommended to standardize the title to avoid confusion and ensure consistency.

2. Documentation of Appendix B:
- The text indicates that Appendix B contains the SUS questions translated into Urdu, but it only contains a list of icons. This inconsistency may confuse readers. If the appendix is intended to document only the icon evaluation, please amend the text to reflect this (e.g., “Icon names have been translated...” instead of “questions”), or add the translated questions if they are inadvertently missing.

3. Explain the data collection methodology:
- The data (clicks, time, errors, SUS) is rich, but how it was collected (e.g., the number of attempts allowed, the instructions given to non-literate users) is not detailed. Adding a brief paragraph in the methodology section about the test setup and participant training would enhance transparency.
4. Link icons to tasks:
- Appendix B lists 14 icons, while the test includes only 5 tasks. It is not explained how these icons are used in the tasks (e.g., which icon is used to track complaints?). I suggest adding a table or description linking icons to tasks to clarify the context.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.