All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Thank you for your contribution to PeerJ Computer Science and for addressing all the suggestions. We are satisfied with the revised version of your manuscript and it is now ready to be accepted. Congratulations!
[# PeerJ Staff Note - this decision was reviewed and approved by Shawn Gomez, a PeerJ Section Editor covering this Section #]
Thank you for submitting a revised version of your manuscript to PeerJ Computer Science. The reviewers' concerns have been satisfactorily addressed. I only suggest a few minor revisions as indicated below:
- Figure 8 caption: Duplicate label (d): "(d) 12 hours and (d) 24 hours" -> should be "(d) 12 hours and (e) 24 hours."
- Line 460: "13 hours" appears instead of "12 hours" when listing prediction horizons.
- Line 477: "RMSE and MAPE" -> should be "RMSE and MAE".
- Results section: When reporting statistics, ensure consistent decimal precision (i.e., use the same number of decimal places).
- Ensure consistent hyphenation: use "multi-step-ahead" (not "multistep-ahead") consistently.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at [email protected] for pricing (be sure to provide your manuscript number and title). – PeerJ Staff
The problem statement of this paper is clear and concise. The authors have targeted a relevant subject for the journal.
I think the authors should bring something novel in terms of analysis or model structure.
I would suggest that the authors compare the model's performance, using the same data (inputs and outputs), against the existing literature on the temporal bike demand prediction problem. Rather than merely mentioning prior work in the literature review, it is necessary to reproduce the methods of the existing literature (deep-learning approaches, particularly those published after 2020) in order to establish the novelty of the research. Apart from this point, my other comments are all minor. The authors can also devise alternative ways of presenting the results that reinforce the significance of the study.
All comments have been answered.
All comments have been answered.
All comments have been answered.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
The topic of this study is important and straightforward. However, because it is straightforward, there is a vast body of literature closely related to this research. I would suggest adding an additional output to differentiate the paper from existing work, because modifying the model structure would be difficult.
No comment
The finding is clear: the model improves performance. However, because there are already many papers on this topic, it is very hard to add further value in this field. Several existing studies include multi-step prediction and time-related features, and I am not sure how this work can be differentiated from the studies below.
Zhou, Xian, et al. "Multi-level attention networks for multi-step citywide passenger demands prediction." IEEE Transactions on Knowledge and Data Engineering 33.5 (2019): 2096-2108.
Leem, Subeen, et al. "Enhancing multistep-ahead bike-sharing demand prediction with a two-stage online learning-based time-series model: insight from Seoul." The Journal of Supercomputing 80.3 (2024): 4049-4082.
Zhou, Xian, et al. "Predicting multi-step citywide passenger demands using attention-based neural networks." Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. 2018.
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.
I suggest adding a different outcome (such as multi-task learning) that may enhance the performance of the current model and provide a novel contribution for this study.
This manuscript demonstrates a competent application of machine learning to urban mobility data. However, its novelty is limited: MLPs and timestamp features have been well studied. The lack of deeper technical contributions and of generalizability testing makes the paper unsuitable for publication without major revision. Some weaknesses of the paper are listed below:
- The literature review, while extensive, leans heavily on existing summaries without synthesizing clear methodological gaps.
- The choice of a Multilayer Perceptron (MLP) over more advanced temporal models (e.g., LSTM, GRU, TCN) is not adequately justified.
- No detailed explanation is provided for hyperparameter selection, model architecture (number of layers, neurons), or training strategy.
- Evaluation is limited to a single dataset (the Seoul bike-sharing dataset); generalizability is not demonstrated.
- Error analysis is missing; there is no insight into when and why the model performs poorly.
- Cross-validation is not used; only a single train/test split is applied.
- The manuscript contains grammatical and stylistic issues that affect readability in some sections.
- Some figures lack sufficient interpretive commentary (e.g., Figures 3 and 8).
- Dataset Temporal Coverage Not Discussed and No External Validation: The dataset spans only one year and covers only the Seoul system; potential effects of seasonality, holidays, or external events are not addressed.
- Limited Evaluation Metrics: Only RMSE, MAE, and R² are used; no additional error metrics or confidence intervals are provided for deeper evaluation.
- Simplistic Data Imputation: Missing values are filled with the mean or median without justification or comparison to more advanced imputation techniques.
- Model Choice Justification Missing: The selection of an MLP is not adequately justified, especially given the availability of time-series-specific models such as LSTM or GRU, which are better suited to sequential data.
- Hyperparameter Details Lacking: Key design choices such as the number of layers, activation functions, batch size, learning rate, and optimizer are not explained or tuned systematically.
- Data Splitting Strategy Is Limited: Only a hold-out validation set is used; there is no k-fold cross-validation or time-series split to assess model robustness.
- Lack of Statistical Significance Testing: The manuscript reports performance metrics (RMSE, MAE, R²) but does not include statistical tests (e.g., t-tests, ANOVA) to confirm the significance of differences between models.
- No Error Distribution or Residual Analysis: The paper does not analyze prediction errors in detail (e.g., through residual plots or distribution curves) to identify patterns or outliers.
- Insufficient Temporal Analysis: There is no evaluation of model performance across different time segments (e.g., peak vs. off-peak hours, weekdays vs. weekends, or seasons), which is critical in time-series forecasting.
- No Robustness Checks: The model is not tested for sensitivity to parameter changes, data noise, or partial feature removal.
- No Baseline Comparison with Classical Time-Series Techniques: The study compares the MLP to other machine-learning models but omits classical benchmarks such as ARIMA, Prophet, or exponential smoothing.
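For context, the chronological splitting and classical-baseline checks requested above can be sketched in a few lines. This is a minimal illustration in plain Python, not the authors' code: the demand series is fabricated, the expanding-window split is roughly what scikit-learn's `TimeSeriesSplit` provides, and a persistence baseline stands in for full ARIMA/Prophet benchmarks.

```python
import math

# Illustrative sketch only (fabricated hourly demand, not the Seoul dataset).

def time_series_splits(n_samples, n_splits=3):
    """Expanding-window splits: each test fold lies strictly after its
    training data, unlike a shuffled k-fold split."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, min((k + 1) * fold, n_samples)))
        yield train, test

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def naive_forecast(series, horizon=1):
    """Persistence baseline: predict each value as the value `horizon` steps earlier."""
    return series[:-horizon]

demand = [10, 12, 15, 14, 13, 16, 18, 17]

# Chronological splits: training indices always precede test indices.
for train, test in time_series_splits(len(demand), n_splits=3):
    assert max(train) < min(test)

# Baseline error: any proposed model should beat this number.
print(round(rmse(demand[1:], naive_forecast(demand, horizon=1)), 3))
```

A model evaluated only on folds like these, and reported alongside a baseline RMSE of this kind, would address the splitting and benchmarking concerns in one table.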
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.