Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on March 3rd, 2021 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on March 24th, 2021.
  • The first revision was submitted on April 28th, 2021 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on June 11th, 2021 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on June 24th, 2021.

Version 0.3 (accepted)

· Jun 24, 2021 · Academic Editor

Accept

The paper has been revised well, and I think it can be accepted.

Version 0.2

· May 25, 2021 · Academic Editor

Major Revisions

Please revise your manuscript based on the review comments. Importantly, the authors need to consider how to emphasize the relevance of the work to this journal's topical coverage. A final decision will be made on the revision.

Reviewer 1 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

The authors have significantly revised their manuscript in response to the review comments, shifting its main contribution from classifying popular music by cultural background to analyzing popular music.

However, I still do not agree that the analysis methods used in this study are novel.
The authors also have not demonstrated any actual or potential applications of these methods.

Therefore, the current version of the manuscript is not suitable for a computer science journal.
I recommend that the authors consider journals that publish data analysis results rather than analysis methodologies.

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Version 0.1 (original submission)

· Mar 24, 2021 · Academic Editor

Major Revisions

Please take a look at the reviewers' comments.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.  It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter.  Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]

Reviewer 1 ·

Basic reporting

The title is ambiguous and grammatically incorrect.
The authors should check term usage throughout the manuscript, e.g., "boom in music streaming."

The motivation is not convincing: "Preferences for music can be represented through music features."
User preferences are combinations of item features and users' tastes.
Please provide a better explanation without logical leaps.

I cannot find originality in this study.
The authors merely classified music into market regions using conventional ML-based classifiers.
Although the authors claim that the contributions of musical features to classification accuracy can reveal cultural differences between the market regions, they do not discuss this point adequately (beyond noting that a feature contributes more to one market region than to the others).

There have been numerous studies on classifying music according to its physical features.
Therefore, the following sentence cannot be the contribution of this study:
"We demonstrate that machine learning can reveal both the magnitude of differences in music preference across Taiwanese, Japanese, and American markets, and where these preferences are different."
Average readers already know that these things can be done with conventional ML models.

Experimental design

The experimental subjects and procedures are adequate and well described.

Validity of the findings

The findings of this paper should be the cultural differences between music markets.
However, the authors concentrate on the effectiveness of machine learning at classifying music.
The abstract and introduction should be revised accordingly.

Also, to reveal cultural differences, the contributions of musical features to classification accuracy are not enough.
Please add a more in-depth analysis to establish the originality of this study; one possible direction is sketched below.
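
As a hedged illustration only (not the authors' method): permutation importance computed on held-out data measures how much a classifier relies on each musical feature, and such per-feature profiles could then be contrasted across market pairs. The sketch below uses scikit-learn; the feature names, the `market` label column, and the synthetic data are all assumptions standing in for the authors' dataset.

```python
# Minimal sketch of permutation feature importance; illustrative only.
# Synthetic data stands in for the (hypothetical) audio-feature dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

FEATURES = ["danceability", "energy", "valence", "tempo", "acousticness"]

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({f: rng.random(n) for f in FEATURES})
df["market"] = rng.choice(["TW", "JP", "US"], size=n)  # assumed label column

X_tr, X_te, y_tr, y_te = train_test_split(
    df[FEATURES], df["market"], test_size=0.2,
    stratify=df["market"], random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# How much held-out accuracy drops when each feature is shuffled:
# a direct measure of how much the classifier relies on that feature.
result = permutation_importance(clf, X_te, y_te, n_repeats=30, random_state=0)
for name, mean, std in zip(FEATURES, result.importances_mean,
                           result.importances_std):
    print(f"{name:>13}: {mean:.3f} +/- {std:.3f}")
```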

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

The paper is generally well written and clear.
However, Figures 2 and 3 are not legible and need to be redrawn.

Reviewer 3 ·

Basic reporting

Overall, this research paper is well structured. Although it is a minor point, the authors are advised to have the article proofread by a professional to improve the word choice and the paper's narrative flow.

Experimental design

The authors define the experimental steps well. However, several things still need to be clarified (a sketch of how points 3 and 4 could be checked empirically appears after this list):
1. Whether the sample size is sufficient.
2. The justification for the features used for classification.
3. Whether the training/testing split ratio is optimal.
4. Whether the cross-validation parameters were tuned sufficiently.
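
One hedged way to address points 3 and 4 is a sensitivity check: report accuracy across several split ratios and fold counts and show the results are stable. The sketch below uses scikit-learn; the `make_classification` data is a stand-in for the authors' feature matrix and market labels, not their actual setup.

```python
# Hedged sketch for points 3 and 4: sensitivity of results to the
# train/test split ratio and to the number of CV folds. Synthetic data
# stands in for the authors' feature matrix and market labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)

X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Point 3: if accuracy is stable across ratios, the chosen split is
# unlikely to be driving the reported numbers.
for test_size in (0.1, 0.2, 0.3):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"test_size={test_size:.1f}: accuracy={acc:.3f}")

# Point 4: likewise for the number of cross-validation folds.
for k in (3, 5, 10):
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{k}-fold CV: {scores.mean():.3f} +/- {scores.std():.3f}")
```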

Validity of the findings

1. The authors are strongly encouraged to use a statistical test to compare model quality rather than simply comparing accuracy and AUC scores, for instance DeLong's method for comparing correlated ROC curves (see the sketch after this list).
2. The authors should explicitly restate the study's novelty in the conclusion: how this study differs from other studies, and what its contribution is. This would link the findings back to the research goal stated at the outset and make the importance of the study more convincing.
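
For reference, DeLong's test is not shipped with scipy or scikit-learn. Below is a minimal NumPy sketch of the fast variant (Sun & Xu, 2014) for comparing two correlated ROC AUCs evaluated on the same test set. It assumes binary labels (e.g., one market vs. the rest) and continuous scores from both models; the synthetic usage data is illustrative, not the authors' results.

```python
# Minimal sketch of the fast DeLong test (Sun & Xu, 2014) for two
# correlated ROC AUCs evaluated on the same test set. Illustrative only.
import numpy as np
from scipy.stats import norm

def compute_midrank(x):
    """Midranks of a 1-D array; tied values receive the average rank."""
    order = np.argsort(x)
    xs = x[order]
    n = len(x)
    mid = np.zeros(n)
    i = 0
    while i < n:
        j = i
        while j < n and xs[j] == xs[i]:
            j += 1
        mid[i:j] = 0.5 * (i + j - 1) + 1  # average of 1-based ranks i+1..j
        i = j
    out = np.empty(n)
    out[order] = mid
    return out

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test comparing the AUCs of two models.

    y_true: binary labels (1 = positive); scores_*: continuous scores.
    Returns ((auc_a, auc_b), z, p).
    """
    y_true = np.asarray(y_true)
    pos = y_true == 1
    neg = ~pos
    m, n = int(pos.sum()), int(neg.sum())
    aucs = np.empty(2)
    v01 = np.empty((2, m))  # structural components for positives
    v10 = np.empty((2, n))  # structural components for negatives
    for k, s in enumerate((np.asarray(scores_a), np.asarray(scores_b))):
        tz = compute_midrank(s)       # midranks over all samples
        tx = compute_midrank(s[pos])  # midranks within positives
        ty = compute_midrank(s[neg])  # midranks within negatives
        aucs[k] = (tz[pos].sum() - m * (m + 1) / 2) / (m * n)
        v01[k] = (tz[pos] - tx) / n
        v10[k] = 1.0 - (tz[neg] - ty) / m
    var = np.cov(v01) / m + np.cov(v10) / n  # 2x2 covariance of the AUCs
    se = np.sqrt(var[0, 0] + var[1, 1] - 2 * var[0, 1])
    z = (aucs[0] - aucs[1]) / se
    return (aucs[0], aucs[1]), z, 2 * norm.sf(abs(z))

# Usage on synthetic scores (model A is deliberately stronger):
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
scores_a = y + rng.normal(0, 1.0, 400)
scores_b = y + rng.normal(0, 1.6, 400)
(auc_a, auc_b), z, p = delong_test(y, scores_a, scores_b)
print(f"AUC A={auc_a:.3f}  AUC B={auc_b:.3f}  z={z:.2f}  p={p:.4f}")
```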

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.