Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on May 2nd, 2019 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on June 27th, 2019.
  • The first revision was submitted on October 18th, 2019 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on November 8th, 2019.

Version 0.2 (accepted)

· Nov 8, 2019 · Academic Editor

Accept

The new version has received favorable comments from the reviewers, and I believe that it is now in a condition to be accepted.

·

Basic reporting

My concerns have been addressed. I'm glad this paper is moving toward publication!

Experimental design

My concerns have been addressed.

Validity of the findings

My concerns have been addressed.

Additional comments

My concerns are addressed. I congratulate the authors on a job well done.

Version 0.1 (original submission)

· Jun 27, 2019 · Academic Editor

Minor Revisions

Please follow the recommendations of both reviewers, and prepare a response letter explaining how you addressed them.

Reviewer 1 ·

Basic reporting

On English:
Some typos in the abstract: “we us all other dissonances” -> “we use all other dissonances”, and some shorthand that could be made clearer. Abstract ll. 14-16 could be broken into multiple sentences; and “with each other and *with* manually programmed rules”. 16-17: “We chose genetic programming for our machine learning technique”. I’m not sure what “as positive examples” means in l. 23 – “as examples used to train the algorithm in a particular dissonance category”? This seems redundant. The next lines 24-25 seem more important: “We also trained the algorithm in many types of negative examples such as…” l. 27: “category *of* suspension”.

The cumbersome abstract almost made me decline to review the paper; I hope others don’t decide not to read it for the same reason.

The writing in the introduction and later sections is stronger and better edited. But there are still errors that make reading hard; careful rereading by a native speaker is crucial:
35: methods OF algorithmic composition
144-5: “the algorithm better leaves” -- “an algorithm that leaves a few complex dissonance categories undetected is stronger than one that wrongly marks notes…” or something like that?
181: “To be at the save side” – “To be on the safe side”

On Literature references, sufficient field background/context:
The relevant literature is addressed and cited well. However, the original encoders of the Palestrina corpus should be mentioned as well as the editions from which they are encoded. (There are errors in encoding and in parsing that are known to people involved in the original project and the music21 project; I doubt they are sufficient to affect the end results however)

On Professional article structure, figures. Raw data shared.:
Raw data is shared -- in abundance. The version of the software used should be noted. Some issues with figures (discussed below), but in general very professional.

Figure 1 includes, in addition to “p.t.”, “n.t.”, and “s.”, also “p.t.s”. This could be read as “p.t.” followed by “s.”, but the presumably correct reading as multiple passing tones is not included.

On Clear definitions of all terms, theorems and proofs:
Very clear in general.

A pass, but some revision needed

Experimental design

The research question is well defined, and how this work on genetic programming will advance the field of computational music research is clearly stated and well argued (well done!).

There are no ethical concerns raised by the paper.

38–42 are key and important parts of the paper (and might be highlighted in the abstract): rules that can be studied by humans are important for acceptance of this work in the discipline of music theory (as an interdisciplinary paper should aim for). Cuthbert/Ariza/Friedland 2011 p. 391 (http://ismir2011.ismir.net/papers/PS3-6.pdf), among others, discuss the importance of understanding how a decision was made in the context of classification of musical features. Very important and good point.

148 It is unclear what the custom algorithm (after chordification) does in cases where the voice leading of the chordified version is unclear.

153-159: this seems a VERY smart way of working. Applause.

The description of handling of suspensions (160-162) is still unclear to me though. After chordification, the dissonant pitches would already be tied to a previous pitch.

163-166: There are some cases in the Palestrina corpus of incorrect clefs (namely treble8vb as treble, causing fifths to become fourths). The authors are cautioned to check their corpus manually against the original encodings; all of which are on IMSLP. The authors should also be sure to be on the latest release of music21, as prior versions did not necessarily parse properly the (partially incorrectly encoded) krn files where the number of voices in Agnus II is fewer than in Agnus I, but the original (usually five-voice) structure is restored in Agnus III. This can make for a large number of dissonances.

211: were melodic intervals given in semitones, as diatonic steps, or (perhaps best) in Hewlett’s base-40 interval encoding? This information is needed to replicate. (I’m guessing from Table 1 later that it’s diatonic steps)
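To make concrete why this matters for replication, here is a small sketch (the note-name helpers are hypothetical, not from the paper) showing that diatonic-step and semitone encodings of the same melodic motion are not interchangeable:

```python
# Diatonic steps count letter names; semitones count chromatic distance.
# An augmented second and a minor third are both 3 semitones but differ
# diatonically, so a rule learned on one encoding may not transfer.
LETTER_STEPS = {'C': 0, 'D': 1, 'E': 2, 'F': 3, 'G': 4, 'A': 5, 'B': 6}
LETTER_SEMITONES = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}
ACCIDENTALS = {'': 0, '#': 1, 'b': -1}

def diatonic_steps(n1, n2):
    """Signed interval in diatonic steps (letter names only)."""
    return LETTER_STEPS[n2[0]] - LETTER_STEPS[n1[0]]

def semitones(n1, n2):
    """Signed interval in semitones, accounting for accidentals."""
    def pc(n):
        return LETTER_SEMITONES[n[0]] + ACCIDENTALS[n[1:]]
    return pc(n2) - pc(n1)

# C -> Eb (minor third) and C -> D# (augmented second):
assert semitones('C', 'Eb') == semitones('C', 'D#') == 3
assert diatonic_steps('C', 'Eb') == 2
assert diatonic_steps('C', 'D#') == 1
```

(Hewlett’s base-40 encoding would distinguish both cases in a single integer, which is why the reviewer suggests it.)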

253: are random values that match the positive examples excluded from set 3? Or are they too few to care about? (ah, this is mentioned in 282-83. I think they should be removed).

267: again, should the interval be described as diatonic or semitonal?

285–325: my knowledge of this part of the paper is low. It looks good to me, but please weigh other reviewers’ comments more highly for this section (Genetic Programming)

317-18: “The fitness assigned … is 10 times its accuracy…plus 0.001 times the size of the tree” – should this be minus 0.001 times the size of the tree? It seems to reward bloated trees?
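The reviewer's parsimony point can be made concrete with a toy calculation; this is a sketch of the two readings of the quoted formula, not the paper's actual fitness function:

```python
def fitness_plus(accuracy, tree_size):
    # As quoted in the paper: larger trees score higher at equal accuracy.
    return 10 * accuracy + 0.001 * tree_size

def fitness_minus(accuracy, tree_size):
    # The reading the reviewer expects: parsimony pressure penalizes bloat.
    return 10 * accuracy - 0.001 * tree_size

small, bloated = 10, 500   # two rule trees with identical accuracy
acc = 0.9
assert fitness_plus(acc, bloated) > fitness_plus(acc, small)    # bloat rewarded
assert fitness_minus(acc, small) > fitness_minus(acc, bloated)  # bloat penalized
```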

Validity of the findings

132 – unclear why the further dissonance categories are irrelevant; they certainly do occur in the Palestrina Mass corpus, and at this point in the paper it is hard for the reader to know why rejection by the algorithms makes evidence irrelevant. (ah, l. 168 gives the answer to what the corpus is; that such dissonances do not occur is possible, but not certain)

171-175: not enough information is given here to verify the authors’ claim that the detection works rather well. Can this data be provided? (or see next comment)

183–186: a reader would be interested in a list of these occurrences of dissonances that cannot be automatically classified by the custom algorithm. They would be of interest for music theorists and Renaissance scholars in addition to future researchers who may want to improve the algorithm.

196–199 is the most problematic passage I have yet read: the “key” detection of a Palestrina mass movement is likely to be very faulty, as the Krumhansl-Schmuckler key determination algorithm (which should be cited) and the Sapp weights were designed for tonal music of the common practice period (1650–1880) and not for this period. I believe I have seen, in other forums, the authors’ search for an algorithm that would classify Renaissance mode better, and I believe they can say that no such algorithm yet exists; but they should be able to satisfy me that the KS algorithm is the best thing available. The argument needs to be made.
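For readers unfamiliar with the algorithm in question, a minimal sketch of the Krumhansl-Schmuckler procedure, assuming the standard Krumhansl-Kessler profiles rather than the Sapp weights the paper uses: correlate a pitch-class weight vector against each of the 24 rotated key profiles and pick the best match.

```python
from statistics import mean

MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ks_key(pc_weights):
    """Return (tonic_pc, mode) whose rotated profile best matches the input."""
    scores = []
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            scores.append((pearson(pc_weights, rotated), tonic, mode))
    _, tonic, mode = max(scores)
    return tonic, mode

# A bare C major scale (equal weights on its seven pitch classes):
assert ks_key([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]) == (0, "major")
```

The profiles themselves are the crux of the reviewer's objection: they were derived from probe-tone experiments on common-practice tonal music, so the correlations are only weakly meaningful for modal Renaissance polyphony.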

215 – what does it mean to normalize the pitch class of the dissonance? Does rotating the pitch classes so that, say, G = 0 affect the results?
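One way to read this question: if "normalize" means transposing so that a fixed reference pitch class becomes 0, melodic intervals are unchanged, so interval-based features would be unaffected. A toy sketch (hypothetical, not the paper's code):

```python
def rotate(pcs, reference):
    """Transpose pitch classes so that `reference` maps to 0."""
    return [(pc - reference) % 12 for pc in pcs]

def intervals(pcs):
    """Melodic intervals (mod 12) between successive pitch classes."""
    return [(b - a) % 12 for a, b in zip(pcs, pcs[1:])]

melody = [7, 9, 10, 7]        # G, A, Bb, G as pitch classes
shifted = rotate(melody, 7)   # [0, 2, 3, 0]

# The rotation changes absolute pitch classes but not the intervals:
assert intervals(melody) == intervals(shifted) == [2, 1, 9]
```

Whether the paper's rules use absolute pitch classes (which a rotation would change) or only intervals (which it would not) is exactly what the reviewer is asking the authors to state.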

221 – the total number of points in each cluster is noted below, but the total number of points recognized as outliers should also be noted. (it’s at 246)

228-233 is very well explained.

Figure 2 is hard to read – the blue bars could be made more prominent and the dots bigger as well. A note that each pink dot represents one run could be added.

The bigger issue though is that C5 and C6 are included in the diagrams but have not been explained in the text yet. What are they? [Ah! From the scripts I discover they are the cases in l. 249; this should be explained]

Figure 3 – why not plot #s 0 0 0 0, 1 1 1 1, etc. instead of abstract figures that I need to return to the caption often to read? The switch from 0-1 to 0-10 score is also confusing.

Figures 5 and 6 should explain again what cluster number they refer to – we’re going back and forth between cluster numbers and verbal descriptions.

Returning to 293–300 in light of Figure 5–6: 295: is the “dissonant” note necessarily dissonant? l. 263 seems to imply it could be simply a middle note. Figure 5 line 1: I read the right side as duration(i+3) and was wondering why the duration of the note three notes later was being taken into account. More space or something to show that it is duration(i) + 3 would help. It took me some time to note that “dissonance status” was NOT an input variable, and that we are only looking at linear (melodic) segments. If this is true, then the results are even stronger, but either way, this point should be made very clear in the article.

355 – it is unclear here and in table 2, what “can be” means – does this mean that “the generated rule/solution allows the note to be an eighth note” or “the received rule allows the note to be an eighth note but the generated solution does not”? The direction of the deviation is important.

The Qualitative evaluation (339-356) is quite cursory and there is not enough information given here to evaluate the accuracy of the results, I’m afraid.

374 – I’m not sure what an “easy beat” is. Is this a mistranslation of “simple beat” (which would also need explanation), or an integer-numbered beat? It is not an English-language term in the UK or US.

381: “We leave that to the reader” – the reader does not know the grouping – does the ^ at the beginning of the line constrain the entire line or not? So are lines 2-3 to be read as
“accentWeight >= 1 AND ( 2 < duration(i) OR duration(i-1) >= 2 )”
or “(accentWeight >= 1 AND 2 < duration(i)) OR duration(i-1) >= 2”? When writing something that could be seen as arrogant, such as “we leave that to the reader”, the reader needs to be given a fighting chance at figuring it out.
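The two groupings are genuinely different rules, not just different notations. A sketch with made-up values (the names follow the rule quoted above) shows a case where the readings disagree:

```python
# Hypothetical feature values chosen so the two groupings diverge:
accent_weight = 0.5   # accentWeight
dur_i = 1.0           # duration(i)
dur_prev = 2.0        # duration(i-1)

# Reading 1: the leading ^ scopes the whole line.
reading_1 = accent_weight >= 1 and (2 < dur_i or dur_prev >= 2)
# Reading 2: the ^ scopes only the first two terms.
reading_2 = (accent_weight >= 1 and 2 < dur_i) or dur_prev >= 2

assert reading_1 is False and reading_2 is True   # the grouping matters
```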

Discussion area is strong and well written.

After reading the paper I discovered the electronic Appendices, which give the paper a much more positive result for reproducibility, etc. Very strong. They should be referenced in the text where they are relevant (the “Resulting rules.pdf” is especially interesting). The version of music21 (and other software) used should be mentioned in the interest of reproducibility.

Additional comments

Paper needs some minor revisions noted above before accepting, but it is very close and I anticipate that the next version will be ready for publication.

The research area is of interest and the research design is well done, and well written. Professional, solid, and will have impact.

I am choosing not to publish my review publicly, but authors may know my name: Michael Scott Cuthbert (MIT).

Reviewer 2 ·

Basic reporting

Good, but I have some misgivings about music terminology and some unusually translated terms. I'll paste a few paragraphs from my letter here:

This paper should be accepted, but there are several minor revisions that should be undertaken. The three main areas in which I would encourage revisions are: 1) tweaking some of the language usage such that it conforms to English musical terminology, 2) adding some citations and engagement with similar and related research, and 3) making a few aspects of the presentation clearer.
First, there are some terminological snafus in the article. The term “easy” beat is incorrect: I believe the author means “weak” beat. The phrase “without getting noticed much” in line 125 is awkward and conceptually suspect. I believe the author means that these dissonances are less structural insomuch as they are heard as elaborating the surrounding consonances. In lines 127-128, the author might cite Jeppesen (or even the venerable J.J. Fux, 1725) to illustrate these dissonances because there are, in fact, some disagreements about categories of dissonance in this musical style. (There’s no reason to get into this debate in this paper, and it can be sidestepped by simply citing these standard dissonances.) In line 165, I believe the author means “whole note” rather than “whole tone,” since they’re talking about duration at this point. Additionally, it should be recognized that this music was not in 4/2 in the original manuscripts, but rather was probably signed with either a half circle or a cut half circle to indicate a duple grouping of the constituent breves. In other words, the 4/2 barring – and therefore the music21 metric weights – are modern anachronisms. (This fact doesn’t change the study’s results: it just should be acknowledged.) On line 199, the author means “tonic” not “root” (chords have roots, pieces and keys have tonics). Additionally, on this front, the abstract needs a careful edit. As it currently stands, it is difficult to read and uses unidiomatic words and turns of phrase.

Experimental design

It's good.

Validity of the findings

I would encourage the author to connect the findings to other existing research. Again, here are the relevant paragraphs:

The paper cites no other work on machine learning on musical corpora or automated harmonic analysis. To my mind, even though no one has done this particular approach to dissonance treatment, the author should still cite other research into modeling musical harmonies. I’ll include a short bibliography below, which could provide a springboard for a more comprehensive literature review. Additionally, the author argues that their contribution is most valuable to automated musical composition research. I’d argue that this modeling is also valuable to research into statistical learning. You could argue that this article models the experience of a learner who tries to construct a series of cognitive templates through trial and error until settling on generalizable rules. The author might consider adding some discussion of these implications.
A few other discussions could be tightened, clarified, or removed. First, Table 2 doesn’t do a lot of work and takes up a lot of space. To my eye, all the table illustrates is that there exists very little variation within the clustered rules. I think the table should basically be deleted and perhaps replaced with another sentence (in the Quantitative Evaluation section) describing this lack of variation and the caveats present in the table’s footnotes. If the author believes the table to be important enough to stay in the article, more discussion is necessary to indicate why. Additionally, Figure 5 should be talked through step-by-step from top to bottom (see lines 360-380-ish) rather than the piecemeal approach used now, and the value i on which the dissonance actually occurs should be explicitly stated (I believe the dissonance occurs on i, and not i+1 or i-1, but I had to think about that for a bit). Also, explanations of computational specifics were often not comprehensible to me, but that’s probably because I’m a music theorist and not a computer scientist – I’ll defer to the editors and other reviewers on the assessment of those topics.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.