All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
As you can see, both reviewers felt that your changes sufficiently addressed their concerns, and I agree with them.
I would note a few small changes for reading clarity that I recommend you make at the proofing stage (optional, of course):
ln 38: “make abstraction of” change to “abstract away from the properties of”
ln 118: delete “about it”
ln 127: change “made them” to “helped them” ?
[# PeerJ Staff Note - this decision was reviewed and approved by Stephen Macknik, a PeerJ Section Editor covering this Section #]
I would like to thank the authors for their revision. I feel that the new structure of the paper will give it the attention it deserves.
As you can see, both reviewers generally found the research to be good and the manuscript to be well written. Both had several minor comments, all of which I agree with. Please address each of these comments in your revision.
Generally good. Please see the "small points" section of the general comments for a few suggestions.
Please see items 2 and 3 of the general comments.
The authors provide a 10,000-image dataset that is organized by broad semantic category (5 superordinate-level categories) and normed in terms of memorability by an average of ~100 observers per image. This is a lovely effort and a great service to the community. I am generally pleased with the write-up of this work, but I do have a few suggestions.
-1- Motivating the database
As I stated, I believe that this database is a great tool for the field, but I feel that the uses and implications are not clearly stated early on in the introduction where most folks might start to read. For example, the framing of the manuscript in the first paragraph is a little unusual. Most of the cited papers are not studying how photographs “differ” (line 38) for their own sake, but studying how perception, attention, and memory work in the context of naturalistic visual inputs. I feel like the current motivation (to paraphrase: photography is popular, we should study it) actually cheapens the value of this work, which is of broad use to the cognitive science community. More broadly, the hierarchical category structure of the database is an untapped asset. The authors could ground this in terms of the cognitive science of hierarchical concepts (e.g. Rosch etc) and/or known neuroscientific concept distinctions such as animacy, real-world size, etc.
-2- Possible confound in sports category
Lines 179-182: I am not following the logic here. I am assuming that the authors mean that non-incidental images of people were only allowed in the sports category. However, this might be an issue for two reasons: (1) if people are intrinsically more memorable, this could artificially inflate the memorability of this category; (2) if the people depicted in the sports images are famous, this would further artificially inflate memorability and/or increase variability in memorability, because some (but not all) participants may recognize the players.
-3- Memorability scores
Lines 227-240: These memorability scores are somewhat non-standard. I am unsure why the authors chose to combine two previous scores rather than use the more accepted precision/recall and/or sensitivity/specificity measures.
-4- Black boxes
Lines 63-64: There are two types of “understanding” that the authors might be referring to, and it would provide clarity to know which one is intended. The “black box” argument often refers to the lack of human interpretability of dCNNs. However, dCNNs can also be critiqued for not having a full, semantic understanding of a picture. I suggest a clarification and a reference to help the reader understand which of these interpretations is intended.
-5- Small points
--Abstract, line 30 final sentence: suggested re-phrase “allows the study of” rather than “allows to study”
--Line 62: what is being predicted? I assume memorability score, but the sentence is a bit ambiguous
--Line 102: I suggest that the authors hyphenate “memory-relevant”
--Line 103: Citation needed about studies that show that these categories differ in terms of memorability.
--Line 112: No comma needed after “there”
The narrative is easily understood and fits the scope of the work; all in all, a well-written text with a clear structure. I have only three recommendations (ranked from most to least important) for further improvement according to PeerJ guidelines.
1. Much of the current literature concerning memorability is presented, although one important paper is missing: https://ieeexplore.ieee.org/abstract/document/8704932. Ideally, many members of the community will use this image set (it certainly deserves to be used). To increase understanding of memorability as a concept and how it may vary *within* semantic categories, I think it would be of benefit if the potential role of Visual Memory Schema were briefly addressed in the introduction, for instance on page 4 after line 80, where goodness of image organisation is introduced.
2. The concept of memorability relevance (page 5, line 96) is introduced but only explained three sentences later. The comprehensibility of the paragraph would be improved if memorability relevance were explained immediately.
3. Data visualisations are informative and aesthetically pleasing. Captions currently include only basic descriptions of what is shown; results could be communicated even more effectively if captions also included a short summary of the meaning of the figure/results.
The methods are very well explained and fit the overarching research question. No further comment.
Analyses are statistically appropriate and accessibility to and composition of the stimulus set and (raw) data files are exemplary. No further comment.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.