Thank you for doing such a thorough job with your revisions and for the response document outlining the changes that were made.
[# PeerJ Staff Note - this decision was reviewed and approved by Jafri Abdullah, a PeerJ Section Editor covering this Section #]
I found your article interesting, as we do see all the weird and wonderful claims about personality based on interpretation of abstract images out there in popular culture (perhaps historically inspired by the old Rorschach inkblot test popularised in film and TV). Based on my read of the three reviews, here are some main takeaways:
- R1 and R2 both flag that hypotheses at the end of the introduction are largely absent. I do think this is an issue that should be rectified. It appears to me that you have material to include hypotheses based on assessing whether certain populist claims are evident or not (e.g., seeing the younger woman first is associated with being more independent). From the measures reported, though, it doesn’t look like you have an ‘independence’ kind of measure, so perhaps this example won’t work. I hope you get the idea, though, of trying to be more explicit about what specific claims you are testing in your research. In the discussion section I note you do a bit of this in the second paragraph (e.g., “…those who perceive the older woman first in the Younger-Older Woman image are more agreeable than those who see the younger woman first…”).
- R1 notes that all statistically significant findings are associated with very small to negligible effect sizes, and questions what conclusions can be drawn from such tiny effects. I think this is a very reasonable observation that I concur with. Because of this, when I read your paper I did not find the subsequent paragraphs (beginning “However, two significant relationships warrant further discussion.”) very compelling or interesting. Thus, I do recommend the authors rethink what can reasonably be concluded from the results.
- All reviewers appear to suggest delving further into the Younger-Older Woman image result with additional analyses. However, considering my above point, if the effect size is negligible from a basic analysis standpoint, I don’t expect there to be much utility in doing this. You might perhaps consider including something as a supplementary online analysis alongside your online data to satisfy reviewer curiosity on this point.
- R2 suggests the possibility of conducting multilevel or mixed modelling analyses, but does not provide any rationale for precisely why that would be required. I don’t expect the authors to engage in additional analyses where reviewers have not been clear about precisely why such things should be done.
- R1 has questioned the use of very brief personality questionnaires, but I do note that this is fairly common practice these days, and I am aware of literature justifying the validity of such approaches that you might draw upon in your response.
- R2 and R3 have questioned whether the paper falls within the scope of the journal. As someone who does handling-editor work for the journal, I can confidently assert that it does. PeerJ publishes a lot of psychology-type articles. I do understand that this is not overly clear from the journal description, though, and appreciate the reviewers’ concern.
- R3 has flagged that it sounds like the order of response options was not randomised. For example, were participants always offered ‘duck’ as the first option for the duck-rabbit image? If so, this is a limitation that should be addressed in the discussion section. I don’t share the reviewer’s sentiment that a whole new set of data must be acquired to justify publication, but I do agree this is a limitation that should be acknowledged by the authors if the procedure was indeed carried out as described in the methods section of the initial submission.
- As per standard practice, when resubmitting your paper, please upload a separate, thorough response document that goes through the reviewer feedback point by point and clearly describes what was changed in the paper (via direct copy-pasting of the revised text from the paper into the response document, being very explicit about what was edited). In any instances where you have not made changes based on reviewer feedback, please provide a rebuttal. The more thorough and clear your response document is, the easier and smoother the review process is for all parties involved.
The manuscript describes a single study attempting to verify the relationship between the perception of ambiguous images and people’s personality and thinking style. The basic structure of the article is mostly correct. I would probably like the literature review to be a bit more extensive and detailed, but it is structured correctly. I appreciate that the database is available, and the tables presenting the data are clear. However, in the whole manuscript I cannot find the hypotheses (the section “Aim & hypotheses” is entirely missing), so I cannot evaluate whether they are properly presented and tested.
I would also redesign the Discussion section. Its beginning belongs in the introduction and is very descriptive; that information should be presented much earlier, while the Discussion should start by restating the hypotheses and stating whether they were rejected or not. I also think that the strengths and limitations of the study could use some work.
While the design itself seems to be correct, I am worried about the use of extremely short questionnaires to measure complex personality traits. It would probably be fine if this were not the main focus of the whole manuscript, but since it is, it seems concerning. We have strong conclusions drawn from literally a few images and two-item questionnaires, and that definitely raises questions about the reliability of the findings.
Apart from the concerns I mentioned under Experimental design, the manuscript would definitely benefit from analyses other than correlations (which are biserial and conducted on a rather large sample, and thus have a high chance of being statistically significant). In order to explore the phenomenon, I would like to see chi-squared tests, perhaps some regression, and possibly some other variables (e.g., response time to the stimulus). The authors mention that the mechanism behind these tendencies in perception is still unknown, but I do not see any attempt to explore it.
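Purely for illustration, a minimal sketch in Python of how such a chi-squared test and a simple logistic regression might look; the file name and the column names ("first_percept", "age_group", "agreeableness", "age") are hypothetical placeholders, not the authors' actual variables.

```python
# Minimal sketch only; file and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

df = pd.read_csv("ambiguous_images_data.csv")  # hypothetical file name

# Chi-squared test of independence: first percept vs. an age grouping
table = pd.crosstab(df["first_percept"], df["age_group"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")

# Logistic regression: do agreeableness and age predict seeing the older woman first?
df["saw_older_first"] = (df["first_percept"] == "older").astype(int)
logit_model = smf.logit("saw_older_first ~ agreeableness + age", data=df).fit()
print(logit_model.summary())
```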
Furthermore, all of the correlations are weak (if I am not mistaken, none of them exceeds a coefficient of .20). Therefore, I am not sure whether we can draw any conclusions from them.
In the discussion you mention that “with the Younger-Older Woman image, older participants tended to see the older woman first whereas younger participants tended to see the younger woman first.” I do not see that presented in the results section at all. What was that result? How were participants divided into the groups of older and younger?
I think that this study is a great introduction to future research and it could do really well as the first of a series of experiments. However, I would like to see more analyses, more examples, and in general a more complex and thorough investigation before the manuscript is published.
The manuscript was written in clear and unambiguous professional English. The structure and the tables are appropriate and the raw data were shared.
The introduction section leads the reader through an objective and clear line of reasoning. Nevertheless, some recent references are missing, e.g., Blake and Palmisano (2021) and Koivisto & Pallaris (2024). Furthermore, in line 103 the expression “subconscious perception” is used, which is anachronistic and might cause terminological confusion about perceptual processes.
The manuscript is "self-contained", but the results, especially those concerning the Younger-Older Woman image and age, have not been adequately explored. For instance, these data are discussed based on Nicholls et al. (2018), which analyzed two age groups (18-30 y and over 30 y). Thus, the discussion could be more robust with a comparison between different age groups (e.g., 18-30 y, 31-40 y, 41-50 y, 51-60 y, and over 60 y).
The hypotheses are not clearly stated in either the Introduction or the Discussion. Thus, it is difficult to verify the relevance of the methodological strategies adopted to test them.
I'm not sure that the research falls within the Aims and Scope of the journal. There is a clear applicability to the core areas of the Health Sciences, but the manuscript does not explore this sufficiently (e.g., Amador-Campos et al., 2015, on ADHD and binocular rivalry).
In general, the Method is well described. However, some parts are not described with sufficient detail and information to replicate. For example, the images shown to the participants are missing (Duck-Rabbit, Rubin’s Vase, Younger-Older Woman and Horse-Seal). The images are well known, but there are some variations of them. Therefore, it is important to show the versions used in the study, including their size scale. Also, it is not clear whether all participants viewed the images and completed the instruments in the same order.
The conclusions are limited due to the analytic strategy; the data could be analyzed with multilevel regressions or linear mixed models. The Discussion is brief and does not thoroughly engage with the implications of the results or connect them in meaningful ways to theory and practice. Also, the references are predominantly papers published more than five years ago.
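As an illustration of the structure such an analysis could take (and not a claim about the authors' actual data), here is a minimal linear mixed-model sketch in Python with statsmodels, assuming a hypothetical long-format file with one row per participant-image combination and placeholder columns "participant_id", "image", "saw_dominant_first", and "openness".

```python
# Minimal sketch only; the long-format file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("long_format_data.csv")  # hypothetical file name

# Random intercept per participant, fixed effects of image and a personality score.
# Note: with a binary outcome a multilevel logistic model would usually be
# preferred; this linear mixed model only illustrates the nesting structure.
mixed = smf.mixedlm(
    "saw_dominant_first ~ C(image) + openness",
    data=long_df,
    groups=long_df["participant_id"],
).fit()
print(mixed.summary())
```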
I strongly suggest adding a figure that shows the four ambiguous figures used as stimuli.
You should add a table specifying age ranges. Assuming that all participants were 18 or older, you could group them based on age, for instance group 1 from 18 to 28, group 2 from 29 to 38, and so on.
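If it helps, a minimal sketch of such an age grouping in Python with pandas; the file and the "age" column are hypothetical placeholders, and the bin edges shown are only one possible choice.

```python
# Minimal sketch only; file and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("ambiguous_images_data.csv")  # hypothetical file name

# Decade-wide groups starting at 18, roughly as suggested above
bins = [18, 28, 38, 48, 58, 68, 120]
labels = ["18-28", "29-38", "39-48", "49-58", "59-68", "69+"]
df["age_group"] = pd.cut(df["age"], bins=bins, labels=labels, include_lowest=True)

# Frequency table of age groups for the suggested table
print(df["age_group"].value_counts().sort_index())
```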
I found only two typos:
Line 195: "researhch"
Line 258: "This study examined whether the way in people perceive". Shouldn't it be "way in WHICH people..."?
I'm afraid that the manuscript does not fit the aims and scope of the journal: PeerJ "considers articles in the Biological Sciences, Environmental Sciences, Medical Sciences, and Health Sciences". Maybe I'm missing something, but I fail to see how the topic of the manuscript falls within one of those sciences. But maybe I am wrong, so it's up to the editor to have the last say on the matter.
There are a few issues that need to be addressed by the authors:
1) Why did you not ask participants to indicate their sex?
2) I suggest conducting analyses in which you factor in age.
3) Finally, since the authors report that seasons may favor one outcome over another in the duck-rabbit figure, perhaps they could state in what season the experiment was conducted.
This is the most crucial observation: haven't the authors noticed that the first option offered – e.g. "This picture can be viewed in different ways. What did you see when you first looked at the image? (Options: Duck; Rabbit; Neither or something else)" – is what most participants declared to see? To be published, you would need to run another experiment in which you reverse the order of your options (not duck vs rabbit but rabbit vs duck). This would also serve as a further test for what you have found (or not found).
This issue needs to be fully addressed as it does show a strong bias due to the task instructions provided.