Interesting data, Dr. DeSoto! Sounds like a good meta-analysis topic :) I have a few comments/questions.
Instead of a slider anchored at 50%, have you considered having participants click a point on a line as a way to gather confidence? That might help eliminate the 50% bias.
Additionally, although the groups differed in the distribution of confidence ratings, in your current data set that didn't actually affect their overall metacognitive monitoring accuracy (at least as measured by gamma). Have you explored other metacognitive measures (e.g., type-2 ROC, meta-d', calibration) to see whether group differences would emerge on those? It would be nice to demonstrate that these different uses of the scale actually have some consequence for performance.
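For readers unfamiliar with the gamma measure mentioned above: it's the Goodman-Kruskal gamma, the proportion of concordant minus discordant confidence/accuracy pairs. A minimal sketch (my own illustration, not from the post; function name is mine):

```python
# Goodman-Kruskal gamma over paired confidence ratings and
# 0/1 accuracy outcomes: (concordant - discordant) / (concordant + discordant).
def gamma(confidence, accuracy):
    """confidence: list of numeric ratings; accuracy: list of 0/1 outcomes."""
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            # Compare the ordering of the two trials on each variable.
            product = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
            if product > 0:
                concordant += 1   # same ordering on confidence and accuracy
            elif product < 0:
                discordant += 1   # opposite ordering
            # ties on either variable are ignored
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# Perfect monitoring: higher confidence always accompanies correct answers.
print(gamma([90, 60, 80, 50], [1, 0, 1, 0]))  # 1.0
```

Gamma ranges from -1 to 1 and ignores tied pairs, which is partly why shifts in the confidence distribution can leave it unchanged.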
Finally, what effect do you think finer grading of confidence has, and how finely do you think people can actually grade their confidence? That is, does a confidence rating of 80% vs. 82% really mean something qualitatively different? Does having finer measures of confidence influence our data outcomes (especially given that confidence is often collapsed into larger bins for several analyses in order to increase power)?
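To make the binning point concrete, here is a hypothetical sketch (edges and names are my own, not from any analysis in the post) of collapsing a 0-100% scale into coarse bins, where an 80% and an 82% rating become indistinguishable:

```python
# Collapse a 0-100 confidence rating into a coarse bin index,
# using half-open intervals [edges[i], edges[i+1]).
def bin_confidence(rating, edges=(0, 60, 80, 101)):
    """Return the index of the bin containing `rating`."""
    for idx in range(len(edges) - 1):
        if edges[idx] <= rating < edges[idx + 1]:
            return idx
    raise ValueError("rating outside 0-100 range")

# 80 and 82 land in the same bin, so any distinction between them is lost.
print([bin_confidence(r) for r in (50, 80, 82, 95)])  # [0, 2, 2, 2]
```

Whatever fine-grained information participants can express is discarded at this step, which is why it seems worth asking whether the extra precision carries signal in the first place.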