Interesting data, Dr. DeSoto! Sounds like a good meta-analysis topic :) I have a few comments/questions.
Instead of having a slider anchored at 50%, have you considered having participants click a point on a line as a way to gather confidence? This may help eliminate the 50% anchoring bias.
Additionally, although the groups differed in the distribution of confidence ratings, in your current data set this didn't actually affect their overall metacognitive monitoring accuracy (at least as measured by gamma). Have you explored other metacognitive measures (e.g., type 2 ROC, meta-d', calibration) to see whether group differences would emerge using these other measures? It would be nice to demonstrate that these different uses of the scale actually have some consequence for performance.
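For readers unfamiliar with the gamma measure mentioned above: it is the Goodman-Kruskal gamma correlation between trial-by-trial confidence and accuracy, computed from concordant and discordant pairs. A minimal sketch (the data here are hypothetical, purely for illustration):

```python
from itertools import combinations

def gk_gamma(confidence, accuracy):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant).

    A pair of trials is concordant when the trial with higher confidence
    is also the more accurate one; tied pairs are excluded.
    """
    concordant = discordant = 0
    for (c1, a1), (c2, a2) in combinations(zip(confidence, accuracy), 2):
        prod = (c1 - c2) * (a1 - a2)
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")  # all pairs tied: gamma undefined
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical trials: confidence (%) and accuracy (1 = correct, 0 = incorrect)
conf = [90, 60, 80, 50, 70]
acc  = [1, 0, 1, 0, 1]
print(gk_gamma(conf, acc))  # perfect monitoring here -> 1.0
```

Type 2 ROC and meta-d' would be computed differently (meta-d' in particular requires fitting an SDT model), but gamma is the simplest to reproduce by hand.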
Finally, what effect do you think finer grading of confidence has, and how finely do you think people can actually grade their confidence? That is, does a confidence rating of 80% vs. 82% really mean something qualitatively different? Does having finer measures of confidence influence our data outcomes, especially given that confidence is often collapsed into larger bins for several analyses in order to increase power?
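To make the binning point concrete: under a typical equal-width binning of a 50-100% scale (the bin count and bounds here are illustrative assumptions, not taken from the paper), ratings of 80% and 82% collapse into the same bin, so any distinction between them disappears from the analysis:

```python
def to_bin(rating, n_bins=4, lo=50, hi=100):
    """Collapse a confidence rating on [lo, hi] into one of n_bins equal-width bins.

    Returns a 0-based bin index; the top edge is clamped into the last bin.
    """
    width = (hi - lo) / n_bins
    return min(int((rating - lo) // width), n_bins - 1)

# 80% and 82% land in the same bin, so the 2-point distinction is lost
print(to_bin(80), to_bin(82))  # -> 2 2
```

So whatever extra resolution participants can genuinely report is only useful if the analysis pipeline preserves it.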