What is the purpose of producing flawed research like this?

Let's see, this paper has now been mentioned by multiple newspapers and magazines with headlines like:

"GitHub Users Find Women's Code Better Than Men's — Until They Know Who Wrote It"

"Women considered better coders – but only if they hide their gender"

"Women coders do better than men in gender-blind study" etc. etc.

While Murphy-Hill's "In the paper, we present several alternative theories that might explain the data that we gathered" sounds very academic and reasonable, the reaction of the media certainly supports Dowell's "Performed with an already decided agenda". The authors also received grant money from the NSF.

The authors have certainly received attention, and if they claim that attention is not what they were seeking, they need to demonstrate otherwise.

Honestly, the very first thing that should jump out at a researcher from the data is that there were 150,000 men and 8,000 women (as reported by Villemaire). The stark difference here is in the participation rates of men and women, not the four-point gap between a 74% and a 78% acceptance rate.

I mean, how much intelligence does it take to understand that a self-selected sample of 8,000 and a self-selected sample of 150,000, drawn from populations of approximately equal size, cannot be used to draw inferences about the general populations? Media headlines about this article, such as "Women coders do better than men in gender-blind study", certainly show no such discernment.
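To make the sampling objection concrete, here is a minimal simulation sketch. The 150,000 / 8,000 sample sizes and the 74% / 78% figures come from the discussion above; everything else (the "experience" score, the opt-in mechanism, the bias strengths) is a hypothetical assumption, chosen only to show that self-selection alone can produce a gap of this size even when both populations have exactly the same average acceptance rate:

import numpy as np

rng = np.random.default_rng(0)

POP_SIZE = 1_000_000      # assume two populations of roughly this size each
TRUE_MEAN_RATE = 0.74     # assume the same average acceptance rate for both groups

def observed_rate(target_sample, selection_bias):
    # Latent "experience" score; acceptance probability rises with it,
    # but its population average is TRUE_MEAN_RATE for both groups.
    experience = rng.normal(0.0, 1.0, POP_SIZE)
    accept_prob = np.clip(TRUE_MEAN_RATE + 0.05 * experience, 0.0, 1.0)
    # Self-selection: more experienced people are more likely to opt in.
    opt_in = np.exp(selection_bias * experience)
    opt_in *= target_sample / opt_in.sum()      # scale so ~target_sample people opt in
    in_sample = rng.random(POP_SIZE) < opt_in
    accepted = rng.random(POP_SIZE) < accept_prob
    return in_sample.sum(), accepted[in_sample].mean()

# Hypothetical: the 150,000-person sample is only mildly self-selected,
# while the 8,000-person sample is strongly self-selected.
n_a, rate_a = observed_rate(150_000, selection_bias=0.1)
n_b, rate_b = observed_rate(8_000, selection_bias=1.0)
print(f"{n_a} sampled, observed acceptance rate {rate_a:.1%}")   # roughly 74-75%
print(f"{n_b} sampled, observed acceptance rate {rate_b:.1%}")   # roughly 78-79%

Both simulated groups share the same underlying rate; the observed gap comes entirely from who ends up in each sample, which is exactly why the disparity in participation matters far more than the headline percentages.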

by Nayan Jyoti
