You do understand that the premise of this is completely false? An evaluation of how many pull requests are accepted or rejected is of no value at all if it doesn't consider why each request was accepted or rejected — i.e., was it rejected because of bad coding, or because it didn't fit the project's aims and/or ethos? For example, any one of those rejected pulls (from either gender, identifiable or not) might have been turned down because that particular code modification duplicated something someone else had already done, or because the project owner simply didn't want his or her code to contain that capability, etc.
This looks like another complete misuse of data to represent something the data doesn't prove in any way, shape, or form. Surely, if you wanted this study to be of any value, you would have gone back to the project owners to understand why requests were rejected, not assumed it was always because of gender bias and/or bad coding.