A survey of accepted authors in computer systems conferences

PeerJ Computer Science
Despite its large research output and enormous economic impact, we found no consensus definition for the field of “systems”. For the purposes of this paper, we define it to be the study of computer hardware and software components, which includes research in operating systems, computer architectures, databases, parallel and distributed computing, and computer networks.
We recognize that gender is a complex, nonbinary identity that cannot be captured adequately by just photos or pronouns. However, the focus of this study is on perceived gender, not self-identification, which is often judged by the same simplistic criteria.


Introduction

  • What are the demographic properties (position, gender, country, English proficiency) of survey respondents?

  • Are these demographics, and especially the low number of women, representative of all accepted authors?

  • How long does a systems paper take to write? How many attempts does it take to publish?

  • How do authors feel about a rebuttal process? What explains differences in opinions?

  • How do authors evaluate reviews, and which factors affect these evaluations?

  • What are the grade distributions of accepted papers across different categories?

  • What are the differences to survey responses for authors of different genders, English proficiency, and publication experience?

Organization

Materials and Methods

Limitations

Ethics statement

Author Survey Results

Demographic questions

Which best describes your position during 2017?

What is your gender?

What is your English proficiency level?

Paper history

How many months did it take to research and write?

How many conferences/journals was it submitted to prior to this publication?

Please type in their names [of the rejecting conferences]

Rebuttal process

Did the conference allow you to address reviewers' concerns before the final acceptance notice?

Did you find the response process helpful?

Review quality assessment

How many reviews did this paper receive?

How well did the reviewer understand the paper, in your estimation?

How helpful did you find this review for improving the paper?

How fair would you say the review was?

Review scores

  1. Overall score or acceptance recommendation (often ranging from “strong reject” to “strong accept”).

  2. Technical merit or validity of the work.

  3. Presentation quality, writing effectiveness, and clarity.

  4. Foreseen impact of the work and potential to be of high influence.

  5. Originality of the work, or conversely, lack of incremental advance.

  6. Relevance of the paper to the conference’s scope.

  7. Confidence of the reviewer in the review.

Discussion and Author Diversity

Gender

  1. The ratio of women in the 25 double-blind conferences, where reviewers presumably remain oblivious to the authors' gender, is in fact slightly lower than for single-blind conferences (10.06% vs. 10.94%, χ2 = 3.032, p = 0.22). This ratio does not support an explanation that reviewers reject female authors at a higher rate when they can look up an author's gender.

  2. When we limit our observation to lead authors only, where the author’s gender may be more visible to the reviewers, the ratio of women is actually slightly higher than in the overall author population (11.25% vs. 10.48%, χ2 = 1.143, p = 0.285). If we assume no differences in the submission rates to a conference based on gender, then female lead authors appear to suffer no more rejections than male authors.

  3. We found no statistically significant differences in the overall acceptance grades of women and men (t = 0.291, p = 0.772), even when limiting to lead authors (t = 0.577, p = 0.566), papers accepted on their first attempt (t = 0.081, p = 0.935), or single-blind reviews (t = 1.159, p = 0.253). This equitability extends to most other grade categories, except for originality (t = 4.844, p < 0.0001) and technical merit in single-blind conferences (t = 2.288, p = 0.0294). In both categories, women scored significantly higher than men. It remains unclear whether there is any causal relationship here, and if so, in which direction; do women have to score higher than men in the technical categories to be accepted in single-blind conferences, or do women submit higher-quality papers to begin with? At any rate, this small difference is unlikely to explain the 2–3x difference in women’s ratio compared to CS, but it does provide a case for wider adoption of double-blind reviewing.
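The gender comparisons above rely on Pearson's chi-squared test of proportions on a 2×2 table (women/men × double-blind/single-blind). A minimal sketch of that computation follows; the counts used here are hypothetical, chosen only so the proportions roughly match the percentages reported above, and do not reproduce the paper's actual author data.

```python
# Sketch of a chi-squared test of proportions for a 2x2 table,
# as used in the gender comparisons. Counts are illustrative only.
from math import erf, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def chi2_pvalue_1df(x2):
    """p-value for a chi-squared statistic with 1 degree of freedom.
    Uses the identity P(chi2_1 > x2) = 2 * (1 - Phi(sqrt(x2)))."""
    return 2 * (1 - 0.5 * (1 + erf(sqrt(x2) / sqrt(2))))

# Hypothetical counts: women/men among double-blind vs. single-blind authors,
# chosen to give ~10.06% and ~10.94% women respectively.
women_db, men_db = 201, 1797
women_sb, men_sb = 219, 1783

x2 = chi2_2x2(women_db, men_db, women_sb, men_sb)
p = chi2_pvalue_1df(x2)
print(f"chi2 = {x2:.3f}, p = {p:.3f}")
```

With the paper's real counts, such a test yields statistics like the χ2 and p values quoted above; a library routine such as `scipy.stats.chi2_contingency` would give the same result with Yates correction options.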

English proficiency

Publication experience

Geographical regions

Conclusions and Future Work

  • Why is the representation of women in systems so low?

  • Do women actually need to receive higher technical scores in their reviews just to be accepted to single-blind conferences?

  • What are the effects of double-blind reviewing on the quality of reviews, conferences, and papers?

  • What other publication differences and commonalities exist between systems and the rest of CS?

  • How do review grades correlate across categories?

  • How might reviewer load affect our results?

  • How do any of these factors affect the eventual success of a paper, as measured by awards or citations?

Supplemental Information

Full text of survey questionnaire

DOI: 10.7717/peerj-cs.299/supp-1

Data and source code for project

This is a snapshot of the GitHub repository that includes the data and source code required to reproduce this paper (except for confidential survey data). The snapshot represents commit 6663a253f1ac4dc351a78ccc74c0de80c7cc06ad of http://github.com/eitanf/sysconf. The most pertinent article files are under pubs/diversity-survey/.

DOI: 10.7717/peerj-cs.299/supp-2

Anonymized survey data

The legend for all fields is in File S4.

DOI: 10.7717/peerj-cs.299/supp-3

Metadata description of the fields in survey responses file

DOI: 10.7717/peerj-cs.299/supp-4

Additional Information and Declarations

Competing Interests

The authors declare there are no competing interests.

Author Contributions

Eitan Frachtenberg conceived and designed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.

Noah Koster performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.

Ethics

The following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers):

This study and the survey questions were approved by the Reed College IRB (number 2018-S13).

Data Availability

The following information was supplied regarding data availability:

All the code and data (except confidential survey responses) are available at Github: http://github.com/eitanf/sysconf. A snapshot of this repository is also available as a Supplementary File. Additionally, the complete survey questionnaire and anonymized individual survey responses are available as a Supplemental File.

Funding

This work was supported by a grant from the office of the Dean of the Faculty at Reed College. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

