Structured unit testable templated code for efficient code review process

Department of Mechanical and Industrial Engineering, LSU, Baton Rouge, Louisiana, USA
DOI
10.7287/peerj.preprints.2163v1
Subject Areas
Computer Architecture, Programming Languages, Software Engineering
Keywords
Modern Code Review, Template, Peer Review, Extreme Programming, Agile, Software Inspection, Reviewer Participation, SQL Review, Architecture, Software Quality
Copyright
© 2016 Patwardhan
Licence
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
Cite this article
Patwardhan AS. 2016. Structured unit testable templated code for efficient code review process. PeerJ Preprints 4:e2163v1

Abstract

Background: Modern software development teams are distributed across onsite and offshore locations. Each team has developers with varying experience levels and English communication skills. In such a diverse development environment, it is important to maintain software quality, coding standards, and timely delivery of features and bug fixes. It is also important to reduce testing effort and to minimize side effects such as changes in functionality, user experience, or application performance. Code reviews are intended to control code quality. Unfortunately, many projects lack enforcement of processes and standards because of approaching deadlines, live production issues, and limited resource availability.

Objective: This study examines a novel structured, unit-testable, templated code method for enforcing code review standards, with the intent of reducing coding effort, minimizing revisions, and eliminating functional and performance side effects on the system. The proposed method is also intended to produce unit-testable code that can be easily rolled back, and to increase team productivity.

Method: A baseline for the traditional code review process was measured using metrics such as code review duration, bug regression rate, and revision count. These metrics were then compared with results from the proposed code review process, which used structured, unit-testable, templated code. Performance was evaluated on two large enterprise-level applications over two years and nine feature and maintenance release cycles.

Results: The structured, unit-testable, templated code method decreased total code review time, revision count, and coding effort. It also decreased the number of live production issues caused by code churn or by side effects of bug fixes, compared with the traditional code review process.

Conclusion: The study confirmed that structured, unit-testable, templated code improves code review efficiency. It also increased code quality and provided a robust tool for enforcing coding standards in a cross-continent software maintenance team. In addition, it relieved core resources of code review effort so that they could concentrate on new feature development.
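The abstract does not spell out what a structured, unit-testable, templated change looks like; the paper's actual templates appear in the supplemental files (supp-8 and supp-11). As a purely illustrative sketch under assumed conventions, a templated change script pairs every apply step with a matching rollback step and a verification check, so it can be unit-tested and rolled back cleanly. All names here (`apply_change`, `rollback_change`, the `order_audit` table) are hypothetical, shown in Python with SQLite:

```python
import sqlite3

# Hypothetical template structure: the change declares its apply SQL,
# its rollback SQL, and a verification check, side by side. A reviewer
# checks the pair once; the unit test exercises both directions.
APPLY_SQL = "CREATE TABLE order_audit (id INTEGER PRIMARY KEY, note TEXT)"
ROLLBACK_SQL = "DROP TABLE order_audit"

def table_exists(conn, name):
    # sqlite_master lists all schema objects in an SQLite database.
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", (name,)
    ).fetchone()
    return row is not None

def apply_change(conn):
    conn.execute(APPLY_SQL)

def rollback_change(conn):
    conn.execute(ROLLBACK_SQL)

def verify(conn):
    # Verification check used by both the unit test and the reviewer.
    return table_exists(conn, "order_audit")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    apply_change(conn)
    assert verify(conn)
    rollback_change(conn)
    assert not verify(conn)
    print("change applied and rolled back cleanly")
```

Because the rollback and the verification check travel with the change itself, rollback effort and side-effect bugs, two of the metrics the study tracks, can be caught in the unit test rather than in production.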

Author Comment

This is a submission to PeerJ Computer Science for review.

Supplemental Information

Comparison of pre-template and post-template metrics from code review

DOI: 10.7287/peerj.preprints.2163v1/supp-1

Average lines of code per release

DOI: 10.7287/peerj.preprints.2163v1/supp-2

Average development duration time in days per release

DOI: 10.7287/peerj.preprints.2163v1/supp-3

Average bugs regressed per release in QA

DOI: 10.7287/peerj.preprints.2163v1/supp-4

Code review process workflow and structured unit testable code template block diagram

DOI: 10.7287/peerj.preprints.2163v1/supp-5

Verification duration in days per release

DOI: 10.7287/peerj.preprints.2163v1/supp-6

Average production bug regression count per release

DOI: 10.7287/peerj.preprints.2163v1/supp-7

Sample template for data definition language (DDL) statements such as CREATE TABLE and ALTER TABLE

DOI: 10.7287/peerj.preprints.2163v1/supp-8

Side effect bug count per release

DOI: 10.7287/peerj.preprints.2163v1/supp-9

Average code review comments per release

DOI: 10.7287/peerj.preprints.2163v1/supp-10

Sample template for data manipulation language (DML) statements such as INSERT, UPDATE, and DELETE

DOI: 10.7287/peerj.preprints.2163v1/supp-11

Average rollback effort in days per release

DOI: 10.7287/peerj.preprints.2163v1/supp-12

Average coding standard violations per release

DOI: 10.7287/peerj.preprints.2163v1/supp-13

Average bugs regressed per release in UAT

DOI: 10.7287/peerj.preprints.2163v1/supp-14

Average revision count per release

DOI: 10.7287/peerj.preprints.2163v1/supp-15