The impact of test case summaries on bug fixing performance: An empirical investigation
- Subject Areas
- Natural Language and Speech, Software Engineering
- Keywords
- Test Case Summarization, Software Testing, Empirical Study, JUnit
- Copyright
- © 2016 Panichella et al.
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ PrePrints) and either DOI or URL of the article must be cited.
- Cite this article
- 2016. The impact of test case summaries on bug fixing performance: An empirical investigation. PeerJ PrePrints 4:e1467v3 https://doi.org/10.7287/peerj.preprints.1467v3
Abstract
Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, even though generated tests reach higher structural coverage than manually written ones, they have been shown not to help developers detect and fix more bugs. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestScribe, which automatically generates for each individual test a summary of the portion of code it exercises, thereby improving understandability. We argue that this approach can complement current automated unit test generation tools and search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach, we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which developers consider particularly useful.
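To make the idea concrete, the sketch below shows what a test annotated with a generated summary might look like. All names here (`StackTest`, the inner `Stack` class, the summary wording) are illustrative assumptions, not taken from TestScribe's actual output; the example only conveys the kind of natural-language comment that summarizes the code a test exercises.

```java
// Illustrative sketch: a unit test carrying a TestScribe-style summary comment.
// The class under test and the summary text are hypothetical examples.
public class StackTest {

    // Minimal class under test (illustrative only).
    static class Stack {
        private final int[] data = new int[16];
        private int size = 0;
        void push(int x) { data[size++] = x; }
        int pop() { return data[--size]; }
        boolean isEmpty() { return size == 0; }
    }

    /**
     * Summary (as a summarization tool might generate it):
     * "The test instantiates a new Stack, pushes the value 42, and pops it.
     * It checks that the popped value equals 42 and that the stack is empty
     * afterwards, exercising push(int), pop(), and isEmpty()."
     */
    static void testPushPop() {
        Stack s = new Stack();
        s.push(42);
        int popped = s.pop();
        if (popped != 42) {
            throw new AssertionError("expected popped value 42, got " + popped);
        }
        if (!s.isEmpty()) {
            throw new AssertionError("stack should be empty after pop");
        }
    }

    public static void main(String[] args) {
        testPushPop();
        System.out.println("testPushPop passed");
    }
}
```

The summary comment restates, in plain language, which methods the test exercises and which conditions it checks, which is the kind of context the abstract argues helps developers understand generated tests.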
Author Comment
Several refinements and improvements throughout the paper.