Regression Test Optimization
Software regresses when existing functionality stops working after a change to the program. Automatic regression testing is therefore increasingly important, especially for fast-evolving systems and well-implemented continuous delivery. To cover a system's functional behavior, regression test suites tend to be large, and so are their execution times. Moreover, by definition, regression test suites are executed recurrently, so the number of test cases greatly influences the overall execution time. However, a single change affects only part of the system, so we can choose to execute only the relevant test cases (test selection). Similarly, we can permanently remove test cases that have become “irrelevant” (test minimization). Last but not least, ordering the test cases appropriately can reveal regression errors faster (test prioritization).
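To make the prioritization idea concrete, here is a minimal sketch in Java. It assumes a simple heuristic, ordering tests by historical failures per second of runtime so that likely regressions surface early; the class, record, and field names are illustrative, not part of any existing framework.

```java
import java.util.Comparator;
import java.util.List;

public class TestPrioritization {
    // Hypothetical history record for a test case (names are illustrative).
    record TestCase(String name, int recentFailures, double runtimeSeconds) {}

    // Order tests by historical failures per second of runtime, descending,
    // so the tests most likely to reveal a regression run first.
    static List<String> prioritize(List<TestCase> suite) {
        return suite.stream()
            .sorted(Comparator.comparingDouble(
                (TestCase t) -> -t.recentFailures() / t.runtimeSeconds()))
            .map(TestCase::name)
            .toList();
    }

    public static void main(String[] args) {
        List<TestCase> suite = List.of(
            new TestCase("LoginTest", 3, 2.0),    // fails often, runs fast
            new TestCase("CheckoutTest", 1, 8.0), // fails rarely, runs slow
            new TestCase("SearchTest", 0, 1.0));  // no recent failures
        System.out.println(prioritize(suite));
        // → [LoginTest, CheckoutTest, SearchTest]
    }
}
```

Test selection would follow the same shape, but filter the suite by the test cases whose covered code overlaps the change instead of reordering it.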
In research, several optimization techniques have been proposed. However, these strategies often exist only on paper, do not scale, and supporting tools are rare to non-existent. We are working on a framework for Java and JUnit that allows easy integration into existing projects and enables researchers to implement new optimization strategies. Moreover, a concept for identifying “irrelevant” tests and an automated evaluation of strategies have yet to be developed. Further, we want to identify the prerequisites and requirements that make projects suitable for test selection and prioritization techniques.
- Adapting Regression Test Optimization for Continuous Delivery
- Evaluation of Regression Test Optimization Strategies
- A Report Interface for an Extendable Check Execution Framework
- Enhancing Lazzer for Metric-based RTO Strategies
- Localizing Error-inducing Commits in CI Environments
- A Framework for Regression Test Prioritization and Selection
- S. Sok, C. Plewnia, S. Tanachutiwat, H. Lichter (2020): Optimization of Compute Costs in Hybrid Clouds with Full Rescheduling. In 2020 IEEE International Conference on Smart Cloud (SmartCloud) (to be published), IEEE Computer Society, Los Alamitos, CA, USA.
- L. Andika, A. Dyck, H. Lichter (2017): Towards A Design for An Extendable Reporting Interface. In Procedia Computer Science, Vol. 116, 318-325.
- A. Dyck, R. Penners, H. Lichter (2015): Towards Definitions for Release Engineering and DevOps. In 3rd International Workshop on Release Engineering (RELENG) associated with the 37th International Conference on Software Engineering (ICSE 2015), 19 May 2015, Florence, Italy, 3-3.
- C. Plewnia, A. Dyck, H. Lichter (2014): On the Influence of Release Engineering on Software Reputation. In 2nd International Workshop on Release Engineering, April 11, 2014, Mountain View, CA, USA.
- A. Dyck, A. Ganser, H. Lichter (2014): On Designing Recommenders for Graphical Domain Modeling Environments. In Modelsward 2014, Proceedings of the 2nd International Conference on Model-Driven Engineering and Software Development, Lisbon, Portugal, 7-9 January 2014, SCITEPRESS – Science and Technology Publications, 291-299.
- A. Dyck, A. Ganser, H. Lichter (2014): A Framework for Model Recommenders – Requirements, Architecture and Tool Support. In Modelsward 2014, Proceedings of the 2nd International Conference on Model-Driven Engineering and Software Development, Lisbon, Portugal, 7-9 January 2014, SCITEPRESS – Science and Technology Publications, 282-290.
- A. Dyck, A. Ganser, H. Lichter (2013): Enabling Model Recommenders for Command-Enabled Editors. In MoDELS MDEBE – International Workshop on Model-driven Engineering By Example 2013 co-located with MoDELS Conference, September 29, 2013, Miami, Florida, CEUR, 12-21.