Project description: Testing is responsible for up to 50% of the total software/systems development effort. In large-scale complex systems, the number of test cases grows continuously and rapidly, particularly when, in addition to discrete systems (software), continuous dynamics (physical systems with sensors and actuators) must be tested (e.g., in embedded and cyber-physical systems). The high testing effort makes it impossible to obtain frequent and quick feedback about the whole system. Furthermore, model-based approaches lack effective definitions of test coverage. Deciding which subsets of tests to run, and under which conditions, within a limited time frame is a challenge. Many approaches to test case prioritization exist, but few of them have been evaluated or validated. In our
earlier work, we have successfully used value stream mapping to identify “waste” in software testing and developed
a taxonomy to assess the utility and relevance of software testing solutions. This work showed that better tools are needed to support practitioners in understanding the factors relevant to test case selection (e.g., criticality, risk, coverage) and in visualizing those factors effectively.
In this project, we investigate which factors are most important for practitioners in test case selection and develop
notions of model coverage and robustness. We will build tools that visualize these factors and connect them to model-based engineering tools such as Matlab and Acumen. This will improve both testing efficiency (the time needed to design and execute tests) and testing effectiveness (the ability to detect critical defects).
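To make the idea of factor-based test case selection concrete, the sketch below ranks test cases by a weighted score over the factors mentioned above (criticality, risk, coverage). The factor names, weights, and example data are illustrative assumptions, not results of the project.

```python
# Hypothetical sketch: rank test cases by a weighted score over
# selection factors. Weights and data are illustrative only.

def prioritize(tests, weights):
    """Return test cases sorted by descending weighted factor score.

    Each test case is a dict with a name and factor scores in [0, 1];
    `weights` maps factor names to their relative importance.
    """
    def score(t):
        return sum(weights[f] * t.get(f, 0.0) for f in weights)
    return sorted(tests, key=score, reverse=True)

tests = [
    {"name": "t_brake_sensor", "criticality": 0.9, "risk": 0.7, "coverage": 0.4},
    {"name": "t_ui_layout",    "criticality": 0.2, "risk": 0.3, "coverage": 0.8},
    {"name": "t_actuator_io",  "criticality": 0.8, "risk": 0.9, "coverage": 0.5},
]
weights = {"criticality": 0.5, "risk": 0.3, "coverage": 0.2}

ranked = prioritize(tests, weights)
print([t["name"] for t in ranked])
# highest-scoring tests run first when the time budget is limited
```

A time-boxed test run would then execute tests in this order until the budget is exhausted; the visualization tools envisioned in the project would let practitioners inspect and adjust such weightings.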