Test Suite
Selected Abstracts

A test suite for parallel performance analysis tools
Michael Gerndt
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11, 2007

Abstract: Parallel performance analysis tools must be tested to determine whether they perform their task correctly, which comprises at least three aspects. First, it must be ensured that the tools neither alter the semantics nor distort the run-time behavior of the application under investigation. Next, it must be verified that the tools collect the correct performance data as required by their specification. Finally, it must be checked that the tools perform their intended tasks and detect relevant performance problems. Focusing on this last (correctness) aspect, testing can be done using synthetic test functions with controllable performance properties, possibly complemented by real-world applications with known performance behavior. A systematic test suite can be built from synthetic test functions and other components, possibly with the help of tools that assist the user in putting the pieces together into executable test programs. Clearly, such a test suite can be highly useful to builders of performance analysis tools; it is surprising that, until now, no systematic effort has been undertaken to provide one. In this paper we describe the APART Test Suite (ATS) for checking the correctness (in the above sense) of parallel performance analysis tools. In particular, we describe a collection of synthetic test functions which allows one to easily construct both simple and more complex test programs with desired performance properties. We briefly report on experience with MPI and OpenMP performance tools when applied to the test cases generated by ATS. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Towards a deeper understanding of test coverage
Teemu Kanstrén
JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1, 2008

Abstract: Test coverage is traditionally understood as how much of the code is covered by the test suite as a whole. However, test suites typically contain different types of tests with different roles, such as unit tests, integration tests and functional tests. Because traditional measures of test coverage make no distinction between these types, the overall view of test coverage is limited to what is covered by the tests in general. This paper proposes a quantitative way to measure the test coverage of the different parts of the software at different testing levels. It is also shown how this information can be used in software maintenance and development to further evolve the test suite and the system under test. The technique is applied to an open-source project to demonstrate its use in practice. Copyright © 2007 John Wiley & Sons, Ltd. [source]
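The Gerndt abstract above centers on synthetic test functions with controllable performance properties. As a rough illustration only, and not code taken from ATS itself, the sketch below (assuming Python with mpi4py; the function name, parameters and timings are invented for the example) builds one such function: a deliberate load imbalance in front of an MPI barrier, so that a correct performance tool should attribute most of the lower ranks' time to waiting at that barrier.

```python
# A minimal, hypothetical sketch of a synthetic test function with a
# controllable performance property (load imbalance before a barrier).
# Assumes mpi4py; the names and values here are illustrative and are
# not taken from the actual APART Test Suite.
import time
from mpi4py import MPI


def imbalance_at_barrier(comm, base_work=0.1, severity=0.05):
    """Each rank works a different amount of time, then hits a barrier.

    base_work: seconds of 'useful' work every rank performs.
    severity:  extra seconds of work added per rank, so higher ranks
               arrive later and lower ranks wait at the barrier.
    """
    rank = comm.Get_rank()
    t0 = MPI.Wtime()
    time.sleep(base_work + rank * severity)  # simulated, uneven computation
    comm.Barrier()                           # waiting time here is the known "problem"
    return MPI.Wtime() - t0


if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    elapsed = imbalance_at_barrier(comm, base_work=0.1, severity=0.05)
    print(f"rank {comm.Get_rank()}: {elapsed:.3f} s including barrier wait")
```

Run under, say, mpiexec -n 4 python imbalance.py. The property is known and tunable (barrier wait time grows with severity), which is exactly what lets a tool under test be checked against an expected diagnosis.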
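The Kanstrén abstract proposes measuring coverage separately per testing level rather than for the suite as a whole. The paper's own metric and tooling are not reproduced here; the sketch below is only a minimal illustration of the idea, with invented file names, line counts and per-level coverage data: given covered (file, line) pairs recorded once per test level, it reports per-file coverage for each level, so a file exercised only by functional tests stands out.

```python
# Rough sketch of per-test-level coverage reporting. The data and structures
# are assumptions for illustration; the paper's actual measure is not shown.
from collections import defaultdict

# Covered (file, line) pairs, recorded separately per testing level,
# e.g. exported from a coverage tool run once per test group.
covered = {
    "unit":        {("parser.py", 10), ("parser.py", 11), ("util.py", 3)},
    "integration": {("parser.py", 10), ("db.py", 42), ("db.py", 43)},
    "functional":  {("parser.py", 10), ("db.py", 42), ("ui.py", 7)},
}

# Total executable lines per file (normally reported by the coverage tool).
executable_lines = {"parser.py": 20, "util.py": 5, "db.py": 50, "ui.py": 10}


def coverage_by_level(covered, executable_lines):
    """Return {file: {level: percent covered}} so per-level gaps stand out."""
    report = defaultdict(dict)
    for level, lines in covered.items():
        per_file = defaultdict(set)
        for file, line in lines:
            per_file[file].add(line)
        for file, total in executable_lines.items():
            hit = len(per_file.get(file, set()))
            report[file][level] = 100.0 * hit / total
    return dict(report)


if __name__ == "__main__":
    for file, levels in coverage_by_level(covered, executable_lines).items():
        summary = ", ".join(f"{lvl}: {pct:.0f}%" for lvl, pct in levels.items())
        print(f"{file}: {summary}")
```

In this toy data, db.py has integration- and functional-level coverage but 0% unit-level coverage; that is the kind of gap a single, suite-wide coverage figure hides.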