
Test Tools

Tool support for testing is also known as CAST (Computer Aided Software Testing).
The tools may be:
Requirements testing tools
Static analysis tools
Test design tools
Test data preparation tools
Test running tools - character-based, GUI
Comparison tools
Test harnesses and drivers
Performance test tools
Dynamic analysis tools
Debugging tools
Test management tools
Coverage measurement

Test design tools generate test inputs or executable tests from requirements, from a graphical user interface, from design models (state, data or object) or from code. This type of tool may also generate expected outcomes (i.e. it may use a test oracle). Tests generated from a state or object model are useful for verifying the implementation of the model in the software, but are seldom sufficient for verifying all aspects of the software or system. They can save valuable time and provide increased thoroughness of testing because of the completeness of the tests that the tool can generate.

Other tools in this category can aid in supporting the generation of tests by providing structured templates, sometimes called a test frame, that generate tests or test stubs, and thus speed up the test design process.
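
As a rough illustration of how a test design tool can derive tests from a state model, the sketch below walks a simple state-transition model (a made-up document workflow) and emits one test case per transition. The model, function name and output format are assumptions invented for this example, not any particular tool's behaviour.

# Illustrative sketch: derive test cases from a simple
# state-transition model so every transition is exercised once.
STATE_MODEL = {
    # state: {event: next_state}
    "Draft":     {"submit": "Review"},
    "Review":    {"approve": "Published", "reject": "Draft"},
    "Published": {"archive": "Archived"},
    "Archived":  {},
}

def generate_transition_tests(model):
    """Produce one test case per transition (0-switch coverage)."""
    tests = []
    for state, transitions in model.items():
        for event, next_state in transitions.items():
            tests.append({
                "name": f"test_{state}_{event}",
                "precondition": f"system is in state '{state}'",
                "input": event,
                "expected": f"system moves to state '{next_state}'",
            })
    return tests

if __name__ == "__main__":
    for case in generate_transition_tests(STATE_MODEL):
        print(case["name"], "-", case["precondition"],
              "/ input:", case["input"], "/ expected:", case["expected"])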

Test data preparation tools
Data manipulation
selected from existing databases or files
created according to some rules
edited from other sources
Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests. A benefit of these tools is to ensure that live data transferred to a test environment is made anonymous, for data protection.
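
A minimal sketch of the anonymisation idea follows, assuming the live data arrives as a CSV file with 'name' and 'email' columns (invented column names and file names). A real test data preparation tool would do the same at much larger scale and across databases.

# Sketch: copy live data to a test file with personal fields anonymised.
import csv
import hashlib

def anonymise_row(row):
    """Replace personal fields with stable but meaningless values."""
    token = hashlib.sha256(row["email"].encode()).hexdigest()[:8]
    row["name"] = f"user_{token}"
    row["email"] = f"user_{token}@example.test"
    return row

def prepare_test_data(live_file, test_file):
    with open(live_file, newline="") as src, open(test_file, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow(anonymise_row(row))

# Example usage (file names are hypothetical):
# prepare_test_data("live_customers.csv", "test_customers.csv")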

Requirements testing tools:
Automated support for verification and validation of requirements models
consistency checking
animation
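
To make the consistency-checking idea concrete, here is a hedged sketch of one check such a tool might run: every requirement must have text and every reference must point at an existing requirement. The requirement IDs and data structure are invented for the example.

# Sketch: report inconsistencies in a small requirements model.
requirements = {
    "REQ-1": {"text": "User can log in", "depends_on": []},
    "REQ-2": {"text": "User can reset password", "depends_on": ["REQ-1"]},
    "REQ-3": {"text": "Audit log records logins", "depends_on": ["REQ-9"]},  # dangling reference
}

def check_consistency(reqs):
    problems = []
    for req_id, req in reqs.items():
        if not req["text"].strip():
            problems.append(f"{req_id} has empty text")
        for dep in req["depends_on"]:
            if dep not in reqs:
                problems.append(f"{req_id} depends on unknown requirement {dep}")
    return problems

for problem in check_consistency(requirements):
    print("INCONSISTENCY:", problem)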


Static analysis tools
Provide information about the quality of software
Code is examined, not executed
Objective measures
cyclomatic complexity
others: nesting levels, size
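
The sketch below shows one such objective measure in miniature: cyclomatic complexity estimated as 1 plus the number of decision points found by examining (not executing) the code, using Python's standard ast module. Real static analysis tools compute this far more rigorously; this is illustrative only.

# Rough cyclomatic complexity estimate from source code.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):      # 'and' / 'or' add branches
            complexity += len(node.values) - 1
    return complexity

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        if x > 100 and x % 2 == 0:
            return "big even"
    return "positive"
"""
print("cyclomatic complexity:", cyclomatic_complexity(sample))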

Test design tools:
Generate test inputs
from a formal specification or CASE repository
from code (e.g. code not covered yet)

Test running tools
Interface to the software being tested
Run tests as though run by a human tester
Test scripts in a programmable language
Data, test inputs and expected results held in test repositories
Most often used to automate regression testing

Character-based
simulates user interaction from dumb terminals
capture keystrokes and screen responses
GUI (Graphical User Interface)
simulates user interaction for WIMP applications (Windows, Icons, Menus, Pointer)
capture mouse movement, button clicks, and keyboard inputs
capture screens, bitmaps, characters, object states
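
The following is a hedged sketch of a data-driven test script of the kind these tools run: inputs and expected results are held in a small repository (here a list of dicts standing in for a CSV file or database), and the script drives the software through a programmatic interface instead of a human at the keyboard. The function under test, calculate_discount, is invented for the example.

# Data-driven regression script sketch.
def calculate_discount(order_total, is_member):
    """The 'system under test' for this example."""
    if is_member and order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0

TEST_REPOSITORY = [
    {"id": "T1", "total": 150.0, "member": True,  "expected": 15.0},
    {"id": "T2", "total": 150.0, "member": False, "expected": 0.0},
    {"id": "T3", "total": 50.0,  "member": True,  "expected": 0.0},
]

def run_regression_suite():
    failures = 0
    for case in TEST_REPOSITORY:
        actual = calculate_discount(case["total"], case["member"])
        status = "PASS" if actual == case["expected"] else "FAIL"
        if status == "FAIL":
            failures += 1
        print(f"{case['id']}: {status} (expected {case['expected']}, got {actual})")
    return failures

if __name__ == "__main__":
    raise SystemExit(run_regression_suite())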


Comparison Tools:
Detect differences between actual test results and expected results
screens, characters, bitmaps
masking and filtering
Test running tools normally include comparison capability
Stand-alone comparison tools for files or databases
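
A small sketch of the masking and filtering idea: before actual output is compared with expected output, volatile fields such as timestamps and session IDs are replaced by placeholders so that only genuine differences are reported. The masks and output format are invented for illustration.

# Comparison with masking of volatile fields.
import re

MASKS = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"session=[0-9a-f]+"), "session=<ID>"),
]

def mask(text):
    for pattern, replacement in MASKS:
        text = pattern.sub(replacement, text)
    return text

def compare(expected, actual):
    expected_lines = [mask(line) for line in expected.splitlines()]
    actual_lines = [mask(line) for line in actual.splitlines()]
    # Report (line number, expected, actual) for every masked mismatch.
    return [(i + 1, e, a)
            for i, (e, a) in enumerate(zip(expected_lines, actual_lines))
            if e != a]

expected = "Login OK session=ab12cd\nBalance: 100.00\n2024-01-01 10:00:00 logout"
actual   = "Login OK session=ff34ee\nBalance: 100.00\n2024-05-05 09:30:00 logout"
print("differences:", compare(expected, actual))   # [] -> outputs match after masking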

Test harnesses and drivers
Used to exercise software which does not have a user interface (yet)
Used to run groups of automated tests or comparisons
Often custom-built
Simulators (where testing in real environment would be too costly or dangerous)
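
Here is an illustrative harness along those lines: a driver exercises a unit that has no user interface yet, while a stub simulates an expensive or risky dependency (a payment gateway we would not want to hit during testing). All names are invented for the example.

# Driver plus stub for a unit without a user interface.
class PaymentGatewayStub:
    """Simulator replacing the real gateway."""
    def __init__(self, will_succeed=True):
        self.will_succeed = will_succeed
        self.calls = []

    def charge(self, amount):
        self.calls.append(amount)
        return {"status": "ok" if self.will_succeed else "declined"}

def place_order(amount, gateway):
    """Unit under test: no UI, exercised directly by the driver."""
    if amount <= 0:
        return "rejected"
    result = gateway.charge(amount)
    return "confirmed" if result["status"] == "ok" else "failed"

def driver():
    """Driver: runs a small group of checks and reports results."""
    checks = [
        (place_order(25.0, PaymentGatewayStub(True)),  "confirmed"),
        (place_order(25.0, PaymentGatewayStub(False)), "failed"),
        (place_order(-1.0, PaymentGatewayStub(True)),  "rejected"),
    ]
    for actual, expected in checks:
        print("PASS" if actual == expected else "FAIL", "-", actual)

driver()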

Performance testing tools
Load generation
drive application via user interface or test harness
simulates realistic load on the system & logs the number of transactions
Transaction measurement
response times for selected transactions via user interface
Reports based on logs, graphs of load versus response times
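
A minimal load-generation sketch follows: several virtual users fire transactions concurrently while response times are logged and summarised. The "transaction" is simulated with a short sleep; a real performance tool would drive the user interface or a test harness instead.

# Simple load generation with transaction measurement.
import random
import threading
import time

response_times = []
lock = threading.Lock()

def transaction():
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))      # stand-in for real work
    elapsed = time.perf_counter() - start
    with lock:
        response_times.append(elapsed)

def virtual_user(transactions_per_user):
    for _ in range(transactions_per_user):
        transaction()

def run_load_test(users=10, transactions_per_user=20):
    threads = [threading.Thread(target=virtual_user, args=(transactions_per_user,))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    times = sorted(response_times)
    print(f"transactions: {len(times)}")
    print(f"average response: {sum(times) / len(times):.4f}s")
    print(f"95th percentile:  {times[int(len(times) * 0.95) - 1]:.4f}s")

run_load_test()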


Dynamic analysis tools
Provide run-time information on software (while tests are run)
allocation, use and de-allocation of resources, e.g. memory leaks
flag unassigned pointers or pointer arithmetic faults
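
The sketch below shows the flavour of such a run-time check using Python's standard tracemalloc module: memory snapshots taken before and after the tests are compared to spot allocations that were never released. The deliberately leaky code under test is invented for the example.

# Spot memory growth while code runs.
import tracemalloc

leaky_cache = []

def code_under_test():
    # Deliberate 'leak': objects accumulate and are never released.
    leaky_cache.append(bytearray(100_000))

tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(50):
    code_under_test()

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)     # top allocation growth, with file and line number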

Debugging tools
Used by programmers when investigating, fixing and testing faults
Used to reproduce faults and examine program execution in detail
single-stepping
breakpoints or watchpoints at any statement
examine contents of variables and other data
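
As a tiny example using Python's standard debugger (pdb): execution stops at breakpoint(), after which the programmer can single-step with 'n', inspect variables with 'p total', and continue with 'c'. The function is invented for the example.

# Stopping at a breakpoint and examining state with pdb.
def add_vat(prices, rate=0.2):
    total = 0.0
    for price in prices:
        breakpoint()          # drops into pdb at this statement
        total += price * (1 + rate)
    return total

if __name__ == "__main__":
    print(add_vat([10.0, 20.0]))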


Test management tools
Management of testware: test plans, specifications, results
Project management of the test process, e.g. estimation, schedule tests, log results
Incident management tools (may include workflow facilities to track allocation, correction and retesting)
Traceability (of tests to requirements, designs)
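
To illustrate the traceability aspect, the sketch below uses an invented mapping from requirements to tests together with test results to report which requirements are uncovered and which are at risk because a linked test failed.

# Traceability report: requirements vs. tests.
traceability = {
    "REQ-1": ["T1", "T2"],
    "REQ-2": ["T3"],
    "REQ-3": [],              # no tests yet -> gap to report
}

test_results = {"T1": "pass", "T2": "fail", "T3": "pass"}

def coverage_report(matrix, results):
    for req, tests in matrix.items():
        if not tests:
            print(f"{req}: NOT COVERED by any test")
            continue
        failed = [t for t in tests if results.get(t) == "fail"]
        status = "at risk (failures: " + ", ".join(failed) + ")" if failed else "covered"
        print(f"{req}: {status}")

coverage_report(traceability, test_results)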


Coverage measurement tools
Objective measure of what parts of the software structure were executed by tests
Code is instrumented first (a static analysis pass)
Tests are run through the instrumented code
Tool reports what has and has not been covered by those tests, line by line, plus summary statistics
Different types of coverage: statement, branch, condition, LCSAJ, etc.
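
A very small illustration of the instrument-then-run idea: here sys.settrace records which lines of a function actually execute during one test, and the report marks each body line as executed or missed. Real coverage tools (coverage.py, for example) do this far more robustly; the function under test is invented.

# Recording executed lines for a simple statement-coverage report.
import inspect
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "grade":
        executed.add(frame.f_lineno)
    return tracer

def grade(score):
    if score >= 70:
        return "pass"
    else:
        return "fail"      # never reached by the test below

sys.settrace(tracer)
grade(85)                  # the only 'test' we run
sys.settrace(None)

lines, start = inspect.getsourcelines(grade)
for offset, text in enumerate(lines[1:], start=1):   # skip the def line
    lineno = start + offset
    marker = "executed" if lineno in executed else "missed  "
    print(f"{marker} | {text.rstrip()}")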

Test execution tools

Test execution tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language. The scripting language makes it possible to manipulate the tests with limited effort, for example, to repeat the test with different data or to test a different part of the system with similar steps. Generally these tools include dynamic comparison features and provide a test log for each test run. Test execution tools can also be used to record tests, when they may be referred to as capture/playback tools. Capturing test inputs during exploratory testing or unscripted testing can be useful in order to reproduce and/or document a test, for example, if a failure occurs.
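
A hedged sketch of the capture/playback idea: during exploratory testing each interaction is recorded; later the recorded script is replayed against the application to reproduce the session. The trivial 'application' and action names are invented for the example.

# Capture interactions, then replay them later.
import json

def application(action, value=None):
    if action == "login":
        return f"logged in as {value}"
    if action == "open_report":
        return "report displayed"
    return "unknown action"

recording = []

def capture(action, value=None):
    """Record the interaction, then pass it on to the application."""
    recording.append({"action": action, "value": value})
    return application(action, value)

# --- capture phase (a tester exploring the system) ---
capture("login", "alice")
capture("open_report")
script = json.dumps(recording)        # saved script, e.g. written to a file

# --- playback phase (later, to reproduce the session) ---
for step in json.loads(script):
    print("replay:", application(step["action"], step["value"]))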



Performance testing/load testing/stress testing tools
Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions. They simulate a load on an application, a database, or a system environment, such as a network or server. The tools are often named after the aspect of performance that they measure, such as load or stress, so are also known as load testing tools or stress testing tools. They are often based on automated repetitive execution of tests, controlled by parameters.

Monitoring tools
Monitoring tools are not strictly testing tools but provide information that can be used for testing purposes and which is not available by other means. Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems. They store information about the version and build of the software and testware, and enable traceability.
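
As a small example of the continuous-monitoring idea, the sketch below periodically samples free disk space via the standard library and warns when it drops below a threshold. A real monitoring tool would also record build and version information and feed a central log; the path and threshold here are arbitrary assumptions.

# Periodic resource monitoring with a simple warning threshold.
import shutil
import time

def monitor_disk(path="/", threshold_gb=5, samples=3, interval=1):
    for _ in range(samples):
        usage = shutil.disk_usage(path)
        free_gb = usage.free / 1024 ** 3
        status = "WARNING: low disk space" if free_gb < threshold_gb else "ok"
        print(f"{time.strftime('%H:%M:%S')} free={free_gb:.1f} GB {status}")
        time.sleep(interval)

if __name__ == "__main__":
    monitor_disk()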