Ultimate glossary of software testing terms for beginner testers

When you first get started in software testing (just like I did), you will almost certainly be confused by the terms used in the field. The reason is simple: the software testing industry has been around for a long time, and it keeps changing and being updated while you are still new.

No wonder you often get lost in testing terms.

In an effort to make life easier for both of us, I have consolidated the common terms used in software testing into one list. To be honest, I did not invent these terms; I collected them from the best resources on the Internet and put them all together. Please feel free to add more if you find something interesting that is not on this list.

Let’s not waste any more time. I present to you, ladies and gentlemen, the (almost) complete list of terms used in software testing, in alphabetical order:

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z


Application Binary Interface (ABI)

Describes the low-level interface between an application program and the operating system, between an application and its libraries, or between the component parts of the application. An ABI differs from an application programming interface (API) in that an API defines the interface between source code and libraries, so that the same source code will compile on any system supporting that API, whereas an ABI allows compiled object code to function without changes on any system using a compatible ABI.

Acceptance testing

The final test level, conducted by users with the purpose of accepting or rejecting the system before release.

Accessibility Testing

Verifying that a product is accessible to people with disabilities (visually impaired, hard of hearing, etc.).

Actual result

The system’s status or behavior after you conduct a test. An anomaly or deviation occurs when your actual result differs from the expected result.

Ad hoc testing

Testing carried out informally without test cases or other written test instructions.

Agile development

A development method that emphasizes working in short iterations. Automated testing is often used. Requirements and solutions evolve through close collaboration between team members that represent both the client and supplier.

Alpha testing

Operational testing conducted by potential users, customers, or an independent test team at the vendor’s site. Alpha testers should not be from the group involved in the development of the system, in order to maintain their objectivity. Alpha testing is sometimes used as acceptance testing by the vendor.


Anomaly

Any condition that deviates from expectations based on requirements specifications, design documents, standards etc. A good way to find anomalies is by testing the software.

Application Development Lifecycle

The process flow during the various phases of the application development life cycle.

Application Programming Interface (API)

An interface provided by an operating system or library that allows computer programs to request services from it.

Arc Testing

See branch testing.

Automated Software Quality (ASQ)

The use of software tools, such as automated testing tools, to improve software quality.

Automated Software Testing

The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions, without manual intervention.
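A tiny Python sketch of that idea (the function and values are purely illustrative): a toy suite is executed and the actual outcome is compared to the predicted outcome, producing a verdict without any manual checking.

```python
def add(a, b):
    """Toy function under test (illustrative)."""
    return a + b

def run_test(args, expected):
    """Execute one test and compare the actual outcome to the
    predicted outcome, returning a pass/fail verdict."""
    actual = add(*args)
    return ("PASS" if actual == expected else "FAIL", actual)

# A tiny automated suite: each entry is (inputs, expected result).
suite = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]
results = [run_test(args, expected) for args, expected in suite]
```

Real tools wrap the same loop in a framework (runner, reporting, setup of preconditions), but the core is this comparison of actual to expected.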

Automated Testing Tools

Software tools used by development teams to automate and streamline their testing and quality assurance process.


Backus-Naur Form (BNF)

A meta syntax used to express context-free grammars: that is, a formal way to describe formal languages.

Basic Block

A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing

A white box test case design technique that fulfills the requirements of branch testing & also tests all of the independent paths that could be used to construct any arbitrary path through the computer program.

Basis Test Set

A set of test cases derived from Basis Path Testing.

Baseline

The point at which some deliverable produced during the software engineering process is put under formal change control.


Bebugging

A popular software engineering technique used to measure test coverage. Known bugs are randomly added to a program source code and the programmer is tasked to find them. The percentage of the known bugs not found gives an indication of the real bugs that remain.


Behavior

The combination of input values and preconditions along with the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.

Benchmark Testing

Benchmark testing is a normal part of the application development life cycle. It is a team effort that involves both application developers and database administrators (DBAs), and should be performed against your application in order to determine current performance and improve it. If the application code has been written as efficiently as possible, additional performance gains might be realized from tuning the database and database manager configuration parameters. You can even tune application parameters to meet the requirements of the application better.

Benchmark Testing Methods

Benchmark tests are based on a repeatable environment so that the same test run under the same conditions will yield results that you can legitimately compare. You might begin benchmarking by running the test application in a normal environment. As you narrow down a performance problem, you can develop specialized test cases that limit the scope of the function that you are testing. The specialized test cases need not emulate an entire application to obtain valuable information. Start with simple measurements, and increase the complexity only when necessary.

Beta testing

Test that comes after alpha test, and is performed by people outside of the organization that built the system. Beta testing is especially valuable for finding usability flaws and configuration problems.

Binary Portability Testing

Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Big-bang integration

An integration testing strategy in which every component of a system is assembled and tested together; contrast with other integration testing strategies in which system components are integrated one at a time.

Black box testing

Testing in which the test object is seen as a “black box” and the tester has no knowledge of its internal structure. The opposite of white box testing.

Block Matching

Automated matching logic applied to data and transaction driven websites to automatically detect blocks of related data. This enables repeating elements to be treated correctly in relation to other elements in the block without the need for special coding.
See TestDrive-Gold

Bottom-up integration

An integration testing strategy in which you start integrating components from the lowest level of the system architecture. Compare to big-bang integration and top-down integration.

Boundary value analysis

A black box test design technique that tests input or output values that are on the edge of what is allowed or at the smallest incremental distance on either side of an edge. For example, an input field that accepts text between 1 and 10 characters has six boundary values: 0, 1, 2, 9, 10 and 11 characters.
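The example above can be sketched in a few lines of Python (the validator is hypothetical): the field accepts 1 to 10 characters, and the six boundary values are checked against it.

```python
def accepts(text):
    """Hypothetical input field that allows 1 to 10 characters."""
    return 1 <= len(text) <= 10

# The six boundary values from the definition: 0, 1, 2, 9, 10 and 11.
results = {n: accepts("x" * n) for n in (0, 1, 2, 9, 10, 11)}
# Values just outside the edges (0 and 11) must be rejected,
# the values on and just inside the edges accepted.
```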


Branch

A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or, when a component has more than one entry point, a transfer of control to an entry point of the component.

Branch Condition Coverage

The percentage of branch condition outcomes in every decision that has been tested.

Branch Condition Combination Coverage

The percentage of combinations of all branch condition outcomes in every decision that has been tested.

Branch Condition Combination Testing

A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Testing

A technique in which test cases are designed to execute branch condition outcomes.

Branch Testing

A test case design technique for a component in which test cases are designed to execute branch outcomes.

Breadth Testing

A test suite that exercises the full functionality of a product but does not test features in detail.

BS 7925-1

A testing standards document containing a glossary of testing terms. BS stands for ‘British Standard’.

BS 7925-2

A testing standard document that describes the testing process, primarily focusing on component testing. BS stands for ‘British Standard’.


Bug

A slang term for fault, defect, or error. Originally used to describe actual insects causing malfunctions in mechanical devices that predate computers. The International Software Testing Qualifications Board (ISTQB) glossary explains that “a human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.”

See also debugging.


Capture/playback tool

See record and playback tool.


CAST

A general term for automated testing tools; an acronym for computer-aided software testing.

Cause-Effect Graph

A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Capability Maturity Model for Software (CMM)

The CMM is a process model based on software best practices effective in large-scale, multi-person projects. The CMM has been used to assess the maturity levels of organizational areas as diverse as software engineering, system engineering, project management, risk management, system acquisition, information technology (IT) and personnel management, against a scale of five maturity levels, namely: Initial, Repeatable, Defined, Managed and Optimized.

Capability Maturity Model Integration (CMMI)

Capability Maturity Model® Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes. Seen by many as the successor to the CMM, the goal of the CMMI project is to improve the usability of maturity models by integrating many different models into one framework.


Certification

The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.


CCB

See change control board.

Change control board

A group responsible for evaluating, prioritizing, and approving/rejecting requested changes to an IT system.

Change request

A type of document describing a needed or desired change to the system.


Checklist

A simpler form of test case, often merely a document with short test instructions (“one-liners”). An advantage of checklists is that they are easy to develop. A disadvantage is that they are less structured than test cases. Checklists can complement test cases well. In exploratory testing, checklists are often used instead of test cases.


Client

The part of an organization that orders an IT system from the internal IT department or from an external supplier/vendor.

Code Complete

A phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code coverage

A generic term for analysis methods that measure the proportion of code in a system that is executed by testing. Expressed as a percentage, for example, 90% code coverage.

Code-Based Testing

The principle of structural code-based testing is to have each and every statement in the program executed at least once during the test. Based on the premise that one cannot have confidence in a section of code unless it has been exercised by tests, structural code-based testing attempts to test all reachable elements in the software under the given cost and time constraints. The testing process begins by first identifying areas in the program not being exercised by the current set of test cases, followed by creating additional test cases to increase the coverage.

Code Inspection

A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code review

Code review is systematic examination (sometimes referred to as peer review) of computer source code. It is intended to find mistakes overlooked in the initial development phase, improving the overall quality of software

Code standard

Description of how a programming language should be used within an organization.

Code Walkthrough

A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer’s logic and assumptions.

Compatibility Testing

The process of testing to understand if software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.


Compilation

The activity of translating lines of code written in a human-readable programming language into machine code that can be executed by the computer.

Complete Path Testing

See exhaustive testing.


Component

The smallest element of the system, such as a class or a DLL.

Component integration testing

Another term for integration test.

Component testing

Test level that evaluates the smallest elements of the system. Also known as unit test, program test and module test.

Component Specification

A description of a component’s function in terms of its output values for specified input values under specified preconditions.

Computation Data Use

A data use not in a condition. Also called C-use.

Configuration management

Routines for version control of documents and software/program code, as well as managing multiple system release versions.

Configuration testing

A test to confirm that the system works under different configurations of hardware and software, such as testing a website using different browsers.

Concurrent Testing

Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. See Load Testing


Condition

A Boolean expression containing no Boolean operators. For instance, A&lt;B is a condition, but “A and B” is not.

Condition Coverage

See branch condition coverage.

Condition Outcome

The evaluation of a condition to TRUE or FALSE.

Conformance Testing

The process of testing to determine whether a system meets some specified standard. To aid in this, many test procedures and test setups have been developed, either by the standard’s maintainers or external organizations, specifically for testing conformance to standards. Conformance testing is often performed by external organizations; sometimes the standards body itself, to give greater guarantees of compliance. Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard.

Context Driven Testing

The context-driven school of software testing is similar to Agile testing in that it advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Control Flow

An abstract representation of all possible sequences of events in a program’s execution.

Control Flow Graph

The diagrammatic representation of the possible alternative control flow paths through a component.

Control Flow Path

See path

Conversion Testing

Testing of programs or procedures used to convert data from existing systems for use in replacement systems.


Correctness

The degree to which software conforms to its specification.

Coverage

The degree, expressed as a percentage, to which a specified coverage item has been tested.

Coverage Item

An entity or property used as a basis for testing.


COTS

Commercial Off The Shelf. Software that can be bought on the open market. Also called “packaged” software.

Cyclomatic Complexity

A software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program’s source code.
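As a rough illustration (the function below is invented): for a single-entry, single-exit routine, the cyclomatic complexity equals the number of binary decision points plus one, and it tells you how many linearly independent paths need a test.

```python
def grade(score):
    """Three binary decision points (the three ifs), so the cyclomatic
    complexity is 3 + 1 = 4: four linearly independent paths."""
    if score < 0:       # decision 1
        return "invalid"
    if score >= 90:     # decision 2
        return "A"
    if score >= 50:     # decision 3
        return "pass"
    return "fail"

# One input per independent path:
paths = [grade(-5), grade(95), grade(70), grade(10)]
```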


Data Case

Data relationship model simplified for data extraction and reduction purposes in order to create test data.

Data Definition

An executable statement where a variable is assigned a value.

Data Definition C-use Coverage

The percentage of data definition C-use pairs in a component that is exercised by a test case suite.

Data Definition C-use Pair

A data definition and computation data use, where the data use uses the value defined in the data definition.

Data Definition P-use Coverage

The percentage of data definition P-use pairs in a component that is tested.

Data Definition-use Coverage

The percentage of data definition-use pairs in a component that is exercised by a test case suite.

Data Definition-use Pair

A data definition and data use, where the data use uses the value defined in the data definition.

Data Definition-use Testing

A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
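The def-use vocabulary above can be made concrete with a short sketch (the function is invented for illustration); each comment marks a data definition, a predicate use (P-use), or a computation use (C-use).

```python
def shipping_total(weight):
    rate = 2.5            # data definition of "rate"
    if weight > 10:       # predicate data use (P-use) of "weight"
        rate = 2.0        # a second data definition of "rate"
    return weight * rate  # computation data use (C-use) of "weight" and "rate"

# Def-use testing picks inputs so that each definition reaches its uses:
light = shipping_total(4)   # exercises the pair (rate = 2.5, C-use)
heavy = shipping_total(20)  # exercises the pair (rate = 2.0, C-use)
```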

Data Dictionary

A database that contains definitions of all data items defined during analysis.

Data Driven Testing

A framework where test input and output values are read from data files and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script.
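A minimal Python sketch of such a framework, with the external data file simulated by an in-memory CSV (the function and data values are illustrative): input and verification values are read from the data rows, and the script logs a status per row.

```python
import csv
import io

def to_celsius(fahrenheit):
    """Hypothetical function under test."""
    return round((fahrenheit - 32) * 5 / 9, 1)

# In a real framework the rows would come from an external data file;
# an in-memory CSV stands in for it here.
data_file = io.StringIO(
    "fahrenheit,expected_celsius\n"
    "32,0.0\n"
    "212,100.0\n"
    "98.6,37.0\n"
)

log = []  # test status and information, logged per data row
for row in csv.DictReader(data_file):
    actual = to_celsius(float(row["fahrenheit"]))
    expected = float(row["expected_celsius"])
    log.append((row["fahrenheit"], actual, actual == expected))
```

Adding a new test case is then just adding a new data row, with no change to the script.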

Data Flow Diagram

A modeling notation that represents a functional decomposition of a system.

Data Flow Coverage

Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

Data Flow Testing

Data-flow testing looks at the lifecycle of a particular piece of data (i.e. a variable) in an application. By looking for patterns of data usage, risky areas of code can be found and more test cases can be applied.

Data Protection

Technique in which the condition of the underlying database is synchronized with the test scenario so that differences can be attributed to logical changes. This technique also automatically resets the database after tests – allowing for a constant data set if a test is re-run.

Data Protection Act

UK Legislation surrounding the security, use and access of an individual’s information. May impact the use of live data used for testing purposes.

Data Use

An executable statement where the value of a variable is accessed.

Database Testing

The process of testing the functionality, security, and integrity of the database and the data held within.

Daily build

A process in which the test object is compiled every day in order to allow daily testing. While it ensures that defects are reported early and regularly, it requires automated testing support.


Debugging

The process in which developers identify, diagnose, and fix errors found.

Decision table

A test design and requirements specification technique. A decision table describes the logical conditions and rules for a system. Testers use the table as the basis for creating test cases.
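A minimal sketch of how a decision table drives test design — the shipping-fee rules and the function below are invented for illustration. Each rule (a combination of conditions and its required action) becomes one test case.

```python
# Conditions: (is_member, order_over_50); action: shipping fee.
# Each rule of the decision table becomes one entry here.
decision_table = {
    (True,  True):  0,    # member, large order -> free shipping
    (True,  False): 5,
    (False, True):  5,
    (False, False): 10,
}

def shipping_fee(is_member, order_over_50):
    """Hypothetical system under test."""
    if is_member and order_over_50:
        return 0
    if is_member or order_over_50:
        return 5
    return 10

# Derive one test case per rule and compare actual against the table:
results = {rule: shipping_fee(*rule) == fee
           for rule, fee in decision_table.items()}
```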


Defect

A flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system.

Defect report

A document used to report a defect in a component, system, or document. Also known as an incident report.


Deliverable

Any product that must be delivered to someone other than the author of the product. Examples of deliverables are documentation, code and the system.

Delta Release

A delta, or partial, release is one that includes only those areas within the release unit that have actually changed or are new since the last full or delta release. For example, if the release unit is the program, a delta release contains only those modules that have changed, or are new, since the last full release of the program or the last delta release of certain modules.

Dependency Testing

Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing

A test that exercises a feature of a product in full detail.

Desk checking

A static testing technique in which the tester reads code or a specification and “executes” it in his mind.

Design-Based Testing

Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behavior of algorithms).

Dirty Testing

Testing which demonstrates that the system under test does not work. (Also known as negative testing).

Documentation Testing

Testing concerned with the accuracy of documentation.


Domain

The set from which values are selected.

Domain Expert

A person who has significant knowledge in a specific domain.

Domain Testing

Domain testing is the most frequently described test technique. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.


Downtime

The total period that a service or component is not operational.

Document review

See review.


Driver

See test driver.


DSDM

Dynamic Systems Development Method. An iterative development approach.

Dynamic testing

Testing performed while the system is running. Dynamic testing involves working with the software, giving input values and checking if the output is as expected.

Dynamic Analysis

The examination of the physical response from the system to variables that are not constant and change with time.



Emulator

A device that duplicates (provides an emulation of) the functions of one system using a different system, so that the second system behaves like (and appears to be) the first system. This focus on exact reproduction of external behavior is in contrast to simulation, which can concern an abstract model of the system being simulated, often considering internal state.

Endurance Testing

Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-end testing

Testing used to determine whether an application, from start to finish, performs as expected. This technique can be used to identify system dependencies and to confirm that the integrity of data transferred across different system components is maintained.

Entry criteria

Criteria that must be met before you can initiate testing, such as that the test cases and test plans are complete.

Entry Point

The first executable statement within a component.

Equivalence Class

A mathematical concept: an equivalence class is a subset of a given set induced by an equivalence relation on that set. (If the given set is empty, then the equivalence relation is empty, and there are no equivalence classes; otherwise, the equivalence relation and its concomitant equivalence classes are all non-empty.) Elements of an equivalence class are said to be equivalent, under the equivalence relation, to all the other elements of the same equivalence class.

Equivalence partitioning

A test design technique based on the fact that data in a system is managed in classes, such as intervals. Because of this, you only need to test a single value in every equivalence class. For example, you can assume that a calculator performs all addition operations in the same way; so if you test one addition operation, you have tested the entire equivalence class.
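A minimal sketch of the technique (the validator and its classes are invented for illustration): the input space falls into three equivalence classes, and one representative per class suffices.

```python
def classify_age(age):
    """Hypothetical component with three equivalence classes for its
    input: invalid (< 0), minor (0-17) and adult (18+)."""
    if age < 0:
        return "invalid"
    return "minor" if age < 18 else "adult"

# One representative value per equivalence class is enough:
representatives = {"invalid": -3, "minor": 10, "adult": 40}
results = {cls: classify_age(value) for cls, value in representatives.items()}
```

Combined with boundary value analysis, you would also test the edges of each class (-1, 0, 17, 18).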

Equivalence Partition Coverage

The percentage of equivalence classes generated for the component that have been tested.

Equivalence Partition Testing

A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.


Error

A human action that produces an incorrect result.

Error description

The section of a defect report where the tester describes the test steps he/she performed, what the outcome was, what result he/she expected, and any additional information that will assist in troubleshooting.

Error guessing

Experience-based test design technique where the tester develops test cases based on his/her skill and intuition, and experience with similar systems and technologies.

Error Seeding

The process of injecting a known number of “dummy” defects into the program and then checking how many of them are found by various inspections and tests. If, for example, 60% of the seeded defects are found, the presumption is that 60% of the real defects have been found as well.
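Under that presumption, the number of defects still remaining can be estimated with a few lines of arithmetic (the figures below are illustrative):

```python
def estimate_remaining(seeded, seeded_found, real_found):
    """If testing found seeded_found of seeded dummy defects, assume
    the same detection rate applies to the real defects."""
    detection_rate = seeded_found / seeded          # e.g. 6/10 = 60%
    estimated_total = real_found / detection_rate   # estimated real defects
    return round(estimated_total - real_found)      # still undetected

# 10 defects seeded, 6 of them found, and 30 real defects found so far:
remaining = estimate_remaining(seeded=10, seeded_found=6, real_found=30)
```

With a 60% detection rate, the 30 real defects found suggest about 50 in total, so roughly 20 remain.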

Evaluation Report

A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Executable statement

A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.


Exercised

A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.


Execute

To run or conduct. When a program is executing, it means that the program is running. When you execute or conduct a test case, you can also say that you are running the test case.

Exhaustive testing

A test approach in which you test all possible inputs and outputs.

Exit criteria

Criteria that must be fulfilled for testing to be considered complete, such as that all high-priority test cases are executed, and that no open high-priority defect remains. Also known as completion criteria.

Expected result

A description of the test object’s expected status or behavior after the test steps are completed. Part of the test case.

Exit Point

The last executable statement within a component.

Expert System

A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user’s request for advice.


Expertise

Specialized domain knowledge, skills, tricks, shortcuts and rules-of-thumb that provide an ability to rapidly and effectively solve problems in the problem domain.

Exploratory testing

A test design technique based on the tester’s experience; the tester creates the tests while he/she gets to know the system and executes the tests.

External supplier

A supplier/vendor that doesn’t belong to the same organization as the client/buyer.

Extreme programming

An agile development methodology that emphasizes the importance of pair programming, where two developers write program code together. The methodology also implies frequent deliveries and automated testing.


Factory acceptance test (FAT)

Acceptance testing carried out at the supplier’s facility, as opposed to a site acceptance test, which is conducted at the client’s site.


Failure

Deviation of the component or system under test from its expected result.


Fault

A manifestation of an error in software. Also known as a bug.

Fault Injection

A technique used to improve test coverage by deliberately inserting faults to test different code paths, especially those that handle errors and which would otherwise be impossible to observe.
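A small Python sketch of the idea (the file name and functions are hypothetical): the fault is injected through a replaceable file opener, which deliberately drives execution down an error-handling branch that ordinary inputs would rarely reach.

```python
def read_config(open_fn=open):
    """Hypothetical component; the file opener is injectable so that
    a fault can be forced from a test."""
    try:
        with open_fn("app.cfg") as f:
            return f.read()
    except OSError:
        return "defaults"   # error-handling path, hard to reach otherwise

def failing_open(path):
    """Injected fault: simulate an unavailable disk."""
    raise OSError("injected fault")

# Injecting the fault exercises the except branch deliberately:
result = read_config(open_fn=failing_open)
```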

Feasible Path

A path for which there exists a set of input values and execution conditions which causes it to be executed.

Feature Testing

A method of testing which concentrates on testing one feature at a time.

Firing a Rule

A rule fires when the “if” part (premise) is proven to be true. If the rule incorporates an “else” component, the rule also fires when the “if” part is proven to be false.

Fit For Purpose Testing

Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was designed and acquired.

Forward Chaining

Applying a set of previously determined facts to the rules in a knowledge base to see if any of them will fire.

Formal review

A review that proceeds according to a documented review process that may include, for example, review meetings, formal roles, required preparation steps, and goals. Inspection is an example of a formal review.

Full Release

All components of the release unit that are built, tested, distributed and implemented together.

See also delta release.

Functional integration

An integration testing strategy in which the system is integrated one function at a time. For example, all the components needed for the “search customer” function are put together and tested one by one.

Functional Specification

The document that describes in detail the characteristics of the product with regard to its intended capability.

Functional Decomposition

A technique used during planning, analysis and design; creates a functional hierarchy for the software. Functional Decomposition broadly relates to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by function composition. In general, this process of decomposition is undertaken either for the purpose of gaining insight into the identity of the constituent components (which may reflect individual physical processes of interest, for example), or for the purpose of obtaining a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e. independence or non-interaction).

Functional Requirements

Define the internal workings of the software: that is, the calculations, technical details, data manipulation and processing and other specific functionality that show how the use cases are to be satisfied. They are supported by non-functional requirements, which impose constraints on the design or implementation (such as performance requirements, security, quality standards, or design constraints).

Functional Specification

A document that describes in detail the characteristics of the product with regard to its intended features.

Functional testing

Testing of the system’s functionality and behavior; the opposite of non-functional testing.


Genetic Algorithms

Search procedures that use the mechanics of natural selection and natural genetics. It uses evolutionary techniques, based on function optimization and artificial intelligence, to develop a solution.

Glass Box Testing

Also known as white box testing, a form of testing in which the tester can examine the design documents and the code as well as analyze and possibly manipulate the internal state of the entity being tested. Glass box testing involves examining the design documents and the code, as well as observing at run time the steps taken by algorithms and their internal data.


Goal

The solution that the program or project is trying to reach.

Gorilla Testing

An intense round of testing, quite often redirecting all available resources to the activity. The idea here is to test as much of the application as possible in as short a period of time as possible.

Graphical User Interface (GUI)

A type of display format that enables the user to choose commands, start programs, and see lists of files and other options by pointing to pictorial representations (icons) and lists of menu items on the screen.

Gray-box testing

Testing which uses a combination of white box and black box testing techniques, applied to a system whose internal structure the tester has only limited knowledge of.



Harness

A test environment comprised of stubs and drivers needed to conduct a test.


Heuristics

The informal, judgmental knowledge of an application area that constitutes the “rules of good judgment” in the field. Heuristics also encompass the knowledge of how to solve problems efficiently and effectively, how to plan steps in solving a complex problem, how to improve performance, etc.

High Order Tests

High-order testing checks that the software meets customer requirements and that the software, along with other system elements, meets the functional, behavioral, and performance requirements. It uses black-box techniques and requires an outsider perspective. Therefore, organizations often use an Independent Testing Group (ITG) or the users themselves to perform high-order testing. High-order testing includes validation testing, system testing (focusing on aspects such as reliability, security, stress, usability, and performance), and acceptance testing (including alpha and beta testing). The testing strategy specifies the type of high-order testing that the project requires. This depends on the aspects that are important in a particular system from the user perspective.


IEEE 829

An international standard for test documentation published by the IEEE organization. The full name of the standard is IEEE Standard for Software Test Documentation. It includes templates for the test plan, various test reports, and handover documents.

Impact analysis

Techniques that help assess the impact of a change. Used to determine the choice and extent of regression tests needed.

Implementation Testing

See Installation Testing.

Incremental Testing

Partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.


Incident

A condition that differs from what is expected, such as a deviation from requirements or test cases.

Incident report

See defect report.


Independence

Separation of responsibilities, which ensures the accomplishment of objective evaluation.

Independent testing

A type of testing in which testers’ responsibilities are divided up in order to maintain their objectivity. One way to do this is by giving different roles the responsibility for various tests. You can use different sets of test cases to test the system from different points of view.

Independent Test Group (ITG)

A group of people, separate from the development team, whose primary responsibility is to conduct software testing.

Infeasible path

A path which cannot be exercised by any set of possible input values.

Inference

Forming a conclusion from existing facts.

Inference Engine

Software that provides the reasoning mechanism in an expert system. In a rule-based expert system, it typically implements forward chaining and backward chaining strategies.


Infrastructure

The organizational artifacts needed to perform testing, consisting of test environments, automated test tools, office environment and procedures.


Inheritance

The ability of a class to pass on characteristics and data to its descendants.

Input

A variable (whether stored within a component or outside it) that is read by the component.

Input Domain

The set of all possible inputs.

Informal review

A review that isn’t based on a formal procedure.


Inspection

An example of a formal review technique.


Installability

The ability of a software component or system to be installed on a defined target platform allowing it to be run as required. Installation includes both a new installation and an upgrade.

Installability Testing

Testing whether the software or system installation being tested meets predefined installation requirements.

Installation Guide

Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

Installation test

A type of test meant to assess whether the system meets the requirements for installation and uninstallation. This could include verifying that the correct files are copied to the machine and that a shortcut is created in the application menu.

Installation Wizard

Supplied software on any suitable media, which leads the installer through the installation process. It shall normally run the installation process, provide feedback on installation outcomes and prompt for options.


Instrumentation

The insertion of additional code into the program in order to collect information about program behavior during program execution.

Instrumentation code

Code that makes it possible to monitor information about the system’s behavior during execution. Used when measuring code coverage, for example.
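As a rough illustration (the decorator and function names here are invented, not from any particular tool), instrumentation code can be as simple as a wrapper that records information each time the instrumented function executes:

```python
# A minimal sketch of instrumentation code: a decorator inserts extra
# bookkeeping around the original function so that call counts can be
# monitored during execution.

call_counts = {}

def instrument(func):
    def wrapper(*args, **kwargs):
        # Instrumentation: record the call before delegating to the
        # original, unmodified behavior.
        call_counts[func.__name__] = call_counts.get(func.__name__, 0) + 1
        return func(*args, **kwargs)
    return wrapper

@instrument
def add(a, b):
    return a + b
```

Real coverage tools work on the same principle, but insert probes per statement or branch rather than per function call.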


Integration

The process of combining components into larger groups or assemblies.

Integration testing

A test level meant to show that the system’s components work with one another. The goal is to find problems in interfaces and communication between components.

Internal supplier

Developer that belongs to the same organization as the client. The IT department is usually the internal supplier.

Interface Testing

Integration testing where the interfaces between system components are tested.

Isolation Testing

Component testing of individual components in isolation from surrounding components.


ISTQB

International Software Testing Qualifications Board. ISTQB is responsible for international programs for testing certification.


Iteration

A development cycle consisting of a number of phases, from formulation of requirements to delivery of part of an IT system. Common phases are analysis, design, development, and testing. The practice of working in iterations is called iterative development.



JUnit

A framework for testing Java applications, specifically designed for automated testing of Java components.


KBS (Knowledge Based System)

A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user’s request for advice.

Key Performance Indicator

Quantifiable measurements against which specific performance criteria can be set.

Keyword Driven Testing

An approach to test script writing, aimed at code-based automation tools, that separates much of the programming work from the actual test steps. The result is that the test steps can be designed earlier and the code base is often easier to read and maintain.
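A minimal sketch of the idea, assuming an invented “cart” application and made-up keywords: the test is written as a table of (keyword, argument) rows, and a small driver maps each keyword to the code that performs it.

```python
# Keyword-driven testing sketch: test steps are data rows; a driver
# dispatches each row to the handler registered for its keyword.

cart = []

def do_add_item(name):
    cart.append(name)

def do_remove_item(name):
    cart.remove(name)

def do_assert_count(expected):
    assert len(cart) == int(expected)

KEYWORDS = {
    "add item": do_add_item,
    "remove item": do_remove_item,
    "assert count": do_assert_count,
}

def run_test(steps):
    # Each row of the test table is dispatched to its keyword handler.
    for keyword, arg in steps:
        KEYWORDS[keyword](arg)

run_test([
    ("add item", "book"),
    ("add item", "pen"),
    ("remove item", "pen"),
    ("assert count", "1"),
])
```

Because the table rows contain no code, non-programmers can design the steps while the keyword implementations are maintained separately.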

Knowledge Engineering

The process of codifying an expert’s knowledge in a form that can be accessed through an expert system.

Known Error

An incident or problem for which the root cause is known and for which a temporary workaround or a permanent alternative has been identified.



LCSAJ

A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ Coverage

The percentage of LCSAJs of a component which are exercised by a test case suite.

LCSAJ Testing

A test case design technique for a component in which test cases are designed to execute LCSAJs.

Logic-Coverage Testing

Sometimes referred to as Path Testing, logic-coverage testing attempts to expose software defects by exercising a unique combination of the program’s statements known as a path.

Load testing

A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of concurrent users and/or numbers of transactions. Used to determine what load can be handled by the component or system.
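As a toy sketch of the idea (the `handle_request` function stands in for the real system, and 50 concurrent users is an arbitrary number), a load test drives the component from many simulated users at once and then checks that every request was handled:

```python
# Load testing sketch: many concurrent "users" call the component
# under test; afterwards we verify no request was lost or corrupted.
import threading

results = []
lock = threading.Lock()

def handle_request(n):
    # Stand-in for the component under test.
    return n * 2

def simulated_user(user_id):
    out = handle_request(user_id)
    with lock:  # protect the shared results list
        results.append(out)

def run_load(n_users):
    threads = [threading.Thread(target=simulated_user, args=(i,))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

run_load(50)
```

A real load test would also measure response times and throughput as the load increases, which is the point of the technique.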

Localization Testing

Testing of software that has been adapted for a specific locality. This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale. Localization testing can be executed only on the localized version of a product.


Log

A chronological record of relevant details about the execution of tests.

Loop Testing

Loop testing is the testing of a resource or resources multiple times under program control.



Maintainability

A measure of how easy a given piece of software code is to modify in order to correct defects, improve or add functionality.


Maintenance

Activities for managing a system after it has been released in order to correct defects or to improve or add functionality. Maintenance activities include requirements management, testing, and development, among others.

Maintenance Requirements

A specification of the required maintenance for the system/software. Released software often needs to be revised and/or upgraded throughout its lifecycle. It is therefore essential that the software can be easily maintained, and that any errors found during rework and upgrading can be corrected.

Manual Testing

The oldest type of software testing. Manual testing requires a tester to perform manual test operations on the test software without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, un-opinionated, and skillful.


Metric

A standard of measurement. Software metrics are statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

Modified Condition/Decision Coverage

The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

Modified Condition/Decision Testing

A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
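For the decision `a and b`, three cases are enough to show each condition independently affecting the outcome. The sketch below is illustrative, not tied to any tool:

```python
# MC/DC sketch for the decision "a and b". Comparing case 1 with
# case 2 shows b independently flipping the outcome (a held True);
# comparing case 1 with case 3 shows a doing the same (b held True).

def decision(a, b):
    return a and b

mcdc_cases = [
    (True, True),    # decision is True
    (True, False),   # only b changed -> decision flips to False
    (False, True),   # only a changed -> decision flips to False
]

outcomes = [decision(a, b) for a, b in mcdc_cases]
```

Note that full multiple-condition coverage would need all four combinations; MC/DC achieves its goal here with three.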

Monkey Testing

Testing a system or an application on the fly, i.e. with no specific tests or end result in mind.

Module testing

See component testing.

Multiple Condition Coverage

See Branch Condition Combination Coverage.

Mutation Analysis

A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program.

Mutation Testing

Testing done on the application where bugs are purposely added to it.
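A toy illustration of the idea (the functions and the single mutated operator are invented): a “mutant” is produced by deliberately changing one operator, and the test suite is judged by whether it can distinguish the mutant from the original, i.e. “kill” it.

```python
# Mutation testing sketch: one operator is deliberately changed to
# create a mutant; a good test suite should fail against the mutant.

def max_of(a, b):          # original code under test
    return a if a >= b else b

def max_of_mutant(a, b):   # mutant: ">=" changed to "<="
    return a if a <= b else b

def suite_passes(impl):
    # The test suite, run against a given implementation.
    try:
        assert impl(3, 5) == 5
        assert impl(7, 2) == 7
        return True
    except AssertionError:
        return False

original_survives = suite_passes(max_of)         # suite accepts original
mutant_killed = not suite_passes(max_of_mutant)  # suite rejects mutant
```

If a mutant survives the suite, that is a signal the suite is missing a test that would detect that class of bug.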


MTBF

Mean time between failures. The average time between failures of a system.


N-switch Coverage

The percentage of sequences of N-transitions that have been tested.

N-switch Testing

A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.


N-transitions

A sequence of N+1 transitions.

N+1 Testing

A variation of regression testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.

Naming standard

The standard for creating names for variables, functions, and other parts of a program. For example, strName, sName and Name are all technically valid names for a variable, but if you don’t adhere to one structure as the standard, maintenance will be very difficult.

Negative testing

A type of testing intended to show that the system works well even if it is not used correctly. For example, if a user enters text in a numeric field, the system should not crash.
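A minimal sketch of a negative test, assuming an invented `parse_age` function: invalid text input is supplied on purpose to confirm the component fails in a controlled way rather than crashing.

```python
# Negative testing sketch: feed deliberately invalid input and check
# the component rejects it gracefully.

def parse_age(text):
    # Component under test: rejects non-numeric input explicitly.
    if not text.isdigit():
        raise ValueError("age must be a number")
    return int(text)

def negative_test():
    try:
        parse_age("abc")       # invalid input, on purpose
    except ValueError:
        return "handled"       # expected, controlled failure
    return "not handled"       # bug: invalid input was accepted

result = negative_test()
```

The positive test ("42" parses to 42) and the negative test together cover both sides of the input validation.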

Neural Network

A system modeled after the neurons (nerve cells) in a biological nervous system. A neural network is designed as an interconnected system of processing elements, each with a limited number of inputs and outputs. Rather than being programmed, these systems learn to recognize patterns.

Non-functional Requirements Testing

Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.

Non-functional testing

Testing of non-functional aspects of the system, such as usability, reliability, maintainability, and performance.


Normalization

A technique for designing relational database tables to minimize duplication of information and, in so doing, to safeguard the database against certain types of logical or structural problems, namely data anomalies.


NUnit

An open source framework for automated testing of components in Microsoft .NET applications.



Object

A software structure which represents an identifiable item that has a well-defined role in a problem domain.

Object Oriented

An adjective applied to any system or language that supports the use of objects.


Objective

The purpose of the specific test being undertaken.

Open source

A form of licensing in which software is offered free of charge, with its source code available for use and modification. Open source software is frequently available via download from the internet, from www.sourceforge.net for example.

Operational testing

Tests carried out when the system has been installed in the operational environment (or simulated operational environment) and is otherwise ready to go live. Intended to test operational aspects of the system, e.g. recoverability, co-existence with other systems and resource consumption.


Oracle

A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.


Outcome

The result after a test case has been executed.


Output

A variable (whether stored within a component or outside it) that is written to by the component.

Output Domain

The set of all possible outputs.

Output Value

An instance of an output.


Page Fault

A program interruption that occurs when a page that is marked ‘not in real memory’ is referred to by an active page.

Pair programming

A software development approach where two developers sit together at one computer while programming a new system. While one developer codes, the other makes comments and observations, and acts as a sounding board. The technique has been shown to lead to higher quality thanks to the de facto continuous code review – bugs and errors are avoided because the team catches them as the code is written.

Pair testing

Test approach where two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, work together to find defects. Typically, they share one computer and trade control of it while testing. One tester can act as observer when the other performs tests.

Pairwise Testing

A combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm) tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by “parallelizing” the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.
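For three two-valued parameters, exhaustive testing needs 2×2×2 = 8 cases, but the four rows below already cover every value pair for every pair of parameters. The check is a small sketch, not a pairwise generator:

```python
# Pairwise testing sketch: verify that 4 hand-picked cases cover all
# value pairs for each of the 3 parameter pairs of 3 boolean inputs.
from itertools import combinations

pairwise_cases = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def covered_pairs(cases):
    # For each pair of parameter positions, collect the value pairs seen.
    seen = {(i, j): set() for i, j in combinations(range(3), 2)}
    for case in cases:
        for i, j in combinations(range(3), 2):
            seen[(i, j)].add((case[i], case[j]))
    return seen

# Each parameter pair has 2*2 = 4 possible value pairs; all are covered.
all_pairs_covered = all(
    len(pairs) == 4 for pairs in covered_pairs(pairwise_cases).values()
)
```

Half the exhaustive suite achieves full pair coverage here; the savings grow rapidly as the number of parameters increases.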

Partial Test Automation

The process of automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.


Pass

Software is deemed to have passed a test if the actual results of the test match the expected results.

Pass/Fail Criteria

Decision rules used to determine whether an item under test has passed or failed a test.


Path

A sequence of executable statements of a component, from an entry point to an exit point.

Path Coverage

The percentage of paths in a component exercised by a test case suite.

Path Sensitizing

Choosing a set of input values to force the execution of a component to take a given path.

Path Testing

Used as either a black box or white box technique, the procedure itself is similar to a walk-through. First, a certain path through the program is chosen. Possible inputs and the correct result are written down. Then the program is executed by hand, and its result is compared to the predefined result. Any faults found are written down at once.
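A small sketch of the idea, using an invented `classify` function: each chosen input forces execution down a different path, and the hand-computed expected result is compared to the actual one.

```python
# Path testing sketch: one test input per path through the function,
# with the expected result worked out in advance.

def classify(n):
    if n < 0:
        return "negative"   # path A
    elif n == 0:
        return "zero"       # path B
    return "positive"       # path C

# Each tuple is (input sensitizing one path, predefined result).
path_tests = [(-3, "negative"), (0, "zero"), (7, "positive")]

all_paths_pass = all(classify(n) == expected for n, expected in path_tests)
```

With three inputs, every path through the function has been exercised, which is what path coverage measures.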


Performance

The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance testing

A test to evaluate whether the system meets performance requirements such as response time or transaction frequency.


Portability

The ease with which the system/software can be transferred from one hardware or software environment to another.

Portability Requirements

A specification of the required portability for the system/software.

Portability Testing

The process of testing the ease with which a software component can be moved from one environment to another. This is typically measured in terms of the maximum amount of effort permitted. Results are expressed in terms of the time required to move the software and complete data conversion and documentation updates.

Positive testing

A test aimed to show that the test object works correctly in normal situations. For example, a test to show that the process of registering a new customer functions correctly when using valid test data.


Postconditions

Environmental and state conditions that must be fulfilled after a test case or test run has been executed.


Preconditions

Environmental and state conditions that must be fulfilled before the component or system can be tested. May relate to the technical environment or the status of the test object. Also known as prerequisites or preparations.


Prerequisites

See preconditions.


Predicate

A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path in code.


Predication

The choice to execute or not to execute a given instruction.

Predicted Outcome

The behavior expected by the specification of an object under specified conditions.




Priority

The level of importance assigned to, e.g., a defect.

Professional tester

A person whose sole job is testing.

Program testing

See component testing.


Process

A course of action which turns inputs into outputs or results.

Process Cycle Test

A black box test design technique in which test cases are designed to execute business procedures and processes.

Progressive Testing

Testing of new features after regression testing of previous features.


Project

A planned undertaking for presentation of results at a specified time in the future.


Prototyping

A strategy in system development in which a scaled-down system or portion of a system is constructed in a short time, then tested and improved upon over several iterations.


Pseudo-Random

A series which appears to be random but is in fact generated according to some prearranged sequence.



Quality

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Quality assurance (QA)

Systematic monitoring and evaluation of various aspects of a component or system to maximize the probability that minimum standards of quality are being attained.

Quality Attribute

A feature or characteristic that affects an item’s quality.

Quality Audit

A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle

A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control

The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management

That aspect of the overall management function that determines and implements the quality policy. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement.

Quality Conundrum

Resources, risk, and application time-to-market are often in conflict as IS teams strive to deliver quality applications within their budgetary constraints. This is the quality conundrum.

Quality Policy

The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System

The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.


Query

A question. Often associated with an SQL query of values in a database.

Queuing Time

Incurred when the device, which a program wishes to use, is already busy. The program therefore has to wait in a queue to obtain service from that device.



ROI

Return on Investment. A performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio.

Ramp Testing

Continuously raising an input signal until the system breaks down.

Random Testing

A black-box testing approach in which software is tested by choosing an arbitrary subset of all possible input values. Random testing helps to avoid the problem of only testing what you know will work.
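A small sketch of the approach, using an invented `normalize` function as the component under test: arbitrary inputs are generated (seeded, so the run is reproducible) and a general property is checked for each one.

```python
# Random testing sketch: generate arbitrary inputs and check general
# properties of the output rather than one hand-picked expected value.
import random

def normalize(xs):
    # Component under test (a trivial stand-in).
    return sorted(xs)

rng = random.Random(42)  # fixed seed so failures can be reproduced
failures = 0
for _ in range(100):
    xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 10))]
    out = normalize(xs)
    # Properties: output is ordered, and normalizing is idempotent.
    if out != sorted(out) or normalize(out) != out:
        failures += 1
```

Seeding the generator is the practical trick: random inputs find surprises, but any failure can still be replayed exactly.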


Re-testing

Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.


Recoverability

The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure.

Recovery Testing

The activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.

Record and playback tool

Test execution tool for recording and playback of test cases often used to support automation of regression testing. Also known as capture/playback.

Recreation Materials

A script or set of results containing the steps required to reproduce a desired outcome.

Regression testing

A test activity generally conducted in conjunction with each new release of the system, in order to detect defects that were introduced (or discovered) when prior defects were fixed. Compare to Re-testing.

Relational Operator

Conditions such as “is equal to” or “is less than” that link an attribute name with an attribute value in a rule’s premise to form logical expressions that can be evaluated true or false.


Release

A new version of the system under test. The release can be either an internal release from developers to testers, or a release of the system to the client.

See also release management.

Release Candidate

A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Release management

A set of activities geared to create new versions of the complete system. Each release is identified by a distinct version number.

Release Note

A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase.

Release testing

A type of non-exhaustive test performed when the system is installed in a new target environment, using a small set of test cases to validate critical functions without going into depth on any one of them. Also called smoke testing – a funny way to say that, as long as the system does not actually catch on fire and start smoking, it has passed the test.


Reliability

The ability of the system/software to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.

Reliability Requirements

A specification of the required reliability for the system/software.

Reliability Testing

Testing to determine whether the system/software meets the specified reliability requirements.


Requirement

A capability that must be met or possessed by the system/software (requirements may be functional or non-functional).

Requirements-based Testing

An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements. For example: tests that exercise specific functions or probe nonfunctional attributes such as reliability or usability.

Requirements management

A set of activities covering gathering, elicitation, documentation, prioritization, quality assurance and management of requirements for an IT system.

Requirements manager

The person responsible for requirements management. Also known as Requirements Lead or Business Analyst.


Result

The consequence or outcome of a test.


Retest

A test to verify that a previously-reported defect has been corrected.

Retrospective meeting

A meeting at the end of a project/a sprint during which the team members evaluate the work and learn lessons that can be applied to the next project or sprint.


Review

A static test technique in which the reviewer reads a text in a structured way in order to find defects and suggest improvements. Reviews may cover requirements documents, test documents, code, and other materials, and can range from informal to formal.


Reviewer

A person involved in the review process who identifies and documents discrepancies in the item being reviewed. Reviewers are selected to represent different areas of expertise, stakeholder groups, and types of analysis.


Risk

A factor that could result in future negative consequences. Usually expressed in terms of impact and likelihood.

Risk-based testing

A structured approach in which test cases are chosen based on risks. Test design techniques like boundary value analysis and equivalence partitioning are risk-based. All testing ought to be risk-based.

Risk Management

Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.


Robustness

The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.

Root Cause

An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.


Rule

A statement of the form: if X then Y else Z. The “if” part is the rule premise, and the “then” part is the consequent. The “else” component of the consequent is optional. The rule fires when the if part is determined to be true or false.

Rule Base

The encoded knowledge for an expert system. In a rule-based expert system, a knowledge base typically incorporates definitions of attributes and rules along with control information.


RUP

The Rational Unified Process; a development methodology from IBM’s Rational software division.


Safety Testing

The process of testing to determine the safety of a software product.

Sanity Testing

Brief test of major functional elements of a piece of software to determine if it’s basically operational.

Sandwich integration

An integration testing strategy in which the system is integrated both top-down and bottom-up simultaneously. Can save time, but is complex.

Scalability testing

A component of non-functional testing, used to measure the capability of software to scale up or down in terms of its non-functional characteristics.


Scenario

A sequence of activities performed in a system, such as logging in, signing up a customer, ordering products, and printing an invoice. You can combine test cases to form a scenario, especially at higher test levels.


Schedule

A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in the order in which they are to be executed.


Scrambling

Data obfuscation routine to de-identify sensitive data in test data environments, e.g. to meet the requirements of the Data Protection Act and other legislation.


Scribe

The person who records each defect mentioned, and any suggestions for improvement, during a review meeting on a logging form. The scribe has to ensure that the logging form is understandable.


Script

See Test Script.


Scrum

An iterative, incremental framework for project management commonly used with agile software development.


Security

Preservation of availability, integrity and confidentiality of information: availability means access to information and associated assets when required; integrity means completeness of information and processing methods; confidentiality means information is accessible only to those authorized to have access.

Security Requirements

A specification of the required security for the system or software.

Security Testing

Process to determine that an IS (Information System) protects data and maintains functionality as intended. The six basic concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

Session-based testing

An approach to testing in which test activities are planned as uninterrupted, quite short, sessions of test design and execution, often used in conjunction with exploratory testing.


Severity

The degree of impact that a defect has on the development or operation of a component or system.

Simple Subpath

A subpath of the control flow graph in which no program part is executed more than necessary.


Simulation

The representation of selected behavioral characteristics of one physical or abstract system by another system.


Simulator

A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.

Site acceptance testing (SAT)

Acceptance testing carried out onsite at the client’s location, as opposed to the developer’s location. Testing at the developer’s site is called factory acceptance testing (FAT).

Smoke Testing

A preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. In the software world, the smoke is metaphorical.

Soak Testing

Involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use.

Software Requirements Specification

A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing

The process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness, security, but can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person.


Specification

A description, in any suitable form, of requirements.

Specification testing

An approach to testing wherein the testing is restricted to verifying the system/software meets an agreed specification.

Specified Input

An input for which the specification predicts an outcome.

State Transition

A transition between two allowable states of a system or component.

State transition testing

A test design technique in which a system is viewed as a series of states, valid and invalid transitions between those states, and inputs and events that cause changes in state.
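For illustration, here is a minimal Python sketch of a state model for a made-up login-lockout feature (two failed logins lock the account). State transition tests would cover the valid transitions as well as events that should leave the state unchanged:

```python
# Allowed transitions for a hypothetical login-lockout feature.
TRANSITIONS = {
    ("active", "fail_login"): "one_strike",
    ("one_strike", "fail_login"): "locked",
    ("one_strike", "login_ok"): "active",
    ("locked", "reset"): "active",
}

def next_state(state, event):
    # Design assumption: an event with no defined transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

def run(events, start="active"):
    """Feed a sequence of events through the model and return the final state."""
    state = start
    for event in events:
        state = next_state(state, event)
    return state
```

A test suite built with this technique would assert, for example, that two failed logins reach "locked" and that a further failed login does not change anything.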


Statement

An entity in a programming language which is typically the smallest indivisible unit of execution.

Statement Coverage

The percentage of executable statements in a component that have been exercised by a test case suite.
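The measure itself is simple arithmetic, as this small sketch shows (real coverage tools instrument the code to record which statements ran):

```python
def statement_coverage(executed, total):
    """Percentage of executable statements exercised by the test suite."""
    if total == 0:
        raise ValueError("component has no executable statements")
    return 100.0 * len(set(executed)) / total

# A 10-statement component where the suite exercised statements 1-8:
pct = statement_coverage(executed={1, 2, 3, 4, 5, 6, 7, 8}, total=10)  # 80.0
```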

Statement Testing

A test case design technique for a component in which test cases are designed to execute statements. Statement Testing is a structural or white box technique, because it is conducted with reference to the code. Statement testing comes under Dynamic Analysis.

Static Analysis

Analysis of a program carried out without executing the program.

Static Analyzer

A tool that carries out static analysis.

Static Code Analysis

The analysis of computer software that is performed without actually executing programs built from that software. In most cases the analysis is performed on some version of the source code; in other cases, on some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding or program comprehension.

Static testing

Testing performed without running the system. Document review is an example of a static test.

Statistical Testing

A test case design technique in which a model is used of the statistical distribution of the input to construct representative test cases.
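A sketch of the idea in Python, assuming a made-up operational profile in which 90% of orders are small and 10% are bulk; test inputs drawn this way mirror how the system is actually used:

```python
import random

random.seed(7)  # reproducible sample

def representative_order_sizes(n):
    """Draw test inputs from an assumed operational profile:
    90% small retail orders (1-5 items), 10% bulk orders (100-500 items)."""
    sizes = []
    for _ in range(n):
        if random.random() < 0.9:
            sizes.append(random.randint(1, 5))      # typical order
        else:
            sizes.append(random.randint(100, 500))  # bulk order
    return sizes
```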

Storage Testing

Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress testing

Testing meant to assess how the system reacts to workloads (network, processing, data volume) that exceed the system’s specified requirements. Stress testing shows which system resource (e.g. memory or bandwidth) is first to fail.

Structural Coverage

Coverage measures based on the internal structure of the component.

Structural Test Case Design

Test case selection that is based on an analysis of the internal structure of the component.

Structural testing

See white box testing.

Structured Basis Testing

A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.

Structured Walkthrough

See walkthrough.


Stub

A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it.
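A minimal Python sketch, using a hypothetical payment gateway as the dependency being stubbed out so the calling component can be tested in isolation:

```python
class PaymentGatewayStub:
    """Stands in for a real, external payment service with canned responses."""
    def charge(self, amount):
        # The real gateway would call an external service; the stub just succeeds.
        return {"status": "approved", "amount": amount}

def checkout(gateway, amount):
    # Component under test: depends on a gateway, real or stubbed.
    receipt = gateway.charge(amount)
    return receipt["status"] == "approved"
```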


Subgoal

An attribute which becomes a temporary intermediate goal for the inference engine. Subgoal values need to be determined because they are used in the premise of rules that can determine higher level goals.


Subpath

A sequence of executable statements within a component.


Suitability

The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives.

Suspension Criteria

The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.


Supplier

The organization that supplies an IT system to a client. Can be internal or external. Also called vendor. Contrast with Client.

Symbolic Evaluation

See symbolic execution.

Symbolic Execution

A static analysis technique used to analyze if and when errors in the code may occur. It can be used to predict what code statements do to specified inputs and outputs. It is also important for considering path traversal. It struggles when dealing with statements which are not purely mathematical.

Symbolic Processing

Use of symbols, rather than numbers, combined with rules-of-thumb (or heuristics), in order to process information and solve problems.

Syntax Testing

A test case design technique for a component or system in which test case design is based upon the syntax of the input.


System

The integrated combination of hardware, software, and documentation.

System integration testing

A test level designed to evaluate whether a system can be successfully integrated with other systems (e.g. that the tested system works well with the finance system). May be included as part of system-level testing, or be conducted as its own test level in between system testing and acceptance testing.

System testing

Test level aimed at testing the complete integrated system. Both functional and nonfunctional tests are conducted.


TMM (Testing Maturity Model)

A model developed by Dr. Ilene Burnstein of the Illinois Institute of Technology, for judging the maturity of the software testing processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Technical Review

A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review.

Test Approach

The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, and the test design techniques to be applied.

Test automation

The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.

Test basis

The documentation on which test cases are based.

Test Bed

An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test case

A structured test script that describes how a function or feature should be tested, including test steps, expected results, preconditions, and postconditions.

Test Case Design Technique

A method used to derive or select test cases.

Test Case Suite

A collection of one or more test cases for the software under test.

Test Charter

A statement of test objectives, and possibly test ideas. Test charters are used, among other things, in exploratory testing.

Test Comparator

A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.
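A toy comparator can be written in a few lines of Python; real tools add tolerances, masking of volatile fields (timestamps, IDs), and reporting:

```python
def compare(actual, expected):
    """Return (field, expected, actual) tuples for every mismatch; empty means pass."""
    return [(key, want, actual.get(key))
            for key, want in expected.items()
            if actual.get(key) != want]
```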

Test Comparison

The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

Test Completion Criterion

A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.

Test data

Information that completes the test steps in a test case with e.g. what values to input. In a test case where you add a customer to the system the test data might be customer name and address. Test data might exist in a separate test data file or in a database.

Test Data Management

The management of test data during tests to ensure complete data integrity and legitimacy from the beginning to the end of test.

Test driven development

Testing methodology associated with agile programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
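The rhythm, sketched in Python with an invented slugify function: the test is written first (and would fail, "red"), then the smallest implementation that makes it pass ("green"):

```python
# Red: the unit test exists before any production code.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim me  ") == "trim-me"

# Green: the smallest implementation that makes the test pass.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # runs clean once the implementation is in place
```

The third step, refactoring while keeping the test green, completes the red-green-refactor cycle.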

Test driver

A software component (driver) used during integration testing in order to emulate (i.e. to stand in for) higher-level components of the architecture. For example, a test driver can emulate the user interface during tests.
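A sketch in Python: the driver emulates a user interface that has not been built yet, feeding inputs to a lower-level component and checking the outputs (all names here are hypothetical):

```python
# Component under test: a lower-level price calculator.
def total_price(items):
    return sum(qty * price for qty, price in items)

# Test driver: stands in for the (not yet built) user interface by
# feeding inputs to the component and checking the outputs.
def drive():
    cases = [
        ([(2, 5.0)], 10.0),
        ([(1, 3.0), (3, 2.0)], 9.0),
        ([], 0.0),
    ]
    return all(total_price(items) == expected for items, expected in cases)
```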

Test environment

The technical environment in which the tests are conducted, including hardware, software, and test tools. Documented in the test plan and/or test strategy.

Test execution

The process of running test cases on the test object.

Test Execution Phase

The period of time in the application development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied.

Test Execution Schedule

A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

Test Execution Technique

The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.

Test First Design

Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Generator

A program that generates test cases in accordance with a specified strategy.

Test Harness

A program or test tool used to execute a test. Also known as a Test Driver.

Test Infrastructure

The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

Test level

A group of test activities organized and carried out together in order to meet stated goals. Examples of levels of testing are component, integration, system, and acceptance test.

Test log

A document that describes testing activities in chronological order.

Test manager

The person responsible for planning the test activities at a specific test level. Usually responsible for writing the test plan and test report. Often involved in writing test cases.

Test Measurement Technique

A method used to measure test coverage items.

Test object

The part or aspects of the system to be tested. Might be a component, subsystem, or the system as a whole.

Test plan

A document describing what should be tested by whom, when, how, and why. The test plan is bounded in time, describing system testing for a particular version of a system, for example. The test plan is to the test leader what the project plan is to the project manager.

Test policy

A document that describes how an organization runs its testing processes at a high level. It may contain a description of test levels according to the chosen life cycle model, roles and responsibilities, required/expected documents, etc.

Test Point Analysis

A formula-based test estimation method based on function point analysis.

Test Procedure

A document providing detailed instructions for the execution of one or more test cases.

Test process

The complete set of testing activities, from planning through to completion. The test process is usually described in the test policy.

Test Records

For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and actual outcome.

Test report

A document that summarizes the process and outcome of testing activities at the conclusion of a test period. Contains the test manager’s recommendations, which in turn are based on the degree to which the test activities attained their objectives. Also called test summary report.

Test run

A group of test cases, e.g. all the test cases for system testing, with an owner and end date.

Tests on one test level are often grouped into a series of tests, i.e. two-week cycles consisting of testing, retesting, and regression testing. Each series can be a test run.

Test Scenario

Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test script

Automated test case that the team creates with the help of a test automation tool. Sometimes also used to refer to a manual test case, or to a series of interlinked test cases.

Test specification

A document containing a number of test cases that include steps for preparing and resetting the system. In a larger system you might have one test specification for each subsystem.

Test strategy

A high-level document defining the test levels to be performed and the testing within those levels for a program (one or more projects).

Test stub

A test program used during integration testing in order to emulate lower-level components. For example, you can replace a database with a test stub that provides a hard-coded answer when it is called.

Test suite

A group of test cases, e.g. all the test cases for system testing.

Test Target

A set of test completion criteria for the test.

Test Type

A group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, e.g. reliability test, usability test, regression test, etc., and may take place on one or more test levels or test phases.


Testability

The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.


Tester

A person (either a professional tester or a user) who is involved in the testing of a component or system.


Testing

A set of activities intended to evaluate software and other deliverables to determine whether they meet requirements, to demonstrate that they are fit for purpose, and to find defects.

Test Tools

Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing

A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Top Down Testing

An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management

A company commitment to develop a process that achieves high quality product and customer satisfaction.


Traceability

The ability to identify related items in documentation and software, such as requirements with associated tests.

Traceability Matrix

A table showing the relationship between two or more baselined documents, such as requirements and test cases, or test cases and defect reports. Used to assess what impact a change will have across the documentation and software, for example, which test cases will need to be run when given requirements change.
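In its simplest form a traceability matrix is just a mapping that can be queried for change impact; a sketch with made-up requirement and test-case IDs:

```python
# Requirement -> test case links (all IDs are invented for illustration).
matrix = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-2", "TC-3"],
}

def impacted_tests(changed_requirements):
    """Test cases that must be rerun when the given requirements change."""
    return sorted({tc for req in changed_requirements
                   for tc in matrix.get(req, [])})
```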

Tracked Field

A value captured in one part of an automated test process and retained for use at a later stage.

Third-party component

A part of an IT system that is purchased as a packaged/complete product instead of being developed by the supplier/vendor.

Top-down integration

An integration test strategy, in which the team starts to integrate components at the top level of the system architecture.


TPI

Test Process Improvement. A method of measuring and improving the organization’s maturity with regard to testing.



UML

Unified Modeling Language. A technique for describing the system in the form of use cases.

See also use case.


Understandability

The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.

Unit test

Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing can be done manually but is often automated.

Unit test framework

Software or class libraries that enable developers to write test code in their regular programming language. Used to automate component and integration testing.
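A minimal example using Python's built-in unittest framework (the function under test is invented for illustration):

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (a test runner or IDE would normally do this).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Equivalent frameworks exist for most languages (JUnit for Java, NUnit for .NET, etc.), all following the same arrange-act-assert pattern.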


Usability

The capability of the software to be understood, learned, used and attractive to the user.

Usability requirements

A specification of the required usability for the system/software.

Usability testing

A test technique for evaluating a system’s usability. Frequently conducted by users performing tasks in the system while they describe their thought process out loud.

Use case

A type of requirements document in which the requirements are written in the form of sequences that describe how various actors in the system interact with the system.

Use Case Testing

A black box test design technique in which test cases are designed to execute user scenarios.

User Acceptance Testing

Black-box testing performed on a system prior to its delivery. In most environments, acceptance testing by the system provider is distinguished from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership. In such environments, acceptance testing performed by the customer is known as beta testing, user acceptance testing (UAT), end user testing, site (acceptance) testing, or acceptance testing.



V-model

A software development lifecycle model that describes requirements management, development, and testing on a number of different levels.


Validation

Tests designed to demonstrate that the developers have built the correct system. Contrast with verification, which means testing that the system has been built correctly. A large number of validation activities take place during acceptance testing.

Validation Testing

Determination of the correctness of the products of software development with respect to the user needs and requirements.

Variable Data

A repository for multiple scenario values which can be used to drive repeatable automated processes through a number of iterations when used in conjunction with an automation solution.


Verification

Tests designed to demonstrate that the developers have built the system correctly. Contrast with validation, which means testing that the correct system has been built. A large number of verification activities take place during component testing.


Version control

Various methods for uniquely identifying documents and source files, e.g. with a unique version number. Each time the object changes, it should receive a new version number.

See also release management.

Version Identifier

A version number; version date, or version date and time stamp.

Volume Testing

Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.


Walkthrough

A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

Waterfall model

A sequential development approach consisting of a series of phases carried out one by one. This approach is not recommended due to a number of inherent problems.


The lowest level of detail relevant to the Customer.

What If Analysis

The capability of “asking” the software package what the effect will be of changing some of the input data or independent variables.

White box testing

A type of testing in which the tester has knowledge of the internal structure of the test object. White box testers may familiarize themselves with the system by reading the program code, studying the database model, or going through the technical specifications. Contrast with black box testing.

Wide Band Delphi

A consensus-based estimation technique for estimating effort. It is based on the Delphi method, developed in the 1950s at the RAND Corporation as a forecasting tool, and has since been adapted across many industries to estimate many kinds of tasks, ranging from statistical data collection results to sales and marketing forecasts. It has proven to be a very effective estimation tool and lends itself well to software projects. However, critics point to problems with the technique, such as manipulation of the group or silencing of minorities in order to steer a meeting toward a preset outcome.


Workaround

Method of avoiding an incident or problem, either from a temporary fix or from a technique that means the Customer is not reliant on a particular aspect of a service that is known to have a problem.

Workflow Testing

Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.


Are you still with me? I hope you did not fall asleep halfway through. Yeah, I know, this is a long glossary of software testing terms, but it’s far from exhaustive. So feel free to add any missing terms in a comment below so we can make the glossary complete.

Last but not least, I’d like to give credit to following resources that I refer to:


  1. Mat Walker

    Bug Bash – Ad Hoc, or exploratory testing (see above), whereby a project team exercise a piece of functionality – or an application – to find bugs.

  2. Thanh Huynh

    @Mat Walker,
    “Bug bash” is interesting. That’s new to me 😀
