Testing Terminology

Acceptance Testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]
Ad Hoc Testing: Testing carried out using no recognized test case design technique.
Alpha Testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.
API - Application Programming Interface: A set of documented programming routines, provided by the manufacturer of an application or device, designed to allow third-party access to functions or capabilities of that application or device. APIs are used to facilitate the development of value-added features by parties other than the manufacturer.
Base Station: A land station in the land mobile service. For example, in cellular and PCS systems, each cell has its own base station; each base station is interconnected with other base stations and with the public switched telephone network (PSTN).
Behavior: The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.
Beta Testing: Operational testing at a site not otherwise involved with the software developers.
Big-bang Testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.
Black Box Testing: See "Functional Testing".
Bottom-up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Value: An input value or output value which is on the boundary between equivalence classes, or at an incremental distance on either side of the boundary.
Boundary Value Analysis (BVA): A test case design technique for a component in which test cases are designed to include representatives of boundary values.
Boundary Value Coverage: The percentage of boundary values of the component's equivalence classes which have been exercised by a test case suite.
Boundary Value Testing: A test case selection technique in which test data is chosen to lie along the boundaries or extremes of input domain (or output range) classes, data structures, procedure parameters, etc. Boundary value test cases often include the minimum and maximum in-range values and the out-of-range values just beyond these boundaries.
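As a minimal sketch (a hypothetical accept_age function with an assumed valid range of 18 to 65, not drawn from any standard), boundary value test cases would exercise each boundary and the values just beyond it:

```python
# Boundary value testing sketch for a hypothetical accept_age() function
# with an assumed valid range of 18..65 inclusive.

def accept_age(age: int) -> bool:
    """Return True if age is within the valid range 18..65."""
    return 18 <= age <= 65

def test_boundary_values():
    assert accept_age(18) is True    # minimum in-range value (lower boundary)
    assert accept_age(17) is False   # just below the lower boundary
    assert accept_age(65) is True    # maximum in-range value (upper boundary)
    assert accept_age(66) is False   # just above the upper boundary

test_boundary_values()
```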
Branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.
Branch Condition: See decision condition.
Branch Condition Combination Coverage: The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.
Branch Condition Combination Testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.
Branch Condition Coverage: The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.
Branch Condition Testing: A test case design technique in which test cases are designed to execute branch condition outcomes.
Branch Coverage: The percentage of branches that have been exercised by a test case suite.
Branch Outcome: See decision outcome.
Branch Point: See decision.
Branch Testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.
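To illustrate branch outcomes and branch condition outcomes, the following Python sketch (a hypothetical discount function, assumed for this example) contains one decision with two conditions; the three test cases exercise both decision outcomes and both outcomes of each condition:

```python
# Branch testing sketch: the decision "total > 100 or vip" has two branch
# outcomes (taken / not taken) and two branch conditions, each of which can
# evaluate to TRUE or FALSE.

def discount(total: float, vip: bool) -> float:
    if total > 100 or vip:      # decision with two conditions
        return total * 0.9      # decision outcome: TRUE
    return total                # decision outcome: FALSE

def test_branch_outcomes():
    assert discount(150, False) == 135.0   # decision TRUE via "total > 100"
    assert discount(50, True) == 45.0      # decision TRUE via "vip"
    assert discount(50, False) == 50.0     # decision FALSE (both conditions FALSE)

test_branch_outcomes()
```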
Broadband: A descriptive term for evolving digital technologies that offer consumers a single switched facility providing integrated access to voice, high-speed data, video-on-demand, and interactive information delivery services. (Example: a cable TV system employs analog broadband transmission.)
Bug: See fault.
Bug Seeding: See error seeding.
Capture/Playback Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.
Capture/Replay Tool: See capture/playback tool.
Certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.
CDMA - Code Division Multiple Access: A spread-spectrum method of digital transmission for wireless personal communications networks that allows a large number of users to simultaneously access a single radio frequency without interference.
CDMA2000: The name for the 3G evolution of cdmaOne (IS-95-based CDMA), a wideband CDMA technology that is backwards compatible with cdmaOne systems.
Client/Server: A computer network configuration in which the "client" is a desktop computing device or program "served" by another networked computing device. Computers are integrated over the network by an application, which provides a single system image. The client can request information or applications from the server and the server provides the information or application.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code-based Testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).
Component: A minimal software item for which a separate specification is available.
Component Testing: The testing of individual software components.
Condition: An expression containing no Boolean operators. For example, in the statement "IF A", the expression "A" is a condition: a Boolean expression without Boolean operators that evaluates to either TRUE or FALSE.
Condition Outcome: The evaluation of a condition to TRUE or FALSE.
Control Flow: An abstract representation of all possible sequences of events in a program's execution.
Control Flow Graph: The diagrammatic representation of the possible alternative control flow paths through a component.
Control Flow Path: See path.
Correctness: The degree to which software conforms to its specification.
Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.
Data Definition: An executable statement where a variable is assigned a value.
Data Definition-use Coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.
Data Definition-use Pair: A data definition and data use, where the data use uses the value defined in the data definition.
Data Definition-use Testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
Data Flow Coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.
Data Flow Testing: Testing in which test cases are designed based on variable usage within the code.
Data Use: An executable statement where the value of a variable is accessed.
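The data flow terms above can be illustrated with a short Python sketch (a hypothetical shipping_cost function assumed for this example), in which a variable is defined in two places and used once, giving two definition-use pairs:

```python
# Definition-use pairs in a hypothetical shipping_cost() function.

def shipping_cost(weight: float, express: bool) -> float:
    rate = 1.0                 # data definition of "rate"
    if express:
        rate = 2.5             # alternative data definition of "rate"
    return weight * rate       # data use of "rate"

def test_definition_use_pairs():
    # Exercises the pair (rate = 1.0, weight * rate).
    assert shipping_cost(4.0, express=False) == 4.0
    # Exercises the pair (rate = 2.5, weight * rate).
    assert shipping_cost(4.0, express=True) == 10.0

test_definition_use_pairs()
```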
Deactivated Code: Code which by design is either (a) not intended to be executed (for example, source code that is part of a previously developed, reusable software component) or (b) is only executed in certain configurations of the target computer environment (for example, machine code that is enabled by a hardware pin selection). The software build process (linker) may exclude such code from being bound in to the executable.
Dead Code: Executable machine code (not source code, although the machine code may have resulted directly from the source code) which cannot, nor is intended to, be used in any operational configuration of the target computer environment and is not traceable to a system or software requirement. Dead code may be generated by compilers or linkers.
Debugging: The act of correcting errors during the development process.
Decision: An expression comprising conditions and zero or more Boolean operators that is used in a control construct (e.g. if...then...else; case statement) that determines the flow of execution of the software program. A decision without a Boolean operator reduces to a condition. For example, the expression "IF (A>B) OR (B<C) THEN" is a decision, as is the loop control expression in "DO WHILE A>5 LOOP".
Decision Coverage: Every point of entry and exit within the software is invoked at least once, and every decision in the software has taken all possible outcomes at least once. Source code decision coverage, by definition, includes source level statement coverage, while instruction decision coverage includes machine code decision coverage.
Decision/Condition Coverage: Every point of entry and exit within the software is invoked at least once, every condition in a decision in the software has taken all possible outcomes at least once, and every decision has taken all possible outcomes at least once.
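To make this criterion concrete, here is a minimal Python sketch (a hypothetical can_ship function; short-circuit evaluation is ignored for the purpose of the illustration) in which two test cases achieve decision/condition coverage of a two-condition decision:

```python
# Decision/condition coverage sketch for the decision "in_stock and paid".
# For this illustration both conditions are treated as evaluated on every
# call (short-circuit evaluation is ignored).

def can_ship(in_stock: bool, paid: bool) -> bool:
    if in_stock and paid:    # the decision under test
        return True
    return False

def test_decision_condition_coverage():
    # (True, True): decision TRUE; both conditions take the TRUE outcome.
    assert can_ship(True, True) is True
    # (False, False): decision FALSE; both conditions take the FALSE outcome.
    assert can_ship(False, False) is False
    # Together, the two cases exercise every condition outcome and every
    # decision outcome for this decision.

test_decision_condition_coverage()
```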
Decision Outcome: The result of a decision (which therefore determines the control flow alternative taken).
Designated Engineering Representative (DER): A formal designation bestowed by the Federal Aviation Administration on an engineer who is authorized to act on the FAA's behalf in evaluating aircraft products, engineering, and related issues.
Design-based Testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behavior of algorithms).
Desk Checking: The testing of software by the manual simulation of its execution.
Domain: The set from which values are selected.
Domain Testing: See equivalence partition testing.
Downlink: The part of a satellite system that includes the satellite itself, the receiving earth station and the signal transmitted from the satellite to the earth.
DPE - Distributed Processing Environment: A software construct that facilitates an application's distribution across a network of Processing Elements. The Processing Elements may be aggregated into functional Network Elements, which may represent a physical package, a management domain, an application domain, or some combination of these. The primary objective of the Distributed Processing Environment is to insulate the application program from the complexities of building a distributed application. This insulation is provided through a set of services that allow communication and service usage in a location-transparent and access-transparent fashion. The DPE provides infrastructure functions and services within a structure that supports requirements such as availability, scalability, real-time responsiveness, and security.
Dynamic Analysis: The process of evaluating a system or component based upon its behaviour during execution. [IEEE]
Earth Station: Equipment on earth that can transmit or receive satellite communications. In general usage, this term refers to receive-only stations.
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE,do178b]
Entry Point: The first executable statement within a component.
Equivalence Class: A subset ("class") of the input domain for which each input yields the same ("equivalent") execution path, regardless of which input from the class is chosen.
Equivalence Partition: See equivalence class.
Equivalence Partition Coverage: The percentage of equivalence classes generated for the component, which have been exercised by a test case suite.
Equivalence Partition Testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
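For example, a minimal Python sketch (a hypothetical grade function whose input domain 0..100 is assumed to split into three equivalence classes) tests one representative from each class:

```python
# Equivalence partition testing sketch: the input domain 0..100 is assumed
# to partition into fail (0..49), pass (50..79) and distinction (80..100).

def grade(score: int) -> str:
    if score < 50:
        return "fail"
    if score < 80:
        return "pass"
    return "distinction"

def test_one_representative_per_partition():
    assert grade(25) == "fail"          # representative of 0..49
    assert grade(65) == "pass"          # representative of 50..79
    assert grade(90) == "distinction"   # representative of 80..100

test_one_representative_per_partition()
```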
Error: A human action that produces an incorrect result. [IEEE]
Error Guessing: A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.
Error Seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. [IEEE]
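A ratio estimator is commonly used with error seeding; the sketch below assumes that seeded and real faults are equally likely to be detected (the IEEE definition above does not prescribe a particular estimator):

```python
# Ratio estimate of remaining faults from an error-seeding experiment
# (an assumed estimator, not mandated by the IEEE definition).

def estimate_remaining_faults(seeded: int, seeded_found: int, real_found: int) -> float:
    """Estimate the number of real faults remaining, assuming seeded and
    real faults are equally likely to be detected."""
    if seeded_found == 0:
        raise ValueError("no seeded faults detected; estimate undefined")
    estimated_total_real = real_found * seeded / seeded_found
    return estimated_total_real - real_found

# Example: 20 faults seeded, 15 of them found, 30 real faults found
# => roughly 30 * 20/15 - 30 = 10 real faults estimated to remain.
print(estimate_remaining_faults(seeded=20, seeded_found=15, real_found=30))
```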
Executable Statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
Exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.
Exhaustive Testing: A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.
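Exhaustive testing is only practical for very small input domains; the sketch below (a hypothetical xor_gate with two Boolean inputs) enumerates all four input combinations:

```python
# Exhaustive testing sketch over a deliberately tiny input domain:
# every combination of two Boolean inputs to a hypothetical xor_gate().
from itertools import product

def xor_gate(a: bool, b: bool) -> bool:
    return a != b

def test_exhaustively():
    expected = {(False, False): False, (False, True): True,
                (True, False): True, (True, True): False}
    for a, b in product([False, True], repeat=2):
        assert xor_gate(a, b) == expected[(a, b)]

test_exhaustively()
```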
Exit Point: The last executable statement within a component.
Expected Outcome: See predicted outcome.
Failure: Deviation of the software from its expected delivery or service. [Fenton]
Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure. [do178b]
Feasible Path: A path for which there exists a set of input values and execution conditions which causes it to be executed.
Firewall: A combination of hardware and software which limits the exposure of a computer or a group of computers to attack from outside. The most common use of a firewall is on a local area network (LAN) connected to the Internet.
Footprint: The area in which a specific transmission can be received. Footprints range in size from a single cell to, for some satellite systems, as much as one-third of the earth.
Functional Specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS 4778, Part2]
Functional Test Case Design: Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.
Functional Testing: Verification of an item by applying test data derived from specified functional requirements without consideration of the underlying product architecture or composition.
2G - Second Generation Wireless System: A term used to describe digital cellular and Personal Communications Service (PCS) technologies, as well as the systems using such technologies. 2G systems include GSM, PCS1900, IS-54/IS-136 and IS-136-based PCS, IS-95 and IS-95-based PCS.
2.5G: A term used to describe extensions to 2G systems to provide low bandwidth data transport capabilities and Internet Access.
3G - Third Generation Wireless System: A term used to describe the next major evolution in the technologies for digital cellular and PCS after 2G. 3G systems will support wireless Internet access at data rates exceeding 144 kbps in a vehicular environment, exceeding 384 kbps in an outdoor/indoor pedestrian environment, and exceeding 2 Mbps in a fixed, indoor environment.
Gateway: Gateways provide a single source through which users can locate and gain access to a wide variety of computer services. Gateways typically offer a directory of services available through them, and provide billing for these services.
GSM - Global System for Mobile Communications: The standard digital cellular phone service found in Europe, Japan, Australia, and elsewhere. GSM is a set of ETSI standards specifying the infrastructure for a digital cellular service.
iDEN - Integrated Dispatch Enhanced Network: A wireless technology developed by Motorola, iDEN operates in the 800 MHz, 900MHz, and 1.5 GHz radio bands. Through a single proprietary handset, iDEN supports voice in the form of both dispatch radio and PSTN interconnection, numeric paging, SMS for text, data and fax transmission.
Incremental Testing: Integration testing where system components are integrated into the system one at a time until the entire system is integrated.
Independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [do178b].
Infeasible Path: A path which cannot be exercised by any set of possible input values.
Input: A variable (whether stored within a component or outside it) that is read by the component.
Input Domain: The set of all possible inputs.
Input Value: An instance of an input.
Inspection: A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection). After [Graham]
Installation Testing: Testing concerned with the installation procedures for the system.
Instruction Coverage: Every machine code instruction in the software has been executed at least once. Executing a machine instruction means that the instruction was processed.
Instrumentation: The insertion of additional code or breakpoints into software in order to collect runtime information about program execution which would otherwise not be obtainable.
Instrumenter: A software tool used to carry out instrumentation.
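As a toy illustration of instrumentation (not a real instrumenter), the Python sketch below uses the language's tracing hook to record which lines of a function under test are executed:

```python
# Lightweight instrumentation sketch: record executed lines of a function
# via Python's tracing hook, gathering runtime information not otherwise
# visible from the test outcome alone.
import sys

executed_lines = set()

def tracer(frame, event, arg):
    if event == "line":
        executed_lines.add((frame.f_code.co_name, frame.f_lineno))
    return tracer

def absolute(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)
absolute(-5)             # exercises only the negative branch
sys.settrace(None)

print(sorted(executed_lines))   # lines of absolute() that were executed
```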
Integration: The process of combining components into larger assemblies.
Integration Testing: Testing performed to expose faults in the interfaces and in the interaction between integrated components.
Interface Testing: Integration testing where the interfaces between system components are tested.
ISDN - Integrated Services Digital Network: Switched network providing end-to-end digital connection for simultaneous transmission of voice and/or data over multiple multiplexed communication channels and employing transmission that conforms to internationally defined standards. ISDN is considered to be the basis for a "universal network" that can support almost any type of communications device or devices.
Isolation Testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.
Modified Condition/Decision Testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
Modified Decision/Condition Coverage: Every point of entry and exit within the software is invoked at least once, every condition in a decision in the software has taken all possible outcomes at least once, every decision has taken all possible outcomes at least once, and every condition in a decision is shown to independently affect that decision's outcome. A condition is shown to independently affect a decision's outcome by varying only that condition while holding all other conditions fixed and observing that the decision's outcome changes.
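As a minimal illustration (a hypothetical alarm function with a two-condition decision, assumed for this example), three test cases are enough to show each condition independently affecting the decision's outcome:

```python
# MC/DC sketch for the decision "door_open or window_open".

def alarm(door_open: bool, window_open: bool) -> bool:
    return door_open or window_open   # decision with two conditions

def test_mcdc():
    # door_open shown to independently affect the outcome:
    # window_open held FALSE, door_open varied, outcome changes.
    assert alarm(True, False) is True
    assert alarm(False, False) is False
    # window_open shown to independently affect the outcome:
    # door_open held FALSE, window_open varied, outcome changes.
    assert alarm(False, True) is True
    # These three cases achieve MC/DC for this two-condition decision.

test_mcdc()
```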
Optimization: The result of one or more compilation options which yields object code specially altered to provide an improvement in execution speed or reduced image size. Optimized object code contains different or re-ordered machine instructions and data compared to unoptimized code, even though each was compiled from the same source code.
Output: A variable (whether stored within a component or outside it) that is written to by the component.
Output Domain: The set of all possible outputs.
Output Value: An instance of an output.
Path: A sequence of executable statements of a component, from an entry point to an exit point.
Path Coverage: The percentage of paths in a component exercised by a test case suite.
Path Sensitizing: Choosing a set of input values to force the execution of a component to take a given path.
Path Testing: A test case design technique in which test cases are designed to execute paths of a component.
PCS - Personal Communication Services: A term coined by the FCC to describe a two-way, voice and data, wireless telecommunications system. PCS encompasses cordless phones, cellular mobile phones, paging systems, personal communications networks, wireless office phone systems, and any other wireless telecommunications systems that allow people to place and receive voice/data calls while away from home and office.
Peer Review: A formal review of an item by a group of peers of the item's developer.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]
Portability Testing: Testing aimed at demonstrating that the software can be ported to specified hardware or software platforms.
Precondition: Environmental and state conditions that must be fulfilled before the component can be executed with a particular input value.
Progressive Testing: Testing of new features after regression testing of previous features. [Beizer]
Regression Testing: Re-execution of tests that have previously executed correctly, in order to verify a subsequent revision of the same product.
Requirements-based Testing: Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security). See functional test case design.
Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [ieee]
Simulation: The representation of selected behavioural characteristics of one physical or abstract system by another system. [ISO 2382/1].
Simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. [IEEE,do178b]
Software element: A software component which has quantifiable characteristics regarding its execution. Source code constructs (assignment statements, decision statements, etc.) and machine instructions are examples of elements. Structural coverage requirements typically embody one or more of these elements.
Software Structural Coverage: The degree to which execution of the software exercises one or more elements within that software. Because "coverage" can refer to more than one type of element, the term should be preceded by the type of element being covered (for example, statement coverage or branch coverage) to avoid ambiguity. The different types of coverage are defined in the remaining coverage definitions.
Software Structural Test/Source Code Coverage Matrix: A matrix which provides the correlation between all modules and all tests for software structural coverage purposes, thereby defining which tests provide structural coverage of which modules.
Source Statement: See statement.
Specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.
SS7 - Signaling System 7: An architecture for performing out-of-band signaling in support of the call-establishment, billing, routing, and information-exchange functions of the public switched telephone network (PSTN). It identifies functions to be performed by a signaling-system network and a protocol to enable their performance.
State Transition: A transition between two allowable states of a system or component.
State Transition Testing: A test case design technique in which test cases are designed to execute state transitions.
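A minimal sketch (a hypothetical two-state door model, assumed for this example) designs one test case per allowable transition, plus a case for a disallowed event:

```python
# State transition testing sketch for a two-state door model
# (states "open"/"closed", events "open"/"close").

TRANSITIONS = {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
}

def next_state(state: str, event: str) -> str:
    # Events that are not allowed in the current state leave it unchanged.
    return TRANSITIONS.get((state, event), state)

def test_each_transition_once():
    assert next_state("closed", "open") == "open"     # closed -> open
    assert next_state("open", "close") == "closed"    # open -> closed
    assert next_state("open", "open") == "open"       # disallowed event, no change

test_each_transition_once()
```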
Statement: An entity in a programming language which is typically the smallest indivisible unit of execution.
Statement Coverage: Every statement in the software has been executed at least once. Executing a statement means that the statement was encountered and evaluated during testing.
Statement Testing: A test case design technique for a component in which test cases are designed to execute statements.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Testing of an object without execution on a computer.
Statistical Testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE]
Structural Coverage: Coverage measures based on the internal structure of the component.
Structural Coverage Deviations: An acceptable rationale associated with an unexecuted element which is not intended to be executed in any deliverable configuration, and for which analysis shows that the element cannot be inadvertently executed.
Structural Test Case Design: Test case selection that is based on an analysis of the internal structure of the component.
Structural Testing: See structural test case design.
Structured Basis Testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.
Structured Walk-through: See walkthrough.
Stub: Special code segments, or a subset of the final intended code which will simulate the interface of that code to other entities. Used to prototype, simulate, or test in advance of component completion.
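For instance, in the hypothetical sketch below a report component normally depends on a payment gateway that is not yet available, so a stub simulates the gateway's interface with a canned response:

```python
# Stub sketch: payment_gateway_stub stands in for a real gateway component
# that has not been completed, simulating its interface for testing.

def payment_gateway_stub(order_id: str) -> dict:
    # Returns a canned, predictable response instead of calling a real gateway.
    return {"order_id": order_id, "status": "PAID", "amount": 100.0}

def report(order_id: str, gateway=payment_gateway_stub) -> str:
    result = gateway(order_id)
    return f"Order {result['order_id']}: {result['status']}"

# Isolation testing of report() using the stub in place of the real component.
assert report("A-42") == "Order A-42: PAID"
```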
Subpath: A sequence of executable statements within a component.
System Testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]
TDMA - Time Division Multiple Access: A method of digital transmission for wireless telecommunications systems that allows a large number of users to simultaneously access a single radio frequency without interference. TDMA divides the frequency into time slots, giving each user an individual portion of the allocated bandwidth.
Telephony: The science of transmitting voice over a telecommunications network.
Test Automation: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
Test Case: A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. After [IEEE,do178b]
Test Case Design Technique: A method used to derive or select test cases.
Test Case Suite: A collection of one or more test cases for the software under test.
Test Comparator: A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.
Test Completion Criterion: A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.
Test Coverage: See coverage.
Test Driver: A program which sets up an environment and calls a module for test.
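A minimal test driver sketch (using a hypothetical word_count function as the module under test) sets up the test cases, calls the module, and reports the outcomes:

```python
# Test driver sketch: a small program that sets up the environment, calls
# the module under test, and checks and reports the outcomes.

def word_count(text: str) -> int:
    # Hypothetical module-under-test function.
    return len(text.split())

def run_driver():
    cases = [("", 0), ("one", 1), ("two words", 2)]
    failures = 0
    for text, expected in cases:
        actual = word_count(text)            # call the module under test
        if actual != expected:
            failures += 1
            print(f"FAIL: word_count({text!r}) = {actual}, expected {expected}")
    print(f"{len(cases) - failures}/{len(cases)} test cases passed")

if __name__ == "__main__":
    run_driver()
```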
Test Environment: A description of the hardware and software environment in which the tests will be run, and of any other software with which the software under test interacts when under test, including stubs and test drivers.
Test Execution: The processing of a test case suite by the software under test, producing an outcome.
Test Execution Technique: The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.
Test Generator: A program that generates test cases in accordance with a specified strategy or heuristic. After [Beizer].
Test Harness: See Test Driver.
Test Measurement Technique: A method used to measure test coverage items.
Test Outcome: See outcome.
Test Plan: A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Records: For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and the actual outcome.
Test Script: Commonly used to refer to the automated test procedure used with a test harness.
Testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors. After [do178b]
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top-down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Transponder: A device in a communications satellite that receives signals from the earth, translates and amplifies them on another frequency, and retransmits them.
Unit Testing: See component testing.
Uplink: The signal that carries information from an earth station source up to a satellite.
Usability Testing: Testing the ease with which users can learn and use a product.
Validation: The determination of correctness of an item based upon requirements, and the sanctity of those requirements.
Verification: The demonstration of consistency, completeness, and correctness of an item.
Walk-Through: A manual analysis technique in which the item's developer describes the item's structure and logic to a group of peers.
White Box Testing: Verification of an item by applying test data derived from analysis of the item's underlying product architecture and composition.
