Software Testing Dictionary - 1


Acceptance Test. Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.


Ad Hoc Testing. Testing carried out using no recognized test case design technique. [BCS]

Alpha Testing. Testing of a software product or system conducted at the developer's site by the customer.

Artistic Testing. Also known as exploratory testing.

Assertion Testing. (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
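
For illustration, a minimal Python sketch of the idea (the transfer function is hypothetical): assertions about the relationships between program variables are inserted into the code and evaluated as the program executes.

    def transfer(balance, amount):
        # Assertion about an input variable, checked at run time
        assert amount >= 0, "amount must be non-negative"
        new_balance = balance - amount
        # Assertion about the relationship between program variables
        assert new_balance + amount == balance
        return new_balance

    transfer(100, 30)    # assertions hold
    # transfer(100, -5)  # would raise AssertionError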

Automated Testing. Software testing that is assisted with software technology that does not require operator (tester) input, analysis, or evaluation.
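
As a simple illustration (the test content is arbitrary), a minimal unittest script that can run unattended, with no tester input or manual evaluation of results:

    import unittest

    class AbsTests(unittest.TestCase):
        def test_negative(self):
            self.assertEqual(abs(-3), 3)

        def test_zero(self):
            self.assertEqual(abs(0), 0)

    if __name__ == "__main__":
        # Runs, evaluates, and reports results without operator involvement
        unittest.main()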



Background testing. The execution of normal functional testing while the SUT (system under test) is exercised by a realistic workload; this workload is processed "in the background" as far as the functional testing is concerned. [Load Testing Terminology by Scott Stirling]
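
A rough Python sketch of the shape of such a test; in practice the background load would be a realistic workload driven against the real SUT rather than this synthetic loop:

    import threading
    import time

    def background_load(stop):
        # Synthetic workload processed "in the background"
        while not stop.is_set():
            sum(range(10_000))

    stop = threading.Event()
    loader = threading.Thread(target=background_load, args=(stop,))
    loader.start()
    try:
        # Normal functional testing proceeds while the load runs
        assert sorted([3, 1, 2]) == [1, 2, 3]
        time.sleep(0.1)
        assert "abc".upper() == "ABC"
    finally:
        stop.set()
        loader.join()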

Bug: glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision. [B. Beizer, 1990], defect, issue, problem

Beta Testing. Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

Benchmarks. Programs that provide a performance comparison for software, hardware, and systems.

Benchmarking. A specific type of performance test whose purpose is to determine performance baselines for comparison. [Load Testing Terminology by Scott Stirling]
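
A small sketch using Python's timeit to record a baseline and compare a candidate implementation against it (the workload here is arbitrary):

    import timeit

    SETUP = "data = list(range(10_000))"

    # Record a performance baseline for the current implementation
    baseline = timeit.timeit("[x * 2 for x in data]", setup=SETUP, number=500)

    # Measure a candidate implementation on the same workload
    candidate = timeit.timeit("list(map(lambda x: x * 2, data))",
                              setup=SETUP, number=500)

    print(f"baseline:  {baseline:.4f}s")
    print(f"candidate: {candidate:.4f}s ({candidate / baseline:.2f}x baseline)")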

Big-bang testing. Integration testing in which no incremental testing takes place before all of the system's components are combined to form the system. [BCS]

Black box testing. A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.
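
For example (with a hypothetical SUT): the tests below are derived only from the external specification, "discount(total) returns 10.0 for totals of 100 or more, otherwise 0.0"; how the function is implemented internally is ignored.

    # Stand-in implementation; a black-box tester sees only its specification
    def discount(total):
        return 10.0 if total >= 100 else 0.0

    # Test cases derived purely from the external specification
    assert discount(99) == 0.0
    assert discount(100) == 10.0
    assert discount(250) == 10.0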

Boundary Value Analysis (BVA). BVA differs from equivalence partitioning in that it focuses on "corner cases": values at, and just outside, the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001 (as well as the boundary values themselves). Because BVA derives extreme values, it is often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.
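
Using the range from the definition above, a minimal sketch (in_range is a hypothetical SUT):

    # Specification: valid inputs lie in the range -100 to 1000
    def in_range(x):
        return -100 <= x <= 1000

    # Boundary values plus the just-out-of-range corner cases
    cases = [(-101, False), (-100, True), (1000, True), (1001, False)]
    for value, expected in cases:
        assert in_range(value) == expected, value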

Breadth test. A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail. [Dorothy Graham, 1999]



Cause Effect Graphing. (1) [NBS] Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2) A systematic method of generating test cases representing combinations of conditions. See: testing, functional. [G. Myers]
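
A toy sketch of sense (2): enumerate combinations of input conditions (causes), observe the effects, then keep a minimal set of cases that covers every effect. The login rule here is hypothetical.

    from itertools import product

    # Hypothetical cause-effect rule: access is granted only when both hold
    def grants_access(valid_user, valid_password):
        return valid_user and valid_password

    # Candidate test cases: all combinations of the input conditions
    for valid_user, valid_password in product([True, False], repeat=2):
        print(valid_user, valid_password, "->",
              grants_access(valid_user, valid_password))

    # A minimal set covering the whole effect set (granted and denied)
    assert grants_access(True, True) is True
    assert grants_access(True, False) is False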

Clean test. A test whose primary purpose is validation; that is, a test designed to demonstrate the software's correct working. (syn. positive test) [B. Beizer, 1995]

Code Inspection. A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. [G.Myers/NBS] Syn: Fagan Inspection

Code Walkthrough. A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.[G.Myers/NBS] Contrast with code audit, code inspection, code review.

Coexistence Testing. Coexistence isn't enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem. [from Quality Is Not The Goal, by Boris Beizer, Ph.D.]

Compatibility bug. A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code. [R. V. Binder, 1999]

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
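
One simple form this investigation can take, sketched in Python (the record format is hypothetical): check that the replacement software can consume records produced by the program it replaces, and that its own output remains parseable.

    import json

    # Record as emitted by the already-working program
    legacy_output = '{"id": 1, "name": "widget"}'

    # The replacement must read the legacy format unchanged...
    record = json.loads(legacy_output)
    assert record == {"id": 1, "name": "widget"}

    # ...and emit output the surrounding systems can still parse
    assert json.loads(json.dumps(record)) == record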

Composability testing. Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, ‘Easy’ and other lies, eWEEK, April 28, 2003]

Condition Coverage. A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]
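
A minimal sketch: with three test cases, each condition in the decision takes on both outcomes at least once, even though not every combination is exercised (that would be multiple condition coverage).

    # Decision containing two conditions: (a > 0) and (b > 0)
    def both_positive(a, b):
        return a > 0 and b > 0

    assert both_positive(1, 1) is True     # a > 0: True,  b > 0: True
    assert both_positive(-1, 1) is False   # a > 0: False (b short-circuited)
    assert both_positive(1, -1) is False   # b > 0: False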

Conformance directed testing. Testing that seeks to establish conformance to requirements or specification. [R. V. Binder, 1999]

Cookbook scenario. A test scenario description that provides complete, step-by-step details about how the scenario should be performed. It leaves nothing to chance. [Scott Loveland, 2005]

CRUD Testing. Build a CRUD (create, read, update, delete) matrix and test all object creations, reads, updates, and deletions. [William E. Lewis, 2000]
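
A compact sketch of exercising one row of such a matrix against a hypothetical object store:

    # Hypothetical object store exercised for each CRUD operation
    store = {}

    store["id1"] = {"name": "widget"}          # Create
    assert "id1" in store

    assert store["id1"]["name"] == "widget"    # Read

    store["id1"]["name"] = "gadget"            # Update
    assert store["id1"]["name"] == "gadget"

    del store["id1"]                           # Delete
    assert "id1" not in store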
