System Requirements Document or System Specification
| Term | Definition | Source |
| --- | --- | --- |
| System | (1) A collection of components organized to accomplish a specific function or set of functions. (2) Combination of interacting elements organized to achieve one or more stated purposes. NOTE 2: In practice, the interpretation of its meaning is frequently clarified by the use of an associative noun, e.g., aircraft system. | IEEE 610.12; IEEE 15288 |
| System of Interest | System whose life cycle is under consideration in the context of this International Standard. | IEEE 15288 |
| Enabling System | System that supports a system-of-interest during its life cycle stages but does not necessarily contribute directly to its function during operation. | IEEE 15288 |
| Test System | An enabling system supporting test activities during the life cycle of the system of interest, while not being a part of the system of interest. | Extended from IEEE 15288 |
| Baseline | Specification or work product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures. | IEEE 15288 |
| Test | (1) An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. (2) To conduct an activity as in (1). | IEEE 610.12 |
| Acceptance Testing | Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. | IEEE 610.12; IEEE 1012 |
| System Testing | Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. | IEEE 610.12 |
| Integration Testing | Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. | IEEE 610.12 |
| Component Testing | Testing of individual hardware or software components or groups of related components. | IEEE 610.12 |
| Test Case | (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. Prescriptive reference: A test case is a behavioral feature or behavior specifying tests. A test case specifies how a set of test components interact with an SUT to realize a test objective and return a verdict value. Test cases are owned by test contexts and therefore have access to all features of the test context (e.g., the SUT and test components of the composite structure). A test case always returns a verdict. | IEEE 610.12; IEEE 829-1983; UTP |
| Test Case Specification | A document that specifies the test inputs, execution conditions, and predicted results for an item to be tested. | IEEE 610.12 |
| Test Objective | An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation. Prescriptive reference: A dependency used to specify the objectives of a test case or test context. A test case or test context can have any number of objectives, and an objective can be realized by any number of test cases or test contexts. Descriptive reference: A test objective is a reason or purpose for designing and executing a test case [ISTQB]. The underlying Dependency points from a test case or test context to anything that may represent such a reason or purpose, including (but not restricted to) use cases, comments, or even elements from different profiles, such as requirements from [SysML]. | IEEE Std 1008-1987; UTP |
| Test Requirement | See Test Condition. | UTP [ISTQB] |
| Test Condition | An item or event of a component or system that could be verified by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element. | UTP [ISTQB] |
| Acceptance Criteria | The criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. | IEEE 610.12 |
| Test Matrix | Features to be tested (Level Test Plan (LTP) Section 2.3), features not to be tested (LTP Section 2.4), and approaches (LTP Section 2.5) are commonly combined in a table called a Test Matrix. It contains a unique identifier for each requirement for the test (e.g., system and/or software requirements, design, or code), an indication of the source of the requirement (e.g., a paragraph number in the source document), a summary of the requirement, and an identification of one or more generic method(s) of test. | IEEE 829-2008 |
| Test Traceability Matrix | A list of the requirements (software and/or system; may be a table or a database) that are being exercised by this level of test, showing the corresponding test cases or procedures. The requirements may be software product or software-based system functions or nonfunctional requirements for the higher levels of test, or design or coding standards for the lower levels of test. This matrix may be part of a larger Requirements Traceability Matrix (RTM) referenced by this plan that includes requirements for all levels of test and traces to multiple levels of life cycle documentation products. It may include both forward and backward tracing. | IEEE 829-2008 |
| Test Context | Prescriptive reference: A test context acts as a grouping mechanism for a set of test cases. The composite structure of a test context is referred to as the test configuration. The classifier behavior of a test context may be used for test control. Descriptive reference: A test context is just a top-level test case. | UTP |
| Stimuli | Test data sent to the SUT in order to control it and to make assessments about the SUT when receiving the SUT reactions to these stimuli. | UTP |
| Observation | Test data reflecting the reactions from the SUT, used to assess the SUT reactions, which are typically the result of a stimulus sent to the SUT. | UTP |
| SUT | Prescriptive reference: Stereotype applied to one or more properties of a classifier to specify that they constitute the system under test. The features and behavior of the SUT are given entirely by the type of the property to which the stereotype is applied. Descriptive reference: Refers to a system, subsystem, or component which is being tested. An SUT can consist of several objects. The SUT is stimulated via its public interface operations and signals by the test components. No internals of an SUT are known or accessible during test case execution, due to its black-box nature. | UTP |
The IT&E domain philosophy employs “black-box” test methods at the higher levels of test, so it is highly dependent on behavior specifications. The integration philosophy is highly dependent on “thread” knowledge. The IT&E domain seeks to drive a defined need for system test requirements into the system requirements document/system subsystem specification, something sorely lacking today.
Consistent with MIL-STD-961E and MIL-HDBK-520A, the System Requirements Document (SRD) or System/Subsystem Specification (SSS) provides traceability from each of its requirements to a system element which will «satisfy» the requirement and to a system feature element which will «verify» the requirement. “The baseline management system should allow for traceability from the lowest level component all the way back to the user capability document or other source document from which it was derived” (Defense Acquisition Guidebook).
IEEE 829-2008 requires the Test System to produce a data artifact identifying which system features are tested and which are not; this is the Test Matrix.
IEEE 829-2008 also requires the Test System to produce a data artifact tracing each system feature requiring test to the test case performing the verification and to the requirement verified by that test case; this is the Test Traceability Matrix. These matrices are required at each level of testing.
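The two IEEE 829 artifacts can be sketched as simple data structures. The sketch below is illustrative only; all feature, requirement, and test case identifiers are hypothetical:

```python
# Minimal sketch of an IEEE 829 Test Matrix and Test Traceability Matrix.
# All identifiers and entries are hypothetical, for illustration only.

# Test Matrix: which system features are tested at this level, and how.
test_matrix = {
    "FEAT-01": {"source": "SSS para 3.2.1", "tested": True,  "method": "test"},
    "FEAT-02": {"source": "SSS para 3.2.2", "tested": True,  "method": "analysis"},
    "FEAT-03": {"source": "SSS para 3.2.3", "tested": False, "method": None},
}

# Test Traceability Matrix: requirement -> test cases exercising it.
traceability_matrix = {
    "REQ-101": ["TC-001"],
    "REQ-102": ["TC-001", "TC-002"],  # one test case may verify several requirements
}

def untested_features(matrix):
    """Features not covered at this test level (LTP Section 2.4)."""
    return sorted(f for f, row in matrix.items() if not row["tested"])

def backward_trace(ttm, test_case):
    """Backward trace: which requirements does a test case verify?"""
    return sorted(req for req, cases in ttm.items() if test_case in cases)

print(untested_features(test_matrix))                  # ['FEAT-03']
print(backward_trace(traceability_matrix, "TC-001"))   # ['REQ-101', 'REQ-102']
```

The same structures would be built per test level, since IEEE 829 requires the matrices at each level of testing.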
To satisfy the Test System’s Test Architecture data needs for the System Acceptance Test Event architecture component, the SRD/SSS must provide a data artifact containing each requirement, the system feature which satisfies it, and the test case which verifies it, whenever the feature’s implementation requires verification at the System Acceptance Test Event. This artifact may be extracted from the content of the Verification Matrices identified by MIL-HDBK-520A.
A system model element with a stereotype of «testCase» provides the verification method (i.e., inspection, analysis, demonstration, or test), a behavior specification for the method, the acceptance criteria for the outcome of the behavior, and the execution conditions of the behavior (e.g., pre-conditions, post-conditions, and conditions of performance).
Each requirement in an SRD/SSS should have one (preferably only one) test case responsible for producing the evidence that the system’s implementation satisfies the requirement. The SysML profile supports documenting this dependency between a requirement and its test case. Compound requirements may require multiple test cases, but this is a signal that the requirement should be decomposed into multiple atomic requirements, a best practice.
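The one-test-case-per-requirement practice can be checked mechanically over the requirement-to-test-case dependencies. A hedged sketch, with hypothetical identifiers:

```python
# Sketch: flag requirements whose «verify» trace suggests a quality problem.
# More than one test case signals a compound requirement (a decomposition
# candidate); no test case signals a traceability gap.
# All identifiers are hypothetical.

verify_links = {           # requirement -> test cases that «verify» it
    "REQ-201": ["TC-010"],             # the preferred 1:1 case
    "REQ-202": ["TC-011", "TC-012"],   # compound: decomposition candidate
    "REQ-203": [],                     # untraced: quality defect
}

def decomposition_candidates(links):
    """Requirements needing more than one test case to verify."""
    return sorted(r for r, tcs in links.items() if len(tcs) > 1)

def untraced(links):
    """Requirements with no verifying test case at all."""
    return sorted(r for r, tcs in links.items() if not tcs)

print(decomposition_candidates(verify_links))   # ['REQ-202']
print(untraced(verify_links))                   # ['REQ-203']
```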
A specification for a test case includes inputs, execution conditions, and predicted results. A test case specification has more in common with a use case description than with a sequence of actions describing a use case scenario. A test procedure is a sequence of steps realizing the test case’s behavior; it is concrete and contains concrete test data. A test case specification does not provide a detailed test procedure specification, but rather just the requirements for the test case (e.g., what is to be tested).
The test case specifications in an SRD/SSS set the acquirer stakeholder’s expectations for the system’s acceptance test event. The SRD/SSS test case specifications are required by the Integration and Test System Architecture to construct the Test Matrix and the Test Traceability Matrix (IEEE 829).
The test case specification reflects the test requirements/test conditions necessary to produce the evidence that the system’s implementation satisfies the requirements. Test case specifications are held to the same quality standards as system specifications: they must be complete, consistent, correct, feasible, necessary, prioritized, unambiguous, and traceable.
A test requirement/test condition statement is similar in content to a performance requirement, inasmuch as it specifies the requisite conditions for achieving the stated performance or test observation/expected result.
Performance Requirement example:
“The Lawnmower System shall operate with [Hourly Mowing Capacity] of at least 1 level ground acre per hour, at [Max Elevation] up to 5,000 feet above sea level, and [Max Ambient Temperature] of up to 85 degrees F., at up to 50% [Max Relative Humidity], for [Foliage Cutting Capacity] of Acme American Standard one week Lawn Grass.”
Second performance requirement example:
“The Lawnmower System shall operate with [Fuel Economy] of at least 1 hour per gallon at [Min Elevation] of 0 feet above sea level, at [Max Ambient Temperature] of up to 85 degrees F., at up to 50% [Max Relative Humidity], for Acme American Standard one week Lawn Grass.”
From: Requirements Statements Are Transfer Functions: An Insight from Model-Based Systems Engineering, Author, William D. Schindel, Copyright © 2005 by William D. Schindel.
These two stated performance requirements have a relationship with each other. The second requirement constrains the acceptance criterion for the first: not only must the Hourly Mowing Capacity be achieved, it must be achieved using no more than 1 gallon of fuel. The constraint must be normalized for altitude, as the two requirements differ in this pre-condition. It is the responsibility of the Test System’s Test Architecture, not the SRD/SSS, to group these requirements into an efficient test context. The SRD/SSS should only state the test requirement for the test case which verifies the requirement’s satisfaction.
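The altitude normalization can be illustrated numerically. The sketch below is hypothetical: the linear model and the 3%-per-1,000-feet derating rate are assumptions standing in for the engine-specific air-density analysis a real program would supply:

```python
# Hypothetical normalization of the Fuel Economy constraint from its stated
# pre-condition (0 ft above sea level) to the Mowing Capacity pre-condition
# (up to 5,000 ft). The derating rate and linear form are illustrative only.

DERATE_PER_1000_FT = 0.03   # assumed fractional loss per 1,000 ft of altitude

def normalized_fuel_economy(hours_per_gallon, from_alt_ft, to_alt_ft):
    """Adjust an hours-per-gallon criterion between altitude pre-conditions."""
    delta_kft = (to_alt_ft - from_alt_ft) / 1000.0
    return hours_per_gallon * (1.0 - DERATE_PER_1000_FT * delta_kft)

# Acceptance criterion at 5,000 ft, derived from the 1 hr/gal figure at sea level:
print(round(normalized_fuel_economy(1.0, 0, 5000), 3))   # 0.85
```

The point is only that the Test Architecture, not the SRD/SSS, owns this reconciliation between the two requirements’ pre-conditions.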
The requirements’ verification method is Analysis; the problems are physics based. Altitude and ambient temperature directly affect air density, and reduced air density degrades the efficiency of the internal combustion engine in transforming gasoline into mechanical force, and with it the lawnmower’s fuel economy. These environmental conditions would be difficult and costly for a test facility to reproduce.
Blocks in the test:
- Lawnmower (SUT); property: Hourly Mowing Capacity
- Lawnmower user
- Acme American Standard Lawn Grass; initial state: [1 week’s growth]; end state: [Mowed Height Standard]; property: dimension (1 acre)
- Terrain; properties: Level, Altitude; host for the Acme American Standard Lawn Grass test coupon
- Atmosphere (air pressure density); properties: temperature, relative humidity, barometric pressure

SUT = Feature(Hourly Mowing Capacity)
The terrain and the Acme American Standard Lawn Grass need to be realized in the Test System’s Architecture as test components. Their properties should be controlled by the analysis rather than by the explicit statements of the SRD/SSS requirement text. The analysis verifies the SRD/SSS requirement explicitly; the test case providing a measure of Hourly Mowing Capacity serves to confirm the analysis. It implicitly confirms that the requirement is satisfied rather than explicitly verifying the satisfaction.
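The test configuration for this example can be sketched as plain classes: a UTP-style test context grouping the SUT and the test components listed above. The class names, property values, and verdict logic are illustrative only:

```python
# Illustrative sketch of the test configuration: the SUT and test components
# grouped by a test context, in the UTP sense. All values are placeholders.
from dataclasses import dataclass, field

@dataclass
class Lawnmower:                 # SUT; feature under test: Hourly Mowing Capacity
    hourly_mowing_capacity: float = 0.0   # acres/hour, measured during the test

@dataclass
class GrassCoupon:               # test component: Acme American Standard Lawn Grass
    growth_weeks: int = 1
    mowed_to_standard: bool = False
    acres: float = 1.0

@dataclass
class Terrain:                   # test component hosting the grass coupon
    level: bool = True
    altitude_ft: float = 0.0

@dataclass
class Atmosphere:                # test component (measured, not controlled)
    temp_f: float = 85.0
    relative_humidity: float = 0.5
    pressure_inhg: float = 29.92

@dataclass
class MowingTestContext:         # test context grouping SUT and test components
    sut: Lawnmower = field(default_factory=Lawnmower)
    coupon: GrassCoupon = field(default_factory=GrassCoupon)
    terrain: Terrain = field(default_factory=Terrain)
    atmosphere: Atmosphere = field(default_factory=Atmosphere)

    def verdict(self, required_capacity: float) -> str:
        """Return a UTP-style verdict for the capacity test case."""
        ok = self.sut.hourly_mowing_capacity >= required_capacity
        return "pass" if ok else "fail"
```

Per UTP, the test case owned by the context has access to all of these features, and always returns a verdict.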
Given that the environmental parameters are difficult to control on a large scale, the most likely approach the test architect will take to test case design is to measure the environmental conditions and adjust the test case’s acceptance criteria to account for the test-day ambient, rather than to control the environment. The test coupon may also be adjusted in size based on the criticality of the performance requirement and the uncertainties in measuring Hourly Mowing Capacity and its confidence interval. In a risk-driven test architecture, as the criticality of the requirement increases, so should the investment in verification. If the required confidence for this requirement is low, then a very terse test case supporting the formal analysis may satisfy the acquirer’s acceptance criterion.
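The measure-and-adjust approach can be sketched as follows. The adjustment model, its coefficients, and the reference conditions are hypothetical stand-ins for the transfer function the verification analysis would actually supply:

```python
# Sketch: derive the test-day acceptance criterion from measured ambient
# conditions instead of controlling the environment. The linear adjustment
# in temperature and altitude is an illustrative assumption only.

REQUIRED_ACRES_PER_HOUR = 1.0     # stated requirement at worst-case conditions
REF_TEMP_F, REF_ALT_FT = 85.0, 5000.0   # the requirement's stated pre-conditions

def adjusted_criterion(measured_temp_f, measured_alt_ft,
                       temp_coeff=0.001, alt_coeff=0.00002):
    """Tighten the criterion when test-day conditions are milder than worst case."""
    temp_term = temp_coeff * (REF_TEMP_F - measured_temp_f)
    alt_term = alt_coeff * (REF_ALT_FT - measured_alt_ft)
    return REQUIRED_ACRES_PER_HOUR * (1.0 + temp_term + alt_term)

# Milder test-day conditions than the worst case -> a tighter pass threshold:
print(round(adjusted_criterion(70.0, 1000.0), 4))
```

At the requirement’s stated worst-case conditions the adjusted criterion reduces to the stated 1 acre/hour.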
From the ISTQB Glossary of Testing Terms:
test requirement: See test condition.
test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]
test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]
test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]
In the latest INCOSE Systems Engineering Handbook, v3.2.2, the term “test requirement” no longer appears. The term was employed in version 3.1.5 (2007) but has since been removed.
Excerpt from Version 3.1.5:
Each test requirement should be verifiable by a single test. A requirement requiring multiple tests to verify should be broken into multiple requirements. There is no problem with one test verifying multiple requirements; however, it indicates a potential for consolidating requirements. When the system hierarchy is properly designed, each level of specification has a corresponding level of test during the test phase. If element specifications are required to appropriately specify the system, element verification should be performed.
“… establish a basis for test planning, system-level test requirements, and any requirements for environmental simulators.”
The 2010 revision that created version 3.2.2 harmonized the handbook with ISO/IEC 15288:2008, with the intent to elaborate the processes, and the activities to execute the processes, of ISO/IEC 15288:2008.
A. Wayne Wymore’s treatise “Model-Based Systems Engineering” includes “system test requirements” as a key element of his mathematical theory of systems design.
SDR (system design requirements) = (IOR, TYR, PR, CR, TR, STR (system test requirements))
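Wymore’s sextuple can be rendered as a simple structure. The component expansions below follow the common reading of Wymore’s notation (IOR = input/output requirement, TYR = technology requirement, PR = performance requirement, CR = cost requirement, TR = tradeoff requirement); the field contents are placeholders, not Wymore’s formal definitions:

```python
# Sketch of Wymore's system design requirement as a named tuple.
# Component expansions follow the common reading of his notation;
# the example contents are illustrative placeholders only.
from typing import NamedTuple

class SDR(NamedTuple):
    IOR: str   # input/output requirement
    TYR: str   # technology requirement
    PR: str    # performance requirement
    CR: str    # cost requirement
    TR: str    # tradeoff requirement
    STR: str   # system test requirement

sdr = SDR(
    IOR="mow-grass input/output behavior",
    TYR="commercially available mower technology",
    PR="at least 1 acre per hour at stated conditions",
    CR="unit production cost ceiling",
    TR="performance versus cost tradeoff rule",
    STR="acceptance test per the SSS test case specifications",
)
```

The point of interest here is that system test requirements (STR) are a first-class component of the design requirement, on equal footing with performance and cost.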
A concept example:
There is a system requirement that is a “Key Performance Parameter” (KPP): the requirement must be satisfied or the system is not acceptable. There is a range of environmental execution conditions under which the KPP must be achieved. There is only a single event which triggers the functional behavior of the system (e.g., a thread) and only a single performance standard to evaluate the system output (e.g., acceptance criteria, expected result). Of the range of input conditions, there are two boundary condition sets that pose a performance challenge to the functional behavior’s design. Since this performance requirement, under the conditions in which the performance must be achieved, is infeasible to “Test”, the requirement is verified by the method “Analysis”; all stakeholders accept this approach.
During “System Test”, an informal testing activity, test cases will execute that observe the test requirements/conditions identified by the challenging boundary condition sets. The design of the test cases and their specifications will comply with the test requirements/conditions derived from the analysis. The analysis predicts system behavior, and the test requirements/conditions drive the design of test cases in which the predicted system behavior is the test oracle (e.g., the acceptance criteria).
In this example the test requirement(s) or test condition(s) drive test design. The Test Architect defines the Test Requirement/Test Condition specification during the planning phase. The original test requirement called for a verification method of “Analysis”, which was not fully satisfactory to the customer. To build customer intimacy and refine the test requirements, test cases were added at the System Test level to address the concern and build confidence in the performance analysis. These are informal test cases designed to validate the performance analysis within the constraints imposed by test implementation limitations.
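The analysis-as-oracle pattern of this example can be sketched as follows. The boundary condition sets, the predictive model, and the tolerance are all hypothetical; a real program would take the prediction from the verification analysis itself:

```python
# Sketch: informal System Test cases at the two challenging boundary
# condition sets, with the performance analysis prediction as test oracle.
# Condition values, model, and tolerance are illustrative assumptions.

def analysis_prediction(temp_f: float, altitude_ft: float) -> float:
    """Stand-in for the verification analysis' predicted KPP output."""
    return 100.0 - 0.05 * (temp_f - 70.0) - 0.002 * altitude_ft

BOUNDARY_CONDITION_SETS = [
    {"temp_f": 125.0, "altitude_ft": 0.0},      # hot, sea level
    {"temp_f": -20.0, "altitude_ft": 8000.0},   # cold, high altitude
]

def verdict(conditions: dict, observed_output: float, tolerance: float = 2.0) -> str:
    """Pass if the observed output is within tolerance of the predicted oracle."""
    predicted = analysis_prediction(**conditions)
    return "pass" if abs(observed_output - predicted) <= tolerance else "fail"
```

A passing verdict here does not itself verify the KPP; it builds confidence that the analysis which does verify it is sound.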