Category Archives: Thoughts on Test Architecture

Thoughts on the development stages of a Test Case

From my perspective on the OMG’s UML Testing Profile

Conceptual Design Stage

The system architecture description model and its sub-system architecture description models own the conceptual test case design for each requirement that has a SysML «Verify» dependency to a TestCase model element (as of version 1.4 of the language standard).  In the UML Testing Profile (UTP), a TestCase extends the UML metaclasses Behavior and Operation.  A TestCase ALWAYS returns a Verdict.
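To make that last point concrete, here is a minimal sketch in Python (hypothetical names, not UTP notation) of a test case behavior that always ends by returning one of UTP’s verdict values:

```python
from enum import Enum

class Verdict(Enum):
    """UTP's predefined verdict values."""
    PASS = "pass"
    INCONCLUSIVE = "inconclusive"
    FAIL = "fail"
    ERROR = "error"

def example_test_case(sut_response: str, expected: str) -> Verdict:
    """A test case behavior: assess the SUT reaction and always return a Verdict."""
    return Verdict.PASS if sut_response == expected else Verdict.FAIL
```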

[Figure: TestObjectiveSpecificationAtSRRb]

Each requirement subject to verification should have such a dependency

The TestCase is named, and it has a dependency relationship to its TestObjectiveSpecification model element per UTP v1.2.

A TestRequirement model element may be used to refine the TestObjectiveSpecification model element and act as a bridge between the System of Interest model and the Test System model

[Figure: TestReqCon]

Note: The UTP 2 specification continues to mature the concepts behind these elements of a test specification.

The TestObjectiveSpecification expresses the conceptual design of the TestCase in a natural-language statement.  This statement identifies the most important components of the TestCase from the perspective of the Accepting Stakeholder’s concerns. The components are: the pre- and invariant conditions of the system under test (SUT) environment, the SUT and its interacting actors or systems, their input(s) to the SUT, and the expected behavior of the SUT in response to those inputs.

This collection of components expresses the ‘acceptance criterion’ of the Accepting Stakeholder.  A TestCase, satisfying its TestObjectiveSpecification, will produce objective evidence verifying, to the satisfaction of the Accepting Stakeholder, that the system requirement has been satisfied by the responsible system feature.

The goal of the Test Objective is to articulate an abstract, yet understandable and unambiguous, description of the purpose of the TestCase.  The TestObjectiveSpecification does not state ‘how’ to implement the TestCase; the conceptual design treats the TestCase as a “black box”.
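The components listed above can be pictured as a simple record. The following is a minimal sketch (Python, with hypothetical field names; UTP does not prescribe any such representation) of the information a TestObjectiveSpecification conceptually carries:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestObjectiveSpecification:
    """Conceptual ('black box') design of a TestCase, stated from the
    Accepting Stakeholder's point of view. Field names are illustrative."""
    objective: str                                            # natural-language purpose of the TestCase
    preconditions: List[str] = field(default_factory=list)    # pre/invariant conditions of the SUT environment
    actors: List[str] = field(default_factory=list)           # interacting actors or systems
    inputs: List[str] = field(default_factory=list)           # stimuli supplied to the SUT
    expected_behavior: str = ""                               # expected SUT response (the acceptance criterion)

# Example drawn from the lawnmower requirement discussed later in this post
mowing_capacity = TestObjectiveSpecification(
    objective="Verify the Hourly Mowing Capacity of the Lawnmower System",
    preconditions=["level terrain", "1 acre of Acme American Standard Lawn Grass at 1 week's growth"],
    actors=["Lawnmower user"],
    inputs=["user signal transitioning the lawnmower to the [operate] state"],
    expected_behavior="at least 1 acre mown per hour",
)
```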

The Test System architecture imports the conceptual design of a TestCase as the foundation for developing the Test Design Specification for that TestCase.

[Figure: TestObjectiveSpecificationAtSRRa]

A conceptual test case design may be realized by the Test System as a collection of test cases.

[Figure: TestCaseConcepta]

Logical Design Stage

The Test Specification of the System Architecture is responsible for the Test Design Specification of each conceptual test case imported from the system architecture.

A logical design identifies the components of a test case’s structure and behavior, and their relationships.

The logical design activity is iterative, ending when the specification can be realized.

[Figure: TestObjectiveSpecificationAtSFRa]

Test Architecture and high-level test design are performed.

Engineering tasks include: requirements analysis, evaluation, allocation, and component specification.

The role each component plays in its test case, and its test objective responsibilities, are defined and traced.

A logical design should not specify concrete values for test data properties.  Instead, specify concepts for test data properties (e.g., in bounds, at boundary, out of bounds).

[Figure: TestReqLog]

Logical properties can be realized as concrete data values through transformation rules
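As an illustration of such transformation rules, the sketch below (Python; the function, the rule names, and the 32 °F lower bound are assumptions made only for this example) turns the logical concepts in bounds, at boundary, and out of bounds into concrete values for a numeric test data property:

```python
# Hypothetical transformation rules: each rule maps a logical test data
# concept onto concrete values derived from the property's specified range.
def realize(concept: str, lower: float, upper: float, step: float = 1.0) -> list:
    rules = {
        "in bounds":     [lower + (upper - lower) / 2.0],   # a representative interior value
        "at boundary":   [lower, upper],                     # the boundary values themselves
        "out of bounds": [lower - step, upper + step],       # just outside the valid range
    }
    return rules[concept]

# Logical design: "[Max Ambient Temperature] of up to 85 degrees F"
print(realize("at boundary", lower=32.0, upper=85.0))    # -> [32.0, 85.0]
print(realize("out of bounds", lower=32.0, upper=85.0))  # -> [31.0, 86.0]
```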

Allocate a TestCase to its owning TestContext.

A TestContext owns one or more TestCases having a common test configuration and a common, composited TestObjectiveSpecification.

Test component / SUT connections are detailed by the Test Configuration of the TestContext

Typically an internal block diagram when using SysML

Document the execution sequence/schedule of a TestContext’s TestCases
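A minimal sketch (Python; hypothetical names, not UTP tooling) of a TestContext grouping test cases that share one test configuration, with a documented execution schedule and every test case returning a verdict:

```python
from typing import Callable, Dict, List

class TestContext:
    """Groups TestCases sharing a test configuration; the execution
    sequence is documented explicitly. Verdicts are plain strings here
    (pass/fail/inconclusive/error) for brevity."""
    def __init__(self, configuration: dict):
        self.configuration = configuration               # shared SUT / test component connections
        self.test_cases: Dict[str, Callable[[dict], str]] = {}
        self.schedule: List[str] = []                     # documented execution sequence

    def allocate(self, name: str, test_case: Callable[[dict], str]) -> None:
        self.test_cases[name] = test_case
        self.schedule.append(name)

    def run(self) -> Dict[str, str]:
        return {name: self.test_cases[name](self.configuration) for name in self.schedule}

# Usage: two test cases allocated to their owning TestContext
ctx = TestContext(configuration={"sut": "Lawnmower", "terrain": "level, 1 acre"})
ctx.allocate("TC-Hourly-Mowing-Capacity", lambda cfg: "pass")
ctx.allocate("TC-Fuel-Economy", lambda cfg: "pass")
print(ctx.run())
```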

Specify requirements for: the Test Environment, Test Components, test tools, test data, etc.

The Test Case Specification is fully described and can be realized

Concrete Design Stage

The Test Specification of the System Architecture is responsible for the Test Case Specification of each test case specified by the Test Design Specification.

Define test data pools, partitions and selectors

Detailed design and specification of test data values
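A minimal sketch of the pool / partition / selector relationship (Python; UTP defines the DataPool, DataPartition, and DataSelector concepts but does not prescribe an implementation, so all names here are illustrative):

```python
import random
from typing import Dict, List

class DataPool:
    """Holds concrete test data values organized into named partitions
    (equivalence classes)."""
    def __init__(self, partitions: Dict[str, List[float]]):
        self.partitions = partitions

    def select(self, partition: str, strategy: str = "first") -> float:
        """A data selector: a strategy for drawing a value from a partition."""
        values = self.partitions[partition]
        return values[0] if strategy == "first" else random.choice(values)

# Concrete design stage: the logical concepts are bound to concrete values
ambient_temperature = DataPool({
    "in bounds":     [70.0, 75.0, 80.0],
    "at boundary":   [85.0],
    "out of bounds": [86.0, 100.0],
})
print(ambient_temperature.select("at boundary"))  # -> 85.0
```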

Detailed design and specification of the test environment

The physical design of a test case defines component deployment in the test environment

Identify constraints / limitations of the test environment, data flows, or execution infrastructure

The Test Case Specification is complete when an implementation can be realized

Specification of an Abstract Verification Method for Acceptance Testing

System Requirements Document or System Specification

Definitions:

System: (1) A collection of components organized to accomplish a specific function or set of functions. [IEEE 610.12] (2) Combination of interacting elements organized to achieve one or more stated purposes. NOTE 2: In practice, the interpretation of its meaning is frequently clarified by the use of an associative noun, e.g., aircraft system. [IEEE 15288]
System of Interest: System whose life cycle is under consideration in the context of this International Standard. [IEEE 15288]
Enabling System: System that supports a system-of-interest during its life cycle stages but does not necessarily contribute directly to its function during operation. [IEEE 15288]
Test System: An enabling system supporting test activities during the life cycle of the system of interest, while not being a part of the system of interest. [Extended from IEEE 15288]
Baseline: Specification or work product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures. [IEEE 15288]
Test: (1) An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. (2) To conduct an activity as in (1). [IEEE 610.12]
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. [IEEE 610.12, IEEE 1012]
System Testing: Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. [IEEE 610.12]
Integration Testing: Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. [IEEE 610.12]
Component Testing: Testing of individual hardware or software components or groups of related components. [IEEE 610.12]
Test Case: (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. Prescriptive reference: A test case is a behavioral feature or behavior specifying tests. A test case specifies how a set of test components interact with an SUT to realize a test objective and return a verdict value. Test cases are owned by test contexts and therefore have access to all features of the test context (e.g., the SUT and test components of the composite structure). A test case always returns a verdict. [IEEE 610.12, IEEE 829-1983, UTP]
Test Case Specification: A document that specifies the test inputs, execution conditions, and predicted results for an item to be tested. [IEEE 610.12]
Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation. Prescriptive reference: A dependency used to specify the objectives of a test case or test context. A test case or test context can have any number of objectives, and an objective can be realized by any number of test cases or test contexts. Descriptive reference: A test objective is a reason or purpose for designing and executing a test case [ISTQB]. The underlying Dependency points from a test case or test context to anything that may represent such a reason or purpose. This includes (but is not restricted to) use cases, comments, or even elements from different profiles, like requirements from [SysML]. [IEEE Std 1008-1987, UTP]
Test Requirement: See Test Condition. [UTP, ISTQB]
Test Condition: An item or event of a component or system that could be verified by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element. [UTP, ISTQB]
Acceptance Criteria: The criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610.12]
Test Matrix: Features to be tested (Level Test Plan (LTP) Section 2.3), features not to be tested (LTP Section 2.4), and approaches (LTP Section 2.5) are commonly combined in a table called a Test Matrix. It contains a unique identifier for each requirement for the test (e.g., system and/or software requirements, design, or code), an indication of the source of the requirement (e.g., a paragraph number in the source document), a summary of the requirement, and an identification of one or more generic method(s) of test. [IEEE 829-2008]
Test Traceability Matrix: Provides a list of the requirements (software and/or system; may be a table or a database) that are being exercised by this level of test and shows the corresponding test cases or procedures. The requirements may be software product or software-based system functions or nonfunctional requirements for the higher levels of test, or design or coding standards for the lower levels of test. This matrix may be part of a larger Requirements Traceability Matrix (RTM) referenced by this plan that includes requirements for all levels of test and traces to multiple levels of life cycle documentation products. It may include both forward and backward tracing. [IEEE 829-2008]
Test Context: Prescriptive reference: A test context acts as a grouping mechanism for a set of test cases. The composite structure of a test context is referred to as the test configuration. The classifier behavior of a test context may be used for test control. Descriptive reference: A test context is just a top-level test case. [UTP]
Stimuli: Test data sent to the SUT in order to control it and to make assessments about the SUT when receiving the SUT reactions to these stimuli. [UTP]
Observation: Test data reflecting the reactions from the SUT and used to assess the SUT reactions, which are typically the result of a stimulus sent to the SUT. [UTP]
SUT: Prescriptive reference: Stereotype applied to one or more properties of a classifier to specify that they constitute the system under test. The features and behavior of the SUT are given entirely by the type of the property to which the stereotype is applied. Descriptive reference: Refers to a system, subsystem, or component which is being tested. An SUT can consist of several objects. The SUT is stimulated via its public interface operations and signals by the test components. No internals of an SUT are known or accessible during test case execution, due to its black-box nature. [UTP]

The IT&E domain philosophy employs “black-box” test methods at the higher levels of test, so it is highly dependent on behavior specifications.  Integration philosophy is highly dependent on “thread” knowledge. The IT&E domain seeks to drive a defined need for system test requirements into the system requirements document/system subsystem specification, something sorely lacking today.

If consistent with MIL-STD-961E and MIL-HDBK-520A, the System Requirements Document (SRD) or System/Subsystem Specification (SSS) provides traceability from each of its requirements to a system element which will «satisfy» the requirement and to a system feature element which will «verify» the requirement.  “The baseline management system should allow for traceability from the lowest level component all the way back to the user capability document or other source document from which it was derived.” (Defense Acquisition Guide)

IEEE 829-2008 requires the Test System to produce a data artifact identifying which system features are tested and which are not: this is the Test Matrix.

IEEE 829-2008 requires the Test System to produce a data artifact tracing each system feature requiring test to the test case performing the verification and to the requirement verified by that test case: this is the Test Traceability Matrix.  These matrices are required at each level of testing.

To satisfy the Test System’s test architecture data needs for the System Acceptance Test Event architecture component, the SRD/SSS must provide a data artifact containing each requirement, the system feature which satisfies it, and the test case which verifies it, whenever the feature’s implementation requires verification at the System Acceptance Test Event.  This artifact may be extracted from the content of the Verification Matrices identified by MIL-HDBK-520A.
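As a rough sketch (Python; IEEE 829-2008 prescribes the content of these matrices, not a data structure, so the field names below are hypothetical), the traceability artifact described above might carry rows like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TraceabilityRow:
    """One row of a requirement-to-test trace (hypothetical structure)."""
    requirement_id: str          # e.g., an SRD/SSS paragraph number
    source: str                  # source of the requirement (e.g., capability document)
    satisfying_feature: str      # system feature with the «satisfy» relationship
    verification_method: str     # inspection, analysis, demonstration, or test
    test_cases: List[str]        # test case(s) with the «verify» relationship

def test_matrix(rows: List[TraceabilityRow]) -> List[tuple]:
    """Which features are covered by a test and which are not (the Test Matrix view)."""
    return [(r.satisfying_feature, bool(r.test_cases)) for r in rows]

rows = [
    TraceabilityRow("SRD-3.2.1", "SRD para 3.2.1", "Foliage Cutting Capability",
                    "Analysis", ["TC-Hourly-Mowing-Capacity"]),
]
print(test_matrix(rows))   # -> [('Foliage Cutting Capability', True)]
```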

A system model element with a stereotype of «testCase» provides the verification method (i.e., inspection, analysis, demonstration, or test), a behavior specification for the method, the acceptance criteria for the outcome of the behavior, and the execution conditions of the behavior (e.g., pre-conditions, post-conditions, and conditions of performance).
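A rough sketch (Python; a SysML/UTP model element is not code, so this dataclass and its field names are stand-ins) of the information such a «testCase» element carries:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseElement:
    """Illustrative stand-in for a model element stereotyped «testCase»."""
    verification_method: str                 # inspection, analysis, demonstration, or test
    behavior_specification: str              # reference to the behavior realizing the method
    acceptance_criteria: str                 # criteria applied to the behavior's outcome
    pre_conditions: List[str] = field(default_factory=list)
    post_conditions: List[str] = field(default_factory=list)
    conditions_of_performance: List[str] = field(default_factory=list)
```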

Each requirement in an SRD/SSS should have one (preferably only one) test case responsible for producing the evidence that the system’s implementation satisfies the requirement.  The SysML profile supports documenting this dependency between a requirement and its test case.  Compound requirements may require multiple test cases, but this is a signal that the requirement should be decomposed into multiple atomic requirements, a best practice.

A specification for a test case includes inputs, execution conditions, and predicted results.  A test case specification has more in common with a use case description than with a sequence of actions describing a use case scenario.  A test procedure is a sequence of steps realizing the test case’s behavior; it is concrete and contains concrete test data.  A test case specification does not provide a detailed test procedure specification, rather just the requirements for the test case (i.e., what is to be tested).

The test case specifications in an SRD/SSS set the acquirer stakeholder’s expectations for the system’s acceptance test event. The SRD/SSS test case specifications are required by the Integration and Test System architecture to construct the test matrix and the test traceability matrix (IEEE 829).

The test case specification reflects the test requirements/test conditions necessary to produce the evidence that the system’s implementation satisfies the requirements.  Test case specifications are held to the same quality standards as system specifications: they must be complete, consistent, correct, feasible, necessary, prioritized, unambiguous, and traceable.

A test requirement/test condition statement is similar in content to a performance requirement, inasmuch as it specifies the requisite conditions for achieving the stated performance or the test observation/expected result.

Performance Requirement example:

“The Lawnmower System shall operate with [Hourly Mowing Capacity] of at least 1 level ground acre per hour, at [Max Elevation] up to 5,000 feet above sea level, and [Max Ambient Temperature] of up to 85 degrees F., at up to 50% [Max Relative Humidity], for [Foliage Cutting Capacity] of Acme American Standard one week Lawn Grass.”

The Lawnmower System shall

•[Foliage Cutting Capability]
–mow
•[Foliage Cutting Capability Performance]
–a minimum of 1 acre per hour
•[Input & Performance Constraint]
–of Acme American Standard Lawn Grass, one week’s growth (input object state), on level ground (environment condition)
•[Performance Constraint]
–at [Max Elevation] up to 5,000 feet above sea level,
–and [Max Ambient Temperature] at up to 85 degrees F.,
–and [Max Relative Humidity] at up to 50% relative humidity

•Mow is a Behavioral Feature of the Lawnmower.
–The Mow() operation returns “cut grass”

[Figure: LawnmowerElements]

“The Lawnmower System shall operate with [Fuel Economy] of at least 1 hour / gallon at [Min Elevation] of 0 feet ASL, at [Max Ambient Temperature] 85 degrees F., 50% [Max Relative Humidity], for Acme American Standard one week Lawn Grass.”

From: “Requirements Statements Are Transfer Functions: An Insight from Model-Based Systems Engineering,” William D. Schindel, Copyright © 2005 by William D. Schindel.

These two stated performance requirements have a relationship with each other.  The second requirement constrains the acceptance criterion for the first: not only must the Hourly Mowing Capacity be achieved, it must be achieved using no more than 1 gallon of fuel.  The constraint must be normalized for altitude, as the two requirements differ in this pre-condition.  It is the responsibility of the Test System’s test architecture, not the SRD/SSS, to group these requirements into an efficient test context.  The SRD/SSS should only state the test requirement for the test case which verifies the requirement’s satisfaction.

The requirements’ verification method is Analysis.  The problems are physics-based: altitude and ambient temperature directly affect air pressure density, and air pressure density adversely affects the lawnmower’s fuel efficiency and the internal combustion engine’s efficiency in transforming gasoline into mechanical force.  These environmental conditions would be difficult and costly for a test facility to reproduce.

Blocks in the test:

  1. Lawnmower, the SUT; Property – Hourly Mowing Capacity
  2. Lawnmower user
  3. Acme American Standard Lawn Grass; initial state [1 week’s growth], end state [Mowed Height Standard]; Property – dimension (1 acre)
  4. Terrain; Properties – level, altitude. Host for the Acme American Standard Lawn Grass test coupon
  5. Atmosphere (air pressure density); Properties – temperature, relative humidity, barometric pressure

SUT = Feature(Hourly Mowing Capacity)

•Pre-Conditions:
–Temperature, Air Pressure Density, level terrain with Acme American Standard Lawn Grass(1 acre) state=[1 weeks growth]
–Lawnmower state=[idle] Property – operating temp = True
•Input:
–Control Flow = User signal to transition lawnmower state = [operate]
–Object Flow = Acme American Standard Lawn Grass [Un-mown]
–Start 1 hour elapsed timer
•Output:
–Object Flow = Acme American Standard Lawn Grass [Mown]
•Post Condition:
–Lawnmower user signal to transition lawnmower state = [stop]
–1 Acre of Acme American Standard Lawn Grass state=[Mown]
–Fuel consumed <= 1 gallon (normalized for test environment at runtime)
•Observation Assertion:
–1 Hour Elapsed Timer not = Zero
–Acme American Standard Lawn Grass state=[Mowed Height Standard]
•Verdict(Pass)
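To show how the structure above could be executed, here is a minimal sketch in Python (the stubbed lawnmower and every name in it are hypothetical stand-ins for the real SUT and test components):

```python
def lawnmower_stub(acres: float) -> dict:
    """Simulated SUT reaction: reports the grass state, elapsed time, and fuel consumed."""
    return {"grass_state": "Mown", "elapsed_hours": 0.9, "fuel_gallons": 0.8}

def hourly_mowing_capacity_test() -> str:
    # Pre-conditions: level terrain, 1 acre of grass at [1 week's growth], lawnmower at operating temperature
    grass = {"type": "Acme American Standard Lawn Grass", "state": "Un-mown", "acres": 1.0}
    # Input: user signal transitions the lawnmower to [operate]; the 1-hour elapsed timer starts
    reaction = lawnmower_stub(grass["acres"])
    # Post-conditions and observation assertions
    within_one_hour = reaction["elapsed_hours"] <= 1.0   # elapsed timer has not expired
    fuel_ok = reaction["fuel_gallons"] <= 1.0            # normalized for the test environment at runtime
    mown = reaction["grass_state"] == "Mown"
    # A test case always returns a verdict
    return "pass" if (within_one_hour and fuel_ok and mown) else "fail"

print(hourly_mowing_capacity_test())  # -> pass
```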

The terrain and the Acme American Standard Lawn Grass need to be realized in the Test System’s architecture as test components.  Their properties should be controlled by the analysis, rather than by the explicit statements of the SRD/SSS requirement text.  Analysis verifies the SRD/SSS requirement explicitly; the test case providing a measure of Hourly Mowing Capacity serves to confirm the analysis.  It implicitly verifies that the requirement is satisfied, rather than providing an explicit verification of that satisfaction.

Given that the environmental parameters are difficult to control on a large scale, the most likely approach the test architect will take to test case design is to measure the environmental conditions and adjust the test case’s acceptance criteria to account for the ambient conditions at test time, rather than to control the environment.  The test coupon may also be adjusted in size based on the criticality of the performance requirement and the uncertainties in measuring Hourly Mowing Capacity and its confidence interval.  In a risk-driven test architecture, as the criticality of the requirement increases, so should the investment in verification.  If the confidence interval for this requirement is low, then a very terse test case supporting the formal analysis may satisfy the acquirer’s acceptance criterion.
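One way such an adjustment of the acceptance criterion could look is sketched below (Python; the correction factors and the 59 °F reference temperature are placeholders only, since the real values would come from the performance analysis):

```python
# Hypothetical adjustment of the fuel-consumption limit for the ambient
# conditions measured at test time. The factors below are placeholders;
# the performance analysis would supply the real ones.
def adjusted_fuel_limit(baseline_gallons: float,
                        altitude_ft: float,
                        temperature_f: float,
                        analysis_factors: dict) -> float:
    """Scale the 1 gallon acceptance limit to the measured ambient at the test site."""
    altitude_correction = 1.0 + analysis_factors["per_1000_ft"] * (altitude_ft / 1000.0)
    temperature_correction = 1.0 + analysis_factors["per_deg_f_above_ref"] * max(0.0, temperature_f - 59.0)
    return baseline_gallons * altitude_correction * temperature_correction

factors = {"per_1000_ft": 0.02, "per_deg_f_above_ref": 0.001}  # placeholder values
print(adjusted_fuel_limit(1.0, altitude_ft=2500.0, temperature_f=75.0, analysis_factors=factors))
```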

Additional Background:

From the ISTQB Glossary of Testing Terms:

test requirement: See test condition.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

In the latest INCOSE Systems Engineering Handbook, v3.2.2, the term “test requirement” no longer appears.  The term was employed in version 3.1.5 from 2007, but has since been removed.

Excerpt from Version 3.1.5:

Each test requirement should be verifiable by a single test. A requirement requiring multiple tests to verify should be broken into multiple requirements. There is no problem with one test verifying multiple requirements; however, it indicates a potential for consolidating requirements. When the system hierarchy is properly designed, each level of specification has a corresponding level of test during the test phase.  If element specifications are required to appropriately specify the system, element verification should be performed.

And

establish a basis for test planning, system-level test requirements, and any requirements for environmental simulators

The 2010 revision that created version 3.2.2 harmonized the handbook with ISO/IEC 15288:2008, with the intent of elaborating the processes and activities needed to execute the processes of ISO/IEC 15288:2008.

A. Wayne Wymore’s treatise “Model-Based Systems Engineering” includes “System Test Requirements” as a key element of his mathematical theory of systems design.

SDR (system design requirements) = (IOR, TYR, PR, CR, TR, STR (system test requirements))

A concept example:

There is a system requirement that is a “Key Performance Parameter” (KPP).  The requirement must be satisfied or the system is not acceptable.  There is a range of environmental execution conditions under which the KPP must be achieved.  There is only a single event which triggers the functional behavior of the system (e.g., a thread) and only a single standard (performance standard) to evaluate the system output (e.g., acceptance criteria, expected result).  Within the range of input conditions, there are two boundary condition sets that pose a performance challenge to the functional behavior’s design.  Since this performance requirement and the conditions under which the performance must be achieved are infeasible to “Test”, the verification of the requirement is by the verification method “Analysis”; all stakeholders accept this approach.

During “System Test”, an informal testing activity, test cases will execute that observe the test requirements/conditions identified by the challenging boundary condition sets.  The design of the test cases and their specifications will comply with the test requirements/conditions derived from the analysis.  The analysis predicts system behavior, and the test requirements/conditions will drive the design of test cases where the predicted system behavior is the test oracle (e.g., the acceptance criteria).

In this example the test requirement(s) or test condition(s) drive test design.  The Test Architect defines the Test Requirement/Test Condition specification during the planning phase.  The original test requirement called for a verification method of “Analysis”.  The verification method of “Analysis” was not fully satisfactory to the customer.  To build customer intimacy and refine the test requirements, test cases were added to the System Test level to address the concern and build confidence in the performance analysis.  These are informal test cases designed to validate the performance analysis within the constraints imposed by test implementation limitations.

Test Architecture Philosophy

Test (noun)

Examination

a series of questions, problems, or practical tasks to gauge somebody’s knowledge, ability, or experience

Basis for evaluation

a basis for evaluating or judging something or somebody

Trial run-through a process

a trial run-through of a process or on equipment to find out if it works

Procedure to detect presence of something

a procedure to ascertain the presence of or the properties of a substance

Architecture (noun)

Building design

the art and science of designing and constructing buildings

Building style

a style or fashion of building, especially one that is typical of a period of history or of a particular place

Structure of computer system

the design, structure, and behavior of a computer system, microprocessor, or system program, including the characteristics of individual components and how they interact

Philosophy (noun)

Examination of basic concepts

the branch of knowledge or academic study devoted to the systematic examination of basic concepts such as truth, existence, reality, causality, and freedom

School of thought

a particular system of thought or doctrine

Guiding or underlying principles

a set of basic principles or concepts underlying a particular sphere of knowledge

Set of beliefs or aims

a precept, or set of precepts, beliefs, principles, or aims, underlying somebody’s practice or conduct

The use of natural language to convey thoughts is fraught with semantic risk.  Natural language is essential for humans, yet to mitigate the semantic risk a rigorous grammar is required within a community of interest.  Considerable research documents what a monumental undertaking it is to instantiate a universal lexicon; however, there is strong evidence that a rigorous lexicon within a community is possible and of significant value.  Towards this end I hope to convey a distillation of my research over the last few years.

“The Four Horsemen”

Boundaries are explicit

Services are Autonomous

Services share Schema and Contract, not Class

Compatibility is based upon Policy

Any entity, without regard to its affiliation, has inherently greater value if it is employable in many places in many different ways.  To realize this objective an entity needs to possess the attributes embodied in “The Four Horsemen.”  The entity is whole and self-contained, yet it is allowed to interact with other entities via its interfaces.  The interface has a rigorous specification of how it interacts with other interfaces.  The entity exists within a community where all entities observe the same principles of realization.  An entity possessing these attributes is inherently more flexible when choreographing or orchestrating with other entities to realize a more complex entity.  We could assemble a complex entity from purpose-specific component entities into a monolithic structure.  It might be a thing of great beauty, but it is purpose-built, and its maintenance and flexibility in the face of change will come at great cost.

For this reason I propose a Test Architecture philosophy that embraces these tenets.  It requires strict adherence to a framework.  Entities have explicit boundaries and they are autonomous.  Interfaces adhere to strict standards.  Policy governance ensures that entities are interoperable and reusable.

This is not a trivial task.  It requires considerable architectural design effort in the framework, but the reward is downstream cost reduction.

Policy One – Define your problem, then look for a solution in the standards space.  Start with the standards closest to the problem space of the stakeholders.  Next, employ standards with wide adoption in the problem space.

Employing standards from which we extract patterns to solve our problem and convey our solution increases the probability that our solution is interoperable with products employing the same patterns.  Standards evoke patterns.  Using patterns in the solution space fosters the emergence of intuitive cognitive recognition of the solution.  The brain of the human animal thrives on recognizing patterns; it is superlative in that regard.

Policy Two – Define a lexicon.  The lexicon is policy.  Embrace the lexicon.  Amend the lexicon as the world changes and as new realities and understandings emerge.  DoDAF is evolving because a broad range of stakeholders realized early that the original specification had serious shortcomings and limitations.  It underwent a rapid transition to version 1.5 and then to version 2.  Version 2 has had to overcome significant obstacles created by the entities involved in the evolution; a lack of inclusive collaboration and compromise is likely to blame.  There also appears to have been some practice of dogma by some entities; they no longer participate in the evolution of the standard.  The stakeholders of the UPDM (UML Profile for DoDAF and MODAF) appear likely to adopt the new proposed standard.  We might be wise to draw our lexicon from the UML, tailored to the DoDAF influence.

Policy Three – Define a framework for the Test Architecture.  Enforce Policy One and Two on the framework.