All posts by gashuebr

Thought on the development stage of a Test Case

From my perspective on the OMG’s UML Testing Profile

Conceptual Design Stage

The system architecture description model and its sub-system architecture description models own the conceptual test case design for each requirement that has a SysML «Verify» dependency to a TestCase model element (per version 1.4 of the language standard).  A TestCase in the UML Testing Profile (UTP) extends the UML metaclasses Behavior and Operation.  A TestCase always returns a Verdict.
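
To make that last rule concrete, here is a minimal Python sketch of a test case that must return a verdict; the class and enum names are illustrative stand-ins, not the UTP metamodel itself.

```python
# A minimal sketch, assuming nothing beyond the rule that a test case always
# returns a verdict. Names below are illustrative, not normative UTP.
from abc import ABC, abstractmethod
from enum import Enum


class Verdict(Enum):
    """UTP-style verdict values."""
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"
    ERROR = "error"


class TestCase(ABC):
    """Stands in for a UTP TestCase: a behavior that must return a Verdict."""

    @abstractmethod
    def execute(self) -> Verdict:
        """Run the test behavior against the SUT and return a verdict."""
        ...
```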

[Figure: TestObjectiveSpecificationAtSRRb]

Each requirement subject to verification should have such a dependency

The TestCase is named and it has a dependency relationship to its TestObjectiveSpecification model element per the UTP v1.2

A TestRequirement model element may be used to refine the TestObjectiveSpecification model element and act as a bridge between the System of Interest model and the Test System model

[Figure: TestReqCon]

Note: The UTP 2 specification addresses maturing concepts in these elements of a test specification

The TestObjectiveSpecification expresses the conceptual design of the TestCase in a natural language statement.  This statement identifies the most important components of the TestCase from the perspective of the Accepting Stakeholder’s concerns. The components are: the pre- and invariant conditions of the system under test (SUT) environment, the SUT and its interacting actors or systems, their input(s) to the SUT, and the expected behavior of the SUT in response to those inputs.

This collection of components expresses the ‘acceptance criterion’ of the Accepting Stakeholder.  A TestCase, satisfying its TestObjectiveSpecification, will produce objective evidence verifying, to the satisfaction of the Accepting Stakeholder, that the system requirement has been satisfied by the responsible system feature.

The goal of the Test Objective is to articulate an abstract, yet understandable and unambiguous, description of the purpose of the TestCase.  The TestObjectiveSpecification does not state ‘how’ to implement the TestCase. The conceptual design treats the TestCase as a “black box”.
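
As an illustration of what the conceptual, black-box design captures, the sketch below collects the components named above into a simple data structure; the field names are assumptions chosen to mirror the prose, not UTP-defined properties.

```python
# Illustrative sketch of the components a TestObjectiveSpecification captures in
# natural language; field names are assumptions, not taken from the UTP standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestObjectiveSpecification:
    name: str
    preconditions: List[str] = field(default_factory=list)       # SUT environment pre/invariant conditions
    sut: str = ""                                                 # the system under test, treated as a black box
    interacting_actors: List[str] = field(default_factory=list)  # actors or systems providing inputs
    inputs: List[str] = field(default_factory=list)               # stimuli sent to the SUT
    expected_behavior: str = ""                                    # SUT response the accepting stakeholder cares about

    def as_statement(self) -> str:
        """Render the objective as a single natural-language acceptance statement."""
        return (f"Given {', '.join(self.preconditions)}, when {', '.join(self.interacting_actors)} "
                f"provide {', '.join(self.inputs)} to {self.sut}, then {self.expected_behavior}.")
```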

The Test System Architecture imports the conceptual design of a TestCase as the foundation for the development of the Test Design Specification for the TestCase

[Figure: TestObjectiveSpecificationAtSRRa]

A conceptual test case design may be realized by the Test System as a collection of test cases

[Figure: TestCaseConcepta]

Logical Design Stage

The Test Specification of the System Architecture has the responsibility for the Test Design Specification of each conceptual test case imported from the system architecture

A logical design identifies the components of a test case’s structure and behavior and their relationships.

The logical design activity is an iterative activity ending when a specification can be realized

[Figure: TestObjectiveSpecificationAtSFRa]

Test Architecture and high-level test design are performed

Engineering tasks include: requirements analysis, evaluation, allocation, and component specification

The role each component plays in its test case and its test objective responsibilities are defined and traced

A logical design should not specify concrete values for test data properties.  Instead, specify concepts for test data properties (e.g., in bounds, at boundary, out of bounds)

[Figure: TestReqLog]

Logical properties can be realized as concrete data values through transformation rules
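
For example, a transformation rule can be sketched as a mapping from a logical property name to a generator of concrete values; the range boundaries below are hypothetical parameters, not values from this post.

```python
# Sketch of transformation rules that realize a logical test data property
# (e.g., "in bounds", "at boundary", "out of bounds") as a concrete value.
# The LOW/HIGH range is an assumed placeholder.
import random
from typing import Callable, Dict

LOW, HIGH = 0.0, 100.0  # assumed valid range for the property under test

TRANSFORMATION_RULES: Dict[str, Callable[[], float]] = {
    "in_bounds": lambda: random.uniform(LOW, HIGH),
    "at_boundary": lambda: random.choice([LOW, HIGH]),
    "out_of_bounds": lambda: HIGH + 1.0,
}


def realize(logical_property: str) -> float:
    """Apply the rule that turns a logical property into concrete test data."""
    return TRANSFORMATION_RULES[logical_property]()
```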

Allocate a TestCase to its owning TestContext

A TestContext owns one or more TestCases having a common test configuration and a common composited TestObjectiveSpecification

Test component / SUT connections are detailed by the Test Configuration of the TestContext

Typically an internal block diagram when using SysML

Document the execution sequence/schedule of a TestContext’s TestCases
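
A minimal sketch of a TestContext as a grouping and scheduling mechanism, building on the TestCase and Verdict classes sketched earlier; the simple ordered execution policy is an assumption, not a UTP rule.

```python
# Sketch of a TestContext owning test cases that share one configuration and
# run in a documented order. Builds on the TestCase/Verdict sketch above.
from typing import Dict, List


class TestContext:
    """Owns test cases sharing one test configuration and runs them in sequence."""

    def __init__(self, configuration: Dict[str, str]):
        self.configuration = configuration            # common SUT / test component wiring
        self.test_cases: List["TestCase"] = []

    def add(self, test_case: "TestCase") -> None:
        self.test_cases.append(test_case)

    def run(self) -> Dict[str, "Verdict"]:
        """Execute the owned test cases in their scheduled order and collect verdicts."""
        return {type(tc).__name__: tc.execute() for tc in self.test_cases}
```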

Specify requirements for: the Test Environment, Test Components, test tools, test data, etc…

The Test Case Specification is fully described and can be realized

Concrete Design Stage

The Test Specification of the System Architecture has the responsibility for the Test Case Specification of each test case specified by the Test Design Specification

Define test data pools, partitions and selectors
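
The data concepts named above can be pictured roughly as follows; the partition names and concrete values are placeholders for illustration only.

```python
# Sketch of a data pool holding partitions of equivalent values and a selector
# that picks a value from a partition. Values shown are illustrative only.
from dataclasses import dataclass
from typing import Dict, List
import random


@dataclass
class DataPartition:
    name: str                 # e.g., "in_bounds", "at_boundary", "out_of_bounds"
    values: List[float]


class DataPool:
    def __init__(self, partitions: List[DataPartition]):
        self.partitions: Dict[str, DataPartition] = {p.name: p for p in partitions}

    def select(self, partition_name: str) -> float:
        """Data selector: choose one concrete value from the named partition."""
        return random.choice(self.partitions[partition_name].values)


pool = DataPool([
    DataPartition("in_bounds", [10.0, 50.0, 90.0]),
    DataPartition("at_boundary", [0.0, 100.0]),
    DataPartition("out_of_bounds", [-1.0, 101.0]),
])
stimulus_value = pool.select("at_boundary")
```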

Detailed design and specification of test data values

Detailed design and specification of the test environment

The physical design of a test case defines component deployment in the test environment

Identify constraints / limitations of the test environment, data flows, or execution infrastructure

The Test Case Specification is complete when an implementation can be realized

Interface Stimulator Services Enterprise

Identification
The Interface Stimulator Services Enterprise (ISSE) is a loosely coupled, modular set of simulation/stimulation service components and a core set of management services. The stimulator service components interact with SUT interfaces that normally exchange information with interfaces external to the SUT.  The ISSE provides test capabilities to the integration, verification, and validation life cycle processes. The ISSE is employable with test automation tools (e.g., HP Quality Center) and is planned for employment as a component within the system simulation trainer element.


Overview of the Proposed System
The primary objective of the ISSE is to provide information exchange stimulation capability to all information exchange interfaces external to the system. This is a fundamental enabler of basic system integration and verification operations. The interface stimulation components resident in the ISSE provide a web-services-managed interface to the models, simulations, tools, databases, and hardware used to stimulate the system’s interfaces. The stimulator service components are loosely coupled, and management of the stimulator service components is via core infrastructure services.

ISSE Overview
The ISSE is a fundamental enabler of integration and verification activities for system interfaces. Secondary design objectives are to: support integration and verification activities by simulating interfaces, support system integration and verification activities by simulating information exchange interfaces external to the system, and support trainer operations by stimulating all segment information exchange interfaces external to the segment.

[Figure: ISSE_SIL]
The design of the ISSE capability set supports evolution to fulfill the operational needs of system data and control interfaces. This expansion of ISSE functionality is largely dependent on evolving program objectives in the area of interoperability validation.
The final ISSE spiral is the trainer product to support training of operators, maintainers, and supervisors. In this deliverable instantiation the ISSE is a component of the trainer element.
Each stimulator component possesses its own web services management interface and the ISSE provides common services to manage infrastructure and the stimulator components. In addition to stimulation services, data logging functionality with time stamping is in the design plan for all stimulator components to support artifact collection automation.
Users (test personnel, instructors, supervisors) can connect and configure the stimulator components (models, simulations, tools, hardware) with specific data and parameter values to compose a stimulation environment to perform a test or conduct a training exercise. The ISSE is capable of archiving service orchestrations for future reference. Test automation tools like HP’s Quality Center can employ service request transactions directly with the ISSE’s core services interface to invoke archived orchestrations.

[Figure: ISSE_Realized]
A design objective is to provide the capability to interface with virtually any model, simulation, tool, database, or hardware. The initial development spiral will only incorporate models and simulations that match the need to integrate and verify system entities. The models and simulations developed for or incorporated into the ISSE will have varying levels of fidelity:
High – Appears to be reality from a system perspective; dynamic and variable behaviors.
Medium – System interactions have a visible but limited effect on behaviors.
Low – Correctly formatted static data without dynamic variability of behaviors.
To manage cost and risk, the functionality and fidelity of stimulator components will not evolve past the point where the components are suitable for integration and verification activities. Low and medium fidelity should suffice for many of the models and simulations. If additional functionality or greater fidelity is required to meet training and operational support objectives, the infusion of additional funding over and above that required for system integration and verification alone will be necessary.

Architecture & Design
Employ SOA design patterns and web services to manage simulation / stimulator components and component strings. Employ open standards from standards bodies with demonstrated domain influence. The Object Management Group is a prime example of one such body. Maintain loose coupling of the simulation / stimulator components.
Maintain a focus on the future evolution spiral of simulation / stimulator components and core services of the ISSE. Keep in mind the evolutionary spiral of the trainer, model use in the tactical applications supporting operations planning, and development of distributed test beds for developmental/operational test and evaluation (D/OT&E) of the total enterprise.

Background, Objectives, and Scope
The ISSE engineering effort is responsible for capturing the capability needs of the element, trainer, segment, operational support models, and system; translating these needs to requirements; and designing a capability that can evolve to provide those needs through to the foreseen end state. The systems engineering effort is also responsible for identifying models for use in simulator component development (process models) and operation (parametric models). The implementation is limited to the capability set required to integrate and verify the segment.
The segment of interest engages in system data exchanges with entities external to the system. The possible number of external entities exceeds 10,000 instances. The integration and verification activities require information exchange stimulators to succeed in the testing of these interfaces.
COTS tools exist that may satisfy basic integration and verification of interfaces, at the element level, that exclusively engage in SOA system data exchanges at a single interface. In verification situations where coordination of complex exchanges of information occurs at multiple interfaces, existing COTS tools may prove inadequate. This requirement may emerge where system data exchanges at multiple interfaces require orchestration with a complex operational mission scenario. Coordinated scripts may suffice, but they may be subject to high maintenance overhead as scenarios evolve or change.
Realization of a distributed test bed concept mandates employment of advanced interface stimulator capabilities to bridge segment interfaces to Distributed Interactive Simulation (DIS), High Level Architecture (HLA), or Test and Training Enabling Architecture (TENA) simulation / experiment environments. The complexity of a virtual system of systems experiment environment is unlikely to be supportable using simple scripted system data exchanges.
The objective of this effort is to define these near-term and far-term capabilities and develop only those essential near-term capabilities for segment integration and verification.
Operational Description

Near-Term Capability Description
The ISSE employs a core web services interface. External service consumers interact with this interface to obtain customized stimulator services. Provision for direct access to the stimulator component web services interface is required. This requirement supports component re-use in other infrastructures.
The ISSE employs composable stimulator components. The components feature a web services interface that serves to encapsulate the model, simulation, database, etc., and provides for composition of the stimulator component at environment build time. Modification of the stimulator application at run time is not required. Control of the stimulator component is required during run time. There is a clear distinction between component composition and component control. Composition implies the creation of service chains, links to external data resources, or similar configuration of complex behavior models and simulations; actions that are difficult or impossible to orchestrate in real time. This is different from the simple exposure of a service or process control at the service interface or to a proprietary simulation control interface.
The ISSE interfaces to the user’s GUI through the bus’ core web services interface. Infrastructure services support management of the bus and stimulator components, as well as composition of the bus and stimulator components.
The ISSE provides an environment configuration report service. This service provides information relating to stimulator component composition data, model or simulation version data, database version data, and bus orchestration and component deployment data.
The ISSE provides a simulator component composition service. The simulator component composition service provides the service consumer with the capability to control those simulation elements exposed to service consumers. This provides a level of service customization.
The ISSE provides a bus orchestration service. This service coordinates the behaviors of the stimulator components.
The ISSE provides a service consumer notification service. An event notification service provided to service consumers.
The ISSE provides a simulator component deployment service. Supports automated deployment of a set of stimulator components.
The ISSE and stimulator components have configurable data monitoring, acquisition, and storage capabilities.
The ISSE supports third party stimulator service requests through its native web services interface. Third party applications may be COTS Test automation tools capable of interacting with a SOA interface.
The ISSE supports interaction directly with a stimulator component by a third party application.
The ISSE supports real-time stimulation of segment interfaces.
The ISSE provides the capability to stimulate segment interfaces in non-real-time as well as against a modified epoch time reference point.
The ISSE supports automated operation by software test automation tools such as HP Quality Center via the bus’ core web services interface.
The ISSE provides an automated archive service.
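
One way to picture the core web services interface enumerated above is as a single abstract service contract; the operation names and signatures below are a hypothetical sketch, not the delivered ISSE API.

```python
# Hypothetical sketch of the ISSE core services described above, expressed as an
# abstract interface. Operation names and signatures are illustrative only.
from abc import ABC, abstractmethod
from typing import Dict, List


class IsseCoreServices(ABC):
    @abstractmethod
    def environment_configuration_report(self) -> Dict[str, str]:
        """Return component composition, model/simulation versions, and deployment data."""

    @abstractmethod
    def compose_stimulator(self, component_id: str, parameters: Dict[str, str]) -> None:
        """Configure the exposed elements of a stimulator component (build-time composition)."""

    @abstractmethod
    def orchestrate(self, schedule: List[str]) -> None:
        """Coordinate the behaviors of a set of stimulator components."""

    @abstractmethod
    def notify(self, consumer_id: str, event: str) -> None:
        """Deliver an event notification to a registered service consumer."""

    @abstractmethod
    def deploy(self, component_ids: List[str]) -> None:
        """Automated deployment of a set of stimulator components."""

    @abstractmethod
    def archive(self, orchestration_id: str) -> str:
        """Archive a service orchestration and its logged data for future reference."""
```
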
Future Capability Description
System program documents convey the concept of developing a “system in the loop” test capability for the system prior to operational deployment. A “test bed” supports the system level test capability.
The concept of a test bed implies that a system or portion of a system is placed in an environment that exercises the system under test (SUT) as if it were in a real world situation. The test bed provides interfaces to other systems and/or portions of the same system that stimulate the SUT and accept and respond to outputs from the SUT.
In moving closer toward the real world environment, network components that are geographically distributed will comprise the test bed, removing the collocation requirements needed for earlier testing.
To establish the system test and verification environment one must identify a system test bed where the entire set of entities can be assembled and connected in their near operational state. This can be a virtual environment made up of several Integration Laboratories or it can be a physical single site.
The ISSE fits into the above system test environment as a part of the integration lab. The ISSE may evolve into a component of a test bed as testing evolves, allowing the use of actual external systems rather than simulators.
The system distributed test bed concept extends the integration lab to support program objectives in future development spirals of the Interface Stimulation Service Bus.

Definitions
Automated Archive Service – Archives are analogous to logs. There is a probability that artifacts will be required for test evidence or debugging. This service automates the collection and organization of the data logged by the different interface stimulator components, whatever they may have collected. It wraps it all up in a neat package and may even submit it to a CM service interface.
Bus Orchestration Service – If there is a need to have behaviors synchronized at various interface stimulator components, this is the service that is responsible for this. This service may be or is very tightly coupled to the timing service. In an HLA it is similar to what the RTI is responsible for.
Component String – a concept where two or more atomic service components are engaged in a service contract to provide a complex service.
Composition – the creation of service chains, links to external data resources, or similar configuration of complex behavior models and simulations.
Environment Configuration Report Service – captures the test component environment attributes. Tool version, operating system, serial numbers of hardware. Supports re-execution of a test precisely as it was originally instantiated.
ISSE (Interface Stimulator Services Enterprise) – a loosely coupled, modular set of simulation/stimulation service components and a core set of management services.
Service Consumer Notification Service – This is the service that notifies the service consumer about alerts and could even provide the service output.
Simulator Component Composition Service – A service that employs an intelligent agent to assist a service consumer in composing another service. Services that publish their internal service process models can be customized to a specific consumer’s needs via this service mechanism.
Simulator Component Deployment Service – A service that might be employed to deploy atomic services to a test bed. Conversely, the infrastructure team may deploy the services and then this service is not required.
Stimulation Components – models, simulations, tools, and hardware that provide the stimulus for the external interfaces.
Stimulator Components – see Stimulation Components.

Specification of an Abstract Verification Method for Acceptance Testing

System Requirements Document or System Specification

Definitions:

System – (1) A collection of components organized to accomplish a specific function or set of functions. (IEEE 610.12) (2) Combination of interacting elements organized to achieve one or more stated purposes. NOTE 2: In practice, the interpretation of its meaning is frequently clarified by the use of an associative noun, e.g., aircraft system. (IEEE 15288)
System of Interest – System whose life cycle is under consideration in the context of this International Standard. (IEEE 15288)
Enabling System – System that supports a system-of-interest during its life cycle stages but does not necessarily contribute directly to its function during operation. (IEEE 15288)
Test System – An enabling system supporting test activities during the life cycle of the system of interest, while not being a part of the system of interest. (Extended from IEEE 15288)
Baseline – Specification or work product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures. (IEEE 15288)
Test – (1) An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. (2) To conduct an activity as in (1). (IEEE 610.12)
Acceptance Testing – Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (IEEE 610.12, IEEE 1012)
System Testing – Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. (IEEE 610.12)
Integration Testing – Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. (IEEE 610.12)
Component Testing – Testing of individual hardware or software components or groups of related components. (IEEE 610.12)
Test Case – (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. (IEEE 610.12, IEEE 829-1983) Prescriptive reference: A test case is a behavioral feature or behavior specifying tests. A test case specifies how a set of test components interact with an SUT to realize a test objective and return a verdict value. Test cases are owned by test contexts and therefore have access to all features of the test context (e.g., the SUT and test components of the composite structure). A test case always returns a verdict. (UTP)
Test Case Specification – A document that specifies the test inputs, execution conditions, and predicted results for an item to be tested. (IEEE 610.12)
Test Objective – An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation. (IEEE Std 1008-1987) Prescriptive reference: A dependency used to specify the objectives of a test case or test context. A test case or test context can have any number of objectives, and an objective can be realized by any number of test cases or test contexts. Descriptive reference: A test objective is a reason or purpose for designing and executing a test case [ISTQB]. The underlying Dependency points from a test case or test context to anything that may represent such a reason or purpose. This includes (but is not restricted to) use cases, comments, or even elements from different profiles, like requirements from [SysML]. (UTP)
Test Requirement – See Test Condition. (UTP [ISTQB])
Test Condition – An item or event of a component or system that could be verified by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element. (UTP [ISTQB])
Acceptance Criteria – The criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. (IEEE 610.12)
Test Matrix – Features to be tested (Level Test Plan (LTP) Section 2.3), features not to be tested (LTP Section 2.4), and approaches (LTP Section 2.5) are commonly combined in a table called a Test Matrix. It contains a unique identifier for each requirement for the test (e.g., system and/or software requirements, design, or code), an indication of the source of the requirement (e.g., a paragraph number in the source document), a summary of the requirement, and an identification of one or more generic method(s) of test. (IEEE 829-2008)
Test Traceability Matrix – Provides a list of the requirements (software and/or system; may be a table or a database) that are being exercised by this level of test and shows the corresponding test cases or procedures. The requirements may be software product or software-based system functions or nonfunctional requirements for the higher levels of test, or design or coding standards for the lower levels of test. This matrix may be part of a larger Requirements Traceability Matrix (RTM) referenced by this plan that includes requirements for all levels of test and traces to multiple levels of life cycle documentation products. It may include both forward and backward tracing. (IEEE 829-2008)
Test Context – Prescriptive reference: A test context acts as a grouping mechanism for a set of test cases. The composite structure of a test context is referred to as the test configuration. The classifier behavior of a test context may be used for test control. Descriptive reference: A test context is just a top-level test case. (UTP)
Stimuli – Test data sent to the SUT in order to control it and to make assessments about the SUT when receiving the SUT reactions to these stimuli. (UTP)
Observation – Test data reflecting the reactions from the SUT and used to assess the SUT reactions, which are typically the result of a stimulus sent to the SUT. (UTP)
SUT – Prescriptive reference: Stereotype applied to one or more properties of a classifier to specify that they constitute the system under test. The features and behavior of the SUT are given entirely by the type of the property to which the stereotype is applied. Descriptive reference: Refers to a system, subsystem, or component which is being tested. An SUT can consist of several objects. The SUT is stimulated via its public interface operations and signals by the test components. No internals of an SUT are known or accessible during test case execution, due to its black-box nature. (UTP)

The IT&E domain philosophy employs “Black-Box” test methods at the higher levels of test, so it is highly dependent on behavior specifications.  Integration philosophy is highly dependent on “thread” knowledge. The IT&E domain desires to drive a defined need for system test requirements in the system requirements document/system subsystem specification, something sorely lacking today.

If consistent with MIL-STD-961E and MIL-HDBK-520A, the System Requirements Document (SRD) or System/Subsystem Specification (SSS) provides traceability from each of its requirements to a system element which will «satisfy» the requirement and a system feature element which will «verify» the requirement.  “The baseline management system should allow for traceability from the lowest level component all the way back to the user capability document or other source document from which it was derived” (Defense Acquisition Guide).

IEEE 829-2008 requires the Test System to produce a data artifact identifying which system features are tested and which are not; this is the Test Matrix.

IEEE 829-2008 requires the Test System to produce a data artifact tracing a system feature requiring test to the test case performing the verification and the requirement verified by the test case; this is the Test Traceability Matrix.  These matrices are required at each level of testing.

To satisfy the Test System’s Test Architecture data needs for the System Acceptance Test Event architecture component, the SRD/SSS must provide a data artifact containing its requirements, the system feature which satisfies each requirement, and the test case which verifies it, when the feature’s implementation requires verification at the System Acceptance Test Event.  This artifact may be extracted from the content of the verification matrices identified by MIL-HDBK-520A.

A system model element with a stereotype of «testCase» provides the verification method (i.e., inspection, analysis, demonstration, test), a behavior specification for the method, the acceptance criteria for the outcome of the behavior, and the execution conditions of the behavior (e.g., pre-conditions, post-conditions, and conditions of performance).
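
A rough sketch of the information such a «testCase»-stereotyped element carries; the field and enum names are illustrative, not normative SysML/UTP names.

```python
# Sketch of the content of a «testCase»-stereotyped element, per the paragraph
# above; field and enum names are assumptions for illustration.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class VerificationMethod(Enum):
    INSPECTION = "inspection"
    ANALYSIS = "analysis"
    DEMONSTRATION = "demonstration"
    TEST = "test"


@dataclass
class TestCaseElement:
    verified_requirement: str
    method: VerificationMethod
    behavior_specification: str                          # how the method is carried out, as a behavior
    acceptance_criteria: List[str] = field(default_factory=list)
    preconditions: List[str] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)
    conditions_of_performance: List[str] = field(default_factory=list)
```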

Each requirement in an SRD/SSS should have one (preferably only one) test case responsible for producing the evidence that the system’s implementation satisfies the requirement.  The SysML profile supports documenting this dependency between a requirement and its test case.  Compound requirements may require multiple test cases, but this is a signal that the requirement should be decomposed into multiple atomic requirements, a best practice.

A specification for a test case includes inputs, execution conditions, and predicted results.  A test case specification has more in common with a use case description than a sequence of actions describing a use case scenario.  A test procedure is a sequence of steps realizing the test case’s behavior.  A test procedure is concrete and contains concrete test data.  A test case specification does not provide a detailed test procedure specification, rather just the requirements for the test case (e.g., What is to be tested).

The test case specifications in an SRD/SSS set the acquirer stakeholder’s expectation for the system’s acceptance test event. The SRD/SSS test case specifications are required by the Integration and Test System Architecture to construct the test matrix and test traceability matrix (IEEE 829).

The test case specification reflects the test requirements/test conditions necessary to produce the evidence that the system’s implementation satisfies the requirements.  Test case specifications have the same quality standards required of them as system specifications.  They must be complete, consistent, correct, feasible, necessary, prioritized, unambiguous, and traceable.

A test requirement/test condition statement is similar in content to a performance requirement, inasmuch as the requisite conditions are specified for achieving the stated performance or test observation/expected result.

Performance Requirement example:

“The Lawnmower System shall operate with [Hourly Mowing Capacity] of at least 1 level ground acre per hour, at [Max Elevation] up to 5,000 feet above sea level, and [Max Ambient Temperature] of up to 85 degrees F., at up to 50% [Max Relative Humidity], for [Foliage Cutting Capacity] of Acme American Standard one week Lawn Grass.”

The Lawnmower System shall

•[Foliage Cutting Capability]
–mow
•[Foliage Cutting Capability Performance]
–a minimum of 1 acre per hour
•[Input & Performance Constraint]
–of Acme American Standard Lawn Grass one week (Input Object State) on level ground (environment condition)
•[Performance Constraint]
–at [Max Elevation] up to 5,000 feet above sea level,
–and [Max Ambient Temperature] at up to 85 degrees F.,
–and [Max Relative Humidity] at up to 50% relative humidity

•Mow is a Behavioral Feature of the Lawnmower.
–The Mow() operation returns “cut grass”

[Figure: LawnmowerElements]

“The Lawnmower System shall operate with [Fuel Economy] of at least 1 hour / gallon at [Min Elevation] of 0 feet ASL, at [Max Ambient Temperature] 85 degrees F., 50% [Max Relative Humidity], for Acme American Standard one week Lawn Grass.”

From: Requirements Statements Are Transfer Functions: An Insight from Model-Based Systems Engineering, William D. Schindel, Copyright © 2005 by William D. Schindel.

These two stated performance requirements have a relationship with each other.  The second requirement constrains the acceptance criterion for the first requirement.  Not only must the Hourly Mowing Capacity be achieved, but it must be achieved using no more than 1 gallon of fuel.  The constraint must be normalized for altitude, as the two requirements differ in this pre-condition regard.  It is the Test System’s Test Architecture responsibility to group these requirements into an efficient test context, not the SRD/SSS.  The SRD/SSS should only state the test requirement for the test case which verifies the requirement’s satisfaction.

The requirements’ verification method is Analysis.  The problems are physics based.  Altitude and ambient temperature directly impact air density.  Air density negatively impacts the lawnmower’s fuel efficiency and the internal combustion engine’s efficiency in transforming gasoline into mechanical force.  These environmental conditions would be difficult and costly for a test facility to reproduce.

Blocks in the test (a code sketch of the resulting test case follows the outline below):

  1. Lawnmower (SUT), Property – Hourly Mowing Capacity
  2. Lawnmower user
  3. Acme American Standard Lawn Grass, initial state [1 week’s growth], end state [Mowed Height Standard], Property – dimension (1 acre)
  4. Terrain, Properties – level, altitude; host for the Acme American Standard Lawn Grass test coupon
  5. Atmosphere (air density), Properties – temperature, relative humidity, barometric pressure

SUT = Feature(Hourly Mowing Capacity)

•Pre-Conditions:
–Temperature, air density, level terrain with Acme American Standard Lawn Grass (1 acre), state = [1 week’s growth]
–Lawnmower state = [idle], Property – operating temp = True
•Input:
–Control Flow = user signal to transition lawnmower state = [operate]
–Object Flow = Acme American Standard Lawn Grass [Un-mown]
–Start 1-hour elapsed timer
•Output:
–Object Flow = Acme American Standard Lawn Grass [Mown]
•Post-Conditions:
–Lawnmower user signal to transition lawnmower state = [stop]
–1 acre of Acme American Standard Lawn Grass, state = [Mown]
–Fuel consumed <= 1 gallon (normalized for test environment at runtime)
•Observation Assertions:
–1-hour elapsed timer not = zero
–Acme American Standard Lawn Grass state = [Mowed Height Standard]
•Verdict(Pass)
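
A sketch of this Hourly Mowing Capacity test case, with the lawnmower and the environment replaced by simple stand-in objects; the fuel consumption model and normalization factor are placeholders, not values derived from the requirement.

```python
# Sketch of the test case outlined above. The stand-in classes and the numeric
# consumption model are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class GrassCoupon:
    acres: float = 1.0
    state: str = "1 week's growth"


@dataclass
class LawnmowerStandIn:
    state: str = "idle"
    fuel_used_gallons: float = 0.0

    def mow(self, coupon: GrassCoupon, hours: float) -> GrassCoupon:
        """Stand-in behavior: transition through operate/stop and mow the coupon."""
        self.state = "operate"
        self.fuel_used_gallons += 0.9 * hours   # placeholder consumption model (gallons per hour)
        coupon.state = "Mowed Height Standard"
        self.state = "stop"
        return coupon


def hourly_mowing_capacity_test(normalization_factor: float = 1.0) -> str:
    """Return a verdict for the test case outlined above."""
    coupon = GrassCoupon()              # pre-condition: 1 acre, 1 week's growth
    mower = LawnmowerStandIn()          # pre-condition: idle, at operating temperature
    elapsed_hours = 1.0                 # input: start the 1-hour elapsed timer
    mowed = mower.mow(coupon, elapsed_hours)

    post_conditions_hold = (
        mowed.state == "Mowed Height Standard"                          # observation assertion
        and mowed.acres >= 1.0                                          # 1 acre mown
        and mower.fuel_used_gallons * normalization_factor <= 1.0       # fuel constraint, normalized
    )
    return "pass" if post_conditions_hold and elapsed_hours > 0 else "fail"


print(hourly_mowing_capacity_test())    # expected: pass
```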

The terrain and the Acme American Standard Lawn Grass need to be realized in the Test System’s architecture as test components.  Their properties should be controlled by the analysis rather than by the explicit statements of the SRD/SSS requirement text.  Analysis verifies the SRD/SSS requirement explicitly; the test case providing a measure of Hourly Mowing Capacity serves to confirm the analysis.  It implicitly verifies that the requirement is satisfied rather than providing an explicit verification of that satisfaction.

Given that the environmental parameters are difficult to control on a large scale, the most likely approach the test architect will take to test case design is to measure environmental conditions and adjust the test case’s acceptance criteria to account for the ambient conditions at test time, rather than to control the environment.  The test coupon may also be adjusted in size based on the criticality of the performance requirement and the uncertainties in measuring Hourly Mowing Capacity and its confidence interval.  In a risk-driven test architecture, as the criticality of the requirement increases, so should the investment in verification.  If the confidence interval for this requirement is low, then a very terse test case supporting the formal analysis may satisfy the acquirer’s acceptance criterion.

Additional Background:

From the ISTQB Glossary of Testing Terms:

test requirement: See test condition.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

In INCOSE’s latest Systems Engineering Handbook, v3.2.2, the term “test requirement” no longer appears.  The term was employed in version 3.1.5 from 2007 but has since been removed.

Excerpt from Version 3.1.5:

Each test requirement should be verifiable by a single test. A requirement requiring multiple tests to verify should be broken into multiple requirements. There is no problem with one test verifying multiple requirements; however, it indicates a potential for consolidating requirements. When the system hierarchy is properly designed, each level of specification has a corresponding level of test during the test phase.  If element specifications are required to appropriately specify the system, element verification should be performed.

And

establish a basis for test planning, system-level test requirements, and any requirements for environmental simulators

The 2010 revision that created version 3.2.2 harmonized the handbook with ISO/IEC 15288:2008, with the intent of elaborating the processes and activities needed to execute the processes of ISO/IEC 15288:2008.

A. Wayne Wymore’s treatise “Model-Based Systems Engineering” includes “System Test Requirements” as a key element of his mathematical theory of system design.

SDR (system design requirements) = (IOR, TYR, PR, CR, TR, STR (system test requirements))

A concept example:

There is a system requirement that is a “Key Performance Parameter” (KPP).  The requirement must be satisfied or the system is not acceptable.  There is a range of environmental execution conditions under which the KPP must be achieved.  There is only a single event which triggers the functional behavior of the system (e.g., a thread) and only a single standard (performance standard) to evaluate the system output (e.g., acceptance criteria, expected result).  Of the range of input conditions, there are two boundary condition sets that pose a performance challenge to the functional behavior’s design.  Since this performance requirement and the conditions under which the performance must be achieved are infeasible to “Test”, the verification of the requirement is by the verification method “Analysis”; all stakeholders accept this approach.

During “System Test”, an informal testing activity, test cases will execute that observe the test requirements/conditions identified by the challenging boundary condition sets.  The design of the test cases and their specifications will comply with the test requirements/conditions derived from the analysis.  The analysis predicts system behavior, and the test requirements/conditions will drive the design of test cases where the predicted system behavior is the test oracle (e.g., the acceptance criteria).

In this example the test requirement(s) or test condition(s) drive test design.  The Test Architect defines the Test Requirement / Test Condition specification during the planning phase.  The original test requirement called for a verification method of “Analysis”.  The verification method of “Analysis” was not fully satisfactory to the Customer.  To build Customer intimacy and refine the test requirements, test cases were added to the System Test level to address the concern and build confidence in the performance analysis.  These are informal test cases designed to validate the performance analysis within the constraints imposed by test implementation limitations.

Return on Investment

Does modeling provide an ROI and how?

I start from the premise that the quality of the outcome of technical processes related to the Test Domain (e.g., integration, verification as described by the technical processes of ISO/IEC/IEEE 15288) is dependent on the quality of the skills of the resource performing the role responsible for the process.

My interest is in the “cognitive processes”, as described by “Bloom’s Taxonomy”, required by a “Role” in the execution of a “responsibility”.  “Roles” produce “Outcomes” by performing “Activities”.  “Activities” require “cognitive processes” to perform.  These “Activities” are the “responsibility” of a “Role”.  The quality of an “Outcome” is dependent on “cognitive process” proficiency of the individual performing a “Role”.

[Figure: ProcessConcepts]

So the most important elements of the taxonomy are: role, activity, and the cognitive process(es) required to support a role activity.

outcome × cognitive_process_quality = outcome_quality

The premise is that models and modeling languages are cognitive process tools.  If proven true, this establishes that there is an enabling relationship between cognitive processes and models and modeling languages.  They enable and enhance, thereby creating value.

By establishing an understanding of this fundamental relationship, the value of modeling becomes apparent.  It is likely a better indicator of modeling benefit than the other metrics (e.g., cost, schedule, quality measures) being requested by much of the management infrastructure.

Models and modeling languages are tools that directly influence outcomes of cognitive processes.  By enhancing core cognitive processes, program performance is improved.  The relationship between modeling and program performance is not direct; it is a consequence of improving cognitive process quality.

Hopefully, this brings the focus back on engineering fundamentals and why we cannot ignore them.  “Fools with tools are still fools”.  Modeling is not a silver bullet, it is a multiplier.  Zero multiplied by any very large number is still zero.

The individual fulfilling a role must possess the essential cognitive process capabilities demanded by the role’s responsibilities.  Modeling enhances cognitive processes; it cannot proxy for them.

An individual’s

Cognitive_Process_Proficiency × Modeling_Proficiency

is a “Leading Indicator” and we can use this understanding to forecast program risk.

Defining a Requirement’s Acceptance Criterion

Premise: A requirement expresses a precise feature set for an entity.  Typically, features are framed as desired behavior (functional requirements).  Features may be expressed in terms of functionality, inputs, outputs, actions, sequences, qualities, and constraints.  A collection of requirement statements that defines an entity’s technical features is typically identified as a specification.  A stakeholder requirements specification should define stakeholder needs and an intended use of the entity’s features by the stakeholder in an environment, the system’s context.  This intended use is the description of how the stakeholder intends to satisfy their need by employing the entity under development to achieve a goal.  This information is foundational to developing requirement acceptance criteria.

Discussion: Frequently, stakeholder requirement statements lack the precision to unambiguously produce objective evidence through test case execution that a feature complies with its specification.  Requirements are frequently expressed implicitly rather than explicitly, or are poorly constructed.  Implicit characteristics of a feature and poor requirement statement construction frequently result in conflict during the entity’s acceptance test phase as a result of emergent requirement statement interpretation.  It is not at all unusual to have stakeholders re-interpret requirement statements to obtain new features they did not originally define.  One approach to clarifying imprecise requirement statements is to develop explicit requirement statements from the imprecise statements and offer the improved requirements as a replacement.  An alternate approach is to author acceptance criteria for implicit or poorly constructed requirements to explicitly define the criteria that support the assertion that the entity’s feature behavior satisfies its specification.

Ideally, requirements should focus on the problems requiring solutions in the stakeholder’s domain rather than focusing on the system solution.  Stakeholders must be accountable for their requirement statements and agree to an acceptance criterion for each and every stakeholder requirement prior to the commencement of stakeholder requirement analysis.  Acceptance criteria form the basis for the stakeholder’s acceptance of the developed entity.  This acceptance is the transfer of an entity from development to stakeholder use.  Mutual agreement on precise acceptance criteria precludes discord late in the program during the entity’s acceptance test phase, where the stakeholder’s satisfaction is critically important.

Assertion: Well written acceptance criteria will validate that a requirement statement is verifiable.  Employ explicit acceptance criteria early in the entity’s development period and obtain stakeholder agreement with them.  Agreement on acceptance criteria should be established at the stakeholder requirements review phase of a project.

Method: An approach to developing acceptance criteria is to ask questions from the user/stakeholder viewpoint.  These criteria will answer the stakeholder question “How do we know the entity meets its specification?”

What measured or observed criteria will convince stakeholders that the system implementation satisfies their operational need?  Is there a specific operational situation where an entity’s features will provide a user a capability which accomplishes a user operational objective or goal?  Is there an output provided by an entity feature that satisfies a stakeholder criterion and that can be measured and subjected to a stakeholder standard?

Answering these questions provides a foundation for test case development.  The first answer forms the operational domain context for a test case.  A domain description defines the static structure of the populations of an environment and how they interrelate.  In the Department of Defense Architecture Framework V2 specification, the populations of entities in an environment are referred to as “Performers”.  Performer entities possess features that enable interaction with our stakeholder’s entity to accomplish a stakeholder objective and thereby satisfy a stakeholder need.  How these entities are connected and how they exchange objects (e.g., information items, or flows) helps to define the acceptance criteria.  What performers in the environment are interacting with the system and how are they interacting?  The second answer provides both a test case scenario and the test oracle that an arbiter uses to assert whether the test case has “passed” or “failed”.  The test case scenario description defines the dynamic events and interactions of the entities in an environment.  Acceptance criteria define the successful outcome of a test case.  Each requirement should have an acceptance criterion statement.  Criteria must be measurable, either qualitatively or quantitatively.  Where the measure is qualitative, it is imperative to reach agreement on defining this subjective evaluation.

Further Refinement: An acceptance criterion such as “successful” is qualitative.  In this example there is a need to quantify “success”.  We can measure “profitability” by assessing the “return on investment” and stating that if each dollar invested returns a 10% profit (the standard used to assert a verdict), then success has been achieved.  Success might also be measured by the quantity of process cycles performed in a period of time.
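
As a sketch of how such a quantified criterion becomes a verdict, the check below applies the 10% return-on-investment threshold from the example; the profit and investment figures are made up.

```python
# Sketch of turning the qualitative criterion "successful" into a quantitative
# verdict; the 10% ROI threshold comes from the example above, the figures are illustrative.
def roi(profit: float, investment: float) -> float:
    return profit / investment


def success_verdict(profit: float, investment: float, threshold: float = 0.10) -> str:
    """Assert 'pass' when each dollar invested returns at least the threshold profit."""
    return "pass" if roi(profit, investment) >= threshold else "fail"


print(success_verdict(profit=12_000.0, investment=100_000.0))   # 12% ROI -> pass
```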

Yes, these quantitative criteria are measures applying to non-functional requirements.  However, the point is that by measuring the performance of a function, which may be a derived performance measure, there is implicit verification of the functional requirement statement.

The acceptance criteria define the domain the entity is employed in.  They describe how the entity’s features are measured or assessed.  Acceptance criteria may express constraints to be imposed on the entity when its features are in use.  Acceptance criteria form the test case development framework.  The acceptance criteria statements explicitly communicate the interpretation of stakeholder requirements.

Modeling, Standards and the Test Engineer

Alignment to ISO/IEC/IEEE standards applicable to the test domain has been a key principle of my work.  This alignment does not “add” tasks to the “as is” state of the test engineering domain, but it does restructure domain artifacts (i.e., test plan, test specification, test design specification, test case specification, test procedure specification) and the sequence of some of the tasks (early engagement in setting stakeholder expectation and concurrence for system acceptance test), as well as re-instantiating frequently overlooked tasks (describing the verification method beyond inspection, analysis, demonstration, and test).  Focus has been brought to a key task within the test sub-process area supporting verification of requirements at the acceptance level of test.  An exemplar under development typifies production and acceptance by all stakeholders of a requirement’s acceptance criteria, beginning in the proposal phase with the key requirements of the proposed system.  This methodology is not a new concept, but rather a revitalization of a time-proven practice.  In A. Wayne Wymore’s seminal work “Model-Based Systems Engineering”, Mr. Wymore emphasizes the importance of the ‘System Test Requirement’ as an element of the ‘System Design Requirement’ in his system modeling formalism.  My emphasis is also consistent with the guidance for Section 4 Verifications of a System Requirements Document provided by MIL-HDBK-520A and MIL-STD-961E.  The recommended practice goes beyond simply applying a verification method kind to a requirement in the Verification Cross Reference Matrix (VCRM)[i].  It requires the creation of a baseline concept for the test requirement / method, principally by defining an abstract set of acceptance criteria in the form of a test objective specification (e.g., a test requirement).  This forms the basis for a test design specification which is ultimately realized by a concrete test case specification.

[Figure: testReq]

At the system’s SRR each requirement is associated with a concept for its verification test case.  A test case is ‘named’ and ‘described’ by its test objective.  This strikes a test specification baseline for the test system at SRR.

[Figure: TestModelAtSRR]

The maturation stage of the test specification is at SFR.  The elements required to implement a test are described for the test context owning the test case.  This forms the basis for the test architecture associated with the requirement.

[Figure: TestModelAtSFR]

Another principle motivating this work is to drive the test engineering domain’s engagement in product development much earlier in the lifecycle of the system of interest than has been typical practice, in my experience.  An example of the concept is to treat work product inspections as a “Test Case” and incorporate the test case execution in the test body of evidence.  This is a concept currently in use in the European software development community.  The intent is to dramatically influence and thereby reduce the accumulation of technical debt in all of its forms.

Early test engineering efforts of this nature are not typical, in my personal experience, but my research and experience suggest they hold promise for a substantive ROI. Setting the test system’s technical baseline and the test architecture it is responsible for early in the project aids in setting expectations with stakeholders.  Early setting of the baseline supports managing changes in scope and offers an opportunity to incrementally refine expectations, thereby enhancing the probability of a satisfied customer stakeholder at acceptance testing.


[i] The term VCRM is inconsistent with the glossary defined by ISO/IEC 29119

Test Architecture Philosophy

Test (noun)

Examination

a series of questions, problems, or practical tasks to gauge somebody’s knowledge, ability, or experience

Basis for evaluation

a basis for evaluating or judging something or somebody

Trial run-through a process

a trial run-through of a process or on equipment to find out if it works

Procedure to detect presence of something

a procedure to ascertain the presence of or the properties of a substance

Architecture (noun)

Building design

the art and science of designing and constructing buildings

Building style

a style or fashion of building, especially one that is typical of a period of history or of a particular place

Structure of computer system

the design, structure, and behavior of a computer system, microprocessor, or system program, including the characteristics of individual components and how they interact

Philosophy (noun)

Examination of basic concepts

the branch of knowledge or academic study devoted to the systematic examination of basic concepts such as truth, existence, reality, causality, and freedom

School of thought

a particular system of thought or doctrine

Guiding or underlying principles

a set of basic principles or concepts underlying a particular sphere of knowledge

Set of beliefs or aims

a precept, or set of precepts, beliefs, principles, or aims, underlying somebody’s practice or conduct

The use of natural language to convey thoughts is fraught with semantic risk.  Natural language is essential for humans, yet to mitigate the semantic risk a rigorous grammar is required within a community of interest.  Considerable research documents the monumental undertaking it is to instantiate a universal lexicon; however, there is strong evidence that a rigorous lexicon within a community is possible and of significant value.  Toward this end I hope to convey a distillation of my research over the last few years.

“The Four Horsemen”

Boundaries are explicit

Services are Autonomous

Services share Schema and Contract, not Class

Compatibility is based upon Policy

Any entity, without regard to its affiliation, has inherently greater value if it is employable in many places in many different ways.  To realize this objective an entity needs to possess the attributes embodied in “The Four Horsemen.”  The entity is whole and self-contained, yet it is allowed to interact with other entities via its interfaces.  The interface has a rigorous specification of how it interacts with other interfaces.  The entity exists within a community where all entities observe the same principles of realization.  An entity possessing these attributes is inherently more flexible when choreographing or orchestrating with other entities to realize a more complex entity.  We could assemble a complex entity from purpose-specific component entities into a monolithic structure.  It might be a thing of great beauty, but it is purpose built, and its maintenance and flexibility in the face of change will come at great cost.

For this reason I propose a Test Architecture philosophy that embraces these tenets.  It requires strict adherence to a framework.  Entities have explicit boundaries and they are autonomous.  Interfaces adhere to strict standards.  Policy governance ensures that entities are interoperable and re-usable.

This is not a trivial task.  It requires considerable architectural design effort in the framework, but the reward is downstream cost reduction.

Policy One – Define your problem then look for a solution in the Standards space.  Start with the standards closest to the problem space of the stakeholders.  Next employ standards with wide adoption in the problem space.

Employing standards from which we extract patterns to solve our problem and convey our solution increases the probability that our solution is interoperable with products employing the same patterns.  Standards evoke patterns.  Using patterns in the solution space fosters the emergence of intuitive cognitive recognition of the solution.  The brain of the human animal thrives on recognizing patterns; it is superlative in that regard.

Policy Two – Define a lexicon.  The lexicon is policy.  Embrace the lexicon.  Amend the lexicon as the world changes and new realities and understandings emerge.  DoDAF is evolving because a broad range of stakeholders realized early that the original specification had serious shortcomings and limitations.  It underwent a rapid transition to version 1.5 and then 2.  Version 2 has had to overcome significant obstacles created by the entities involved in the evolution.  A lack of inclusive collaboration and compromise is likely to blame.  There appears as well to have been some practice of dogma by some entities; they no longer participate in the evolution of the standard.  The stakeholders of the UPDM (UML Profile for DoDAF and MODAF) appear likely to adopt the new proposed standard.  We might be wise to draw our lexicon from the UML, tailored to the DoDAF influence.

Policy Three – Define a framework for the Test Architecture.  Enforce Policy One and Two on the framework.