Category Archives: Thoughts on Modeling

Thought on the development stage of a Test Case

From my perspective of the OMG’s UML Testing Profile

Conceptual Design Stage

The System Architecture description model and its sub-systems’ architecture description models own the conceptual test case design of each requirement that has a SysML «Verify» dependency to a TestCase model element (per version 1.4 of the language standard).  A Test Case in the UML Testing Profile (UTP) extends the UML metaclasses Behavior and Operation, and a Test Case ALWAYS returns a Verdict.
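To make the “always returns a Verdict” contract concrete, here is a minimal Python sketch; the names and structure are mine, not normative UTP:

```python
from enum import Enum
from typing import Callable

class Verdict(Enum):
    """The UTP verdict kinds a test case may return."""
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"
    ERROR = "error"

def run_test_case(behavior: Callable[[], bool]) -> Verdict:
    """A test case is a behavior; executing it always yields a Verdict."""
    try:
        return Verdict.PASS if behavior() else Verdict.FAIL
    except Exception:
        # A failure of the test system itself, not of the SUT.
        return Verdict.ERROR
```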

[Figure: TestObjectiveSpecificationAtSRRb]

Each requirement subject to verification should have such a dependency

The TestCase is named, and it has a dependency relationship to its TestObjectiveSpecification model element, per UTP v1.2

A TestRequirement model element may be used to refine the TestObjectiveSpecification model element and act as a bridge between the System of Interest model and the Test System model

[Figure: TestReqCon]

Note: The UTP 2 specification addresses maturing concepts in these elements of a test specification

The TestObjectiveSpecification expresses the conceptual design of the TestCase in a natural language statement.  This statement identifies the most important components of the TestCase from the perspective of the Accepting Stakeholder’s concerns. The components are: the pre- and invariant conditions of the system under test (SUT) environment; the SUT and its interacting Actors or Systems; their input(s) to the SUT; and the expected behavior of the SUT in response to those inputs.

This collection of components expresses the ‘acceptance criterion’ of the Accepting Stakeholder.  A TestCase satisfying its TestObjectiveSpecification will produce objective evidence verifying, to the satisfaction of the Accepting Stakeholder, that the system requirement has been satisfied by the responsible system feature.

The goal of the Test Objective is to articulate an abstract, yet understandable and unambiguous, description of the purpose of the TestCase.  The TestObjectiveSpecification does not state ‘how’ to implement the TestCase; the conceptual design treats the TestCase as a “black box”.
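As an illustration, the components named above could be captured in a structured form before being rendered as a natural language statement. This Python sketch and its example values (a hypothetical flight control SUT) are mine, not part of the UTP:

```python
from dataclasses import dataclass

@dataclass
class TestObjectiveSpecification:
    """Hypothetical structure for the components of a test objective."""
    name: str
    preconditions: list[str]   # pre- and invariant conditions of the SUT environment
    sut: str                   # the system under test, treated as a black box
    actors: list[str]          # interacting Actors or Systems
    inputs: list[str]          # their input(s) to the SUT
    expected_behavior: str     # the expected SUT response: the acceptance criterion

objective = TestObjectiveSpecification(
    name="VerifyAltitudeHold",
    preconditions=["autopilot engaged", "aircraft in steady level flight"],
    sut="Flight Control System",
    actors=["Pilot"],
    inputs=["altitude-hold command"],
    expected_behavior="aircraft maintains the commanded altitude within tolerance",
)
```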

The Test System Architecture imports the conceptual design of a TestCase as the foundation for developing the Test Design Specification for that TestCase

[Figure: TestObjectiveSpecificationAtSRRa]

A conceptual test case design may be realized by the Test System as a collection of test cases

[Figure: TestCaseConcepta]

Logical Design Stage

The Test Specification of the System Architecture has the responsibility for the Test Design Specification of each conceptual test case imported from the system architecture

A logical design identifies the components of a test case’s structure and behavior, and their relationships.

The logical design activity is an iterative activity ending when a specification can be realized

[Figure: TestObjectiveSpecificationAtSFRa]

Test architecture and high-level test design are performed

Engineering tasks include: requirements analysis, evaluation, allocation, and component specification

The role each component plays in its test case, and its test objective responsibilities, are defined and traced

A logical design should not specify concrete values for test data properties.  Instead, specify concepts for test data properties (e.g., in bounds, at boundary, out of bounds, etc.)

[Figure: TestReqLog]

Logical properties can be realized as concrete data values through transformation rules
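For example, a transformation rule can map a logical property onto a concrete value once a parameter’s constraint is known. A minimal sketch, with hypothetical names and ranges:

```python
from enum import Enum, auto

class LogicalValue(Enum):
    """Logical test data properties from the logical design."""
    IN_BOUNDS = auto()
    AT_BOUNDARY = auto()
    OUT_OF_BOUNDS = auto()

def realize(prop: LogicalValue, low: float, high: float) -> float:
    """Transformation rule: map a logical property to a concrete value
    for a parameter constrained to the interval [low, high]."""
    rules = {
        LogicalValue.IN_BOUNDS: (low + high) / 2.0,
        LogicalValue.AT_BOUNDARY: high,
        LogicalValue.OUT_OF_BOUNDS: high + 1.0,
    }
    return rules[prop]

# e.g., a speed parameter constrained to 0..250:
concrete_speed = realize(LogicalValue.AT_BOUNDARY, 0.0, 250.0)  # -> 250.0
```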

Allocate a TestCase to its owning TestContext

A TestContext owns 1 or more TestCases having a common test configuration and a common composited TestObjectiveSpecification

Test component / SUT connections are detailed by the Test Configuration of the TestContext

Typically an internal block diagram when using SysML

Document the execution sequence/schedule of a TestContext’s TestCases
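The ownership, shared configuration, and execution schedule can be pictured with a small sketch; the API below is hypothetical, not the UTP metamodel:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"

@dataclass
class TestContext:
    """Owns test cases sharing one test configuration (hypothetical API)."""
    name: str
    configuration: dict[str, str]  # the common test configuration
    test_cases: list[Callable[[dict[str, str]], Verdict]] = field(default_factory=list)

    def execute(self) -> list[Verdict]:
        """Run the owned test cases in their documented schedule (list order)."""
        return [tc(self.configuration) for tc in self.test_cases]

ctx = TestContext(
    name="AltitudeHoldContext",
    configuration={"sut_endpoint": "sim://fcs", "tolerance_ft": "50"},
    test_cases=[lambda cfg: Verdict.PASS],  # placeholder behavior
)
verdicts = ctx.execute()
```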

Specify requirements for: the Test Environment, Test Components, test tools, test data, etc.

The Test Design Specification is fully described, and the Test Case Specification can be realized

Concrete Design Stage

The Test Specification of the System Architecture has the responsibility for the Test Case Specification of each test case specified by the Test Design Specification

Define test data pools, partitions and selectors

Detailed design and specification of test data values
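A sketch of how pools, partitions, and selectors might fit together once concrete values are specified; the API and the values are hypothetical:

```python
import random
from dataclasses import dataclass

@dataclass
class DataPartition:
    """A named partition holding concrete test data values."""
    name: str
    values: list[float]

@dataclass
class DataPool:
    """A pool of partitions; a selector draws values from a partition."""
    partitions: dict[str, DataPartition]

    def select(self, partition_name: str) -> float:
        """A simple selector: draw one value from the named partition."""
        return random.choice(self.partitions[partition_name].values)

speed_pool = DataPool(partitions={
    "in_bounds": DataPartition("in_bounds", [50.0, 125.0, 200.0]),
    "at_boundary": DataPartition("at_boundary", [0.0, 250.0]),
})
value = speed_pool.select("at_boundary")
```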

Detailed design and specification of the test environment

The physical design of a test case defines component deployment in the test environment

Identify constraints / limitations of the test environment, data flows, or execution infrastructure

The Test Case Specification is complete when an implementation can be realized

Return on Investment

Does modeling provide an ROI, and if so, how?

I start from the premise that the quality of the outcome of technical processes related to the Test Domain (e.g., integration, verification as described by the technical processes of ISO/IEC/IEEE 15288) is dependent on the quality of the skills of the resource performing the role responsible for the process.

My interest is in the “cognitive processes”, as described by “Bloom’s Taxonomy”, required by a “Role” in the execution of a “responsibility”.  “Roles” produce “Outcomes” by performing “Activities”.  “Activities” require “cognitive processes” to perform.  These “Activities” are the “responsibility” of a “Role”.  The quality of an “Outcome” is dependent on “cognitive process” proficiency of the individual performing a “Role”.

[Figure: ProcessConcepts]

So the most important elements of the taxonomy are: the role, the activity, and the cognitive process(es) required to support a role’s activity

outcome × cognitive_process_quality = outcome_quality

The premise is that models and modeling languages are cognitive process tools.  If proven true, this establishes an enabling relationship between cognitive processes and models and modeling languages: they enable and enhance those processes, thereby creating value.

Establishing an understanding of this fundamental relationship makes the value of modeling apparent.  It is likely a better indicator of modeling’s benefits than the metrics (e.g., cost, schedule, quality measures) requested by much of the management infrastructure.

Models and modeling languages are tools that directly influence the outcomes of cognitive processes.  Enhancing core cognitive processes improves program performance.  The relationship between modeling and program performance is not direct; it is a consequence of improving cognitive process quality.

Hopefully, this brings the focus back to engineering fundamentals and why we cannot ignore them.  “Fools with tools are still fools.”  Modeling is not a silver bullet; it is a multiplier.  Zero multiplied by any very large number is still zero.

The individual fulfilling a role must possess the essential cognitive process capabilities demanded by the role’s responsibilities.  Modeling enhances cognitive processes; it cannot proxy for them.

An individual’s Cognitive_Process_Proficiency × Modeling_Proficiency is a “Leading Indicator”, and we can use this understanding to forecast program risk.
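A compact restatement of the two informal formulas above, in notation of my own choosing:

```latex
\[
  \mathit{Outcome}_{\mathit{quality}}
  \;\propto\;
  \mathit{CognitiveProcess}_{\mathit{proficiency}}
  \times
  \mathit{Modeling}_{\mathit{proficiency}}
\]
% If CognitiveProcess_proficiency is zero, the product is zero no matter
% how large Modeling_proficiency is: modeling multiplies, it does not substitute.
```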

Modeling, Standards and the Test Engineer

Alignment to the ISO/IEC/IEEE standards applicable to the test domain has been a key principle of my work.  This alignment does not “add” tasks to the “as is” state of the test engineering domain, but it does restructure domain artifacts (i.e., test plan, test specification, test design specification, test case specification, test procedure specification) and the sequence of some tasks (early engagement in setting stakeholder expectations and concurrence for system acceptance test), as well as reinstating frequently overlooked tasks (describing the verification method beyond inspection, analysis, demonstration, and test).

Focus has been brought to a key task within the test sub-process area supporting verification of requirements at the acceptance level of test.  An exemplar under development typifies the production, and the acceptance by all stakeholders, of a requirement’s acceptance criteria, beginning in the proposal phase with the key requirements of the proposed system.  This methodology is not a new concept, but rather a revitalization of a time-proven practice.  In his seminal work “Model Based Systems Engineering”, A. Wayne Wymore emphasizes the importance of the ‘System Test Requirement’ as an element of the ‘System Design Requirement’ in his system modeling formalism.  My emphasis is also consistent with the guidance for Section 4 (Verification) of a System Requirements Document provided by MIL-HDBK-520A and MIL-STD-961E.

The recommended practice goes beyond simply applying a verification method kind to a requirement in the Verification Cross Reference Matrix (VCRM)[i].  It requires the creation of a baseline concept for the test requirement / method, principally by defining an abstract set of acceptance criteria in the form of a test objective specification (e.g., a test requirement).  This forms the basis for a test design specification, which is ultimately realized by a concrete test case specification.

[Figure: testReq]

At the system’s SRR each requirement is associated with a concept for its verification test case.  A test case is ‘named’ and ‘described’ by its test objective.  This strikes a test specification baseline for the test system at SRR.

[Figure: TestModelAtSRR]

The test specification reaches its maturation stage at SFR.  The elements required to implement a test are described for the test context owning the test case.  This forms the basis for the test architecture associated with the requirement.

[Figure: TestModelAtSFR]

Another principle motivating this work is to drive the test engineering domain’s engagement in product development much earlier in the lifecycle of the system of interest than has been the typical practice, in my experience.  An example of the concept is to treat work product inspections as a “Test Case” and to incorporate the test case execution in the test body of evidence, a practice currently in use in the European software development community.  The intent is to dramatically influence, and thereby reduce, the accumulation of technical debt in all of its forms.

Early test engineering efforts of this nature are not typical, in my personal experience, but my research and experience suggest they hold promise for a substantive ROI.  Setting the test system’s technical baseline, and the test architecture it is responsible for, early in the project aids in setting expectations with stakeholders.  Early setting of the baseline supports managing changes in scope and offers an opportunity to incrementally refine expectations, thereby enhancing the probability of a satisfied customer stakeholder at acceptance testing.


[i] The term VCRM is inconsistent with the glossary defined by ISO/IEC 29119