
Specification of an Abstract Verification Method for Acceptance Testing

System Requirements Document or System Specification

Definitions:

System: (1) A collection of components organized to accomplish a specific function or set of functions. [IEEE 610.12] (2) Combination of interacting elements organized to achieve one or more stated purposes. NOTE: In practice, the interpretation of its meaning is frequently clarified by the use of an associative noun, e.g., aircraft system. [IEEE 15288]

System of Interest: System whose life cycle is under consideration in the context of this International Standard. [IEEE 15288]

Enabling System: System that supports a system-of-interest during its life cycle stages but does not necessarily contribute directly to its function during operation. [IEEE 15288]

Test System: An enabling system supporting test activities during the life cycle of the system of interest, while not being a part of the system of interest. [Extended from IEEE 15288]

Baseline: Specification or work product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures. [IEEE 15288]

Test: (1) An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. (2) To conduct an activity as in (1). [IEEE 610.12]

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. [IEEE 610.12, IEEE 1012]

System Testing: Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. [IEEE 610.12]

Integration Testing: Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. [IEEE 610.12]

Component Testing: Testing of individual hardware or software components or groups of related components. [IEEE 610.12]

Test Case: (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. Prescriptive reference: A test case is a behavioral feature or behavior specifying tests. A test case specifies how a set of test components interact with an SUT to realize a test objective and return a verdict value. Test cases are owned by test contexts and therefore have access to all features of the test context (e.g., the SUT and test components of the composite structure). A test case always returns a verdict. [IEEE 610.12, IEEE 829-1983, UTP]

Test Case Specification: A document that specifies the test inputs, execution conditions, and predicted results for an item to be tested. [IEEE 610.12]

Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation. Prescriptive reference: A dependency used to specify the objectives of a test case or test context. A test case or test context can have any number of objectives, and an objective can be realized by any number of test cases or test contexts. Descriptive reference: A test objective is a reason or purpose for designing and executing a test case [ISTQB]. The underlying Dependency points from a test case or test context to anything that may represent such a reason or purpose. This includes (but is not restricted to) use cases, comments, or even elements from different profiles, like requirements from [SysML]. [IEEE Std 1008-1987, UTP]

Test Requirement: See Test Condition. [UTP, ISTQB]

Test Condition: An item or event of a component or system that could be verified by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element. [UTP, ISTQB]

Acceptance Criteria: The criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610.12]

Test Matrix: Features to be tested (Level Test Plan (LTP) Section 2.3), features not to be tested (LTP Section 2.4), and approaches (LTP Section 2.5) are commonly combined in a table called a Test Matrix. It contains a unique identifier for each requirement for the test (e.g., system and/or software requirements, design, or code), an indication of the source of the requirement (e.g., a paragraph number in the source document), a summary of the requirement, and an identification of one or more generic method(s) of test. [IEEE 829-2008]

Test Traceability Matrix: Provides a list of the requirements (software and/or system; may be a table or a database) that are being exercised by this level of test and shows the corresponding test cases or procedures. The requirements may be software product or software-based system functions or nonfunctional requirements for the higher levels of test, or design or coding standards for the lower levels of test. This matrix may be part of a larger Requirements Traceability Matrix (RTM) referenced by this plan that includes requirements for all levels of test and traces to multiple levels of life cycle documentation products. It may include both forward and backward tracing. [IEEE 829-2008]

Test Context: Prescriptive reference: A test context acts as a grouping mechanism for a set of test cases. The composite structure of a test context is referred to as the test configuration. The classifier behavior of a test context may be used for test control. Descriptive reference: A test context is just a top-level test case. [UTP]

Stimuli: Test data sent to the SUT in order to control it and to make assessments about the SUT when receiving the SUT's reactions to these stimuli. [UTP]

Observation: Test data reflecting the reactions from the SUT, used to assess the SUT reactions, which are typically the result of a stimulus sent to the SUT. [UTP]

SUT: Prescriptive reference: Stereotype applied to one or more properties of a classifier to specify that they constitute the system under test. The features and behavior of the SUT are given entirely by the type of the property to which the stereotype is applied. Descriptive reference: Refers to a system, subsystem, or component which is being tested. An SUT can consist of several objects. The SUT is stimulated via its public interface operations and signals by the test components. No internals of an SUT are known or accessible during test case execution, due to its black-box nature. [UTP]

The IT&E domain philosophy employs "black-box" test methods at the higher levels of test, so it is highly dependent on behavior specifications. Integration philosophy is highly dependent on "thread" knowledge. The IT&E domain therefore wants the system requirements document/system subsystem specification to drive a defined need for system test requirements, something sorely lacking today.

Consistent with MIL-STD-961E and MIL-HDBK-520A, the System Requirements Document (SRD) or System/Subsystem Specification (SSS) provides traceability from each of its requirements to a system element which will «satisfy» the requirement and to a system feature element which will «verify» the requirement. "The baseline management system should allow for traceability from the lowest level component all the way back to the user capability document or other source document from which it was derived" (Defense Acquisition Guidebook).

IEEE 829-2008 requires the Test System to produce a data artifact identifying which system features are tested and which are not: the Test Matrix.

IEEE 829-2008 also requires the Test System to produce a data artifact tracing each system feature requiring test to the test case performing the verification and to the requirement verified by that test case: the Test Traceability Matrix. These matrices are required at each level of testing.

To satisfy the Test System's Test Architecture data needs for the System Acceptance Test Event architecture component, the SRD/SSS must provide a data artifact containing each requirement, the system feature which satisfies it, and a test case which verifies it, whenever the feature's implementation requires verification at the System Acceptance Test Event. This artifact may be extracted from the content of the verification matrices identified by MIL-HDBK-520A.

A system model element with a stereotype of «testCase» provides the verification method (i.e., inspection, analysis, demonstration, or test), a behavior specification for the method, the acceptance criteria for the outcome of the behavior, and the execution conditions of the behavior (e.g., pre-conditions, post-conditions, and conditions of performance).
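Purely as an illustration, the content such a «testCase» element carries might be captured in a structure like the following Python sketch. The type and field names (VerificationMethod, TestCaseElement, and so on) are invented for this post; they are not drawn from the SysML or UTP specifications.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List

class VerificationMethod(Enum):
    INSPECTION = "inspection"
    ANALYSIS = "analysis"
    DEMONSTRATION = "demonstration"
    TEST = "test"

@dataclass
class TestCaseElement:
    """Sketch of the information a «testCase» model element provides."""
    name: str
    method: VerificationMethod           # inspection, analysis, demonstration, or test
    behavior: Callable[..., str]         # behavior specification for the method
    acceptance_criteria: str             # criteria for the outcome of the behavior
    pre_conditions: List[str] = field(default_factory=list)
    post_conditions: List[str] = field(default_factory=list)
    performance_conditions: List[str] = field(default_factory=list)
```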

Each requirement in an SRD/SSS should have one (preferably only one) test case responsible for producing the evidence that the system's implementation satisfies the requirement. The SysML profile supports documenting this dependency between a requirement and its test case. Compound requirements may require multiple test cases, but that is a signal that the requirement should be decomposed into multiple atomic requirements, a best practice.

A specification for a test case includes inputs, execution conditions, and predicted results. A test case specification has more in common with a use case description than with a sequence of actions describing a use case scenario. A test procedure is a sequence of steps realizing the test case's behavior; it is concrete and contains concrete test data. A test case specification does not provide a detailed test procedure specification, just the requirements for the test case (i.e., what is to be tested).
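A sketch of that distinction follows; the field names are invented, and the sample values anticipate the lawnmower example later in this post.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCaseSpecification:
    """What is to be tested: inputs, execution conditions, predicted results."""
    objective: str
    inputs: List[str]
    execution_conditions: List[str]
    predicted_results: List[str]

@dataclass
class TestProcedure:
    """How the test case's behavior is realized: ordered steps, concrete data."""
    realizes: TestCaseSpecification
    steps: List[str]

spec = TestCaseSpecification(
    objective="Verify [Hourly Mowing Capacity]",
    inputs=["1 acre coupon of Acme American Standard Lawn Grass, 1 week's growth"],
    execution_conditions=["level terrain", "lawnmower at operating temperature"],
    predicted_results=["coupon at [Mowed Height Standard] within 1 hour"],
)
procedure = TestProcedure(
    realizes=spec,
    steps=[                                        # concrete test data lives here
        "Stage the grass coupon on the level test pad",
        "Signal the lawnmower to the [operate] state",
        "Start the 1-hour elapsed timer",
        "Record grass state and fuel consumed",
    ],
)
```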

The test case specifications in an SRD/SSS set the acquirer stakeholder's expectations for the system's acceptance test event. The Integration and Test System Architecture requires the SRD/SSS test case specifications to construct the test matrix and test traceability matrix (IEEE 829).

The test case specification reflects the test requirements/test conditions necessary to produce the evidence that the system's implementation satisfies the requirements. Test case specifications are held to the same quality standards as system specifications: they must be complete, consistent, correct, feasible, necessary, prioritized, unambiguous, and traceable.

A test requirement/test condition statement is similar in content to a performance requirement, inasmuch as the requisite conditions are specified for achieving the stated performance or test observation/expected result.

Performance Requirement example:

“The Lawnmower System shall operate with [Hourly Mowing Capacity] of at least 1 level ground acre per hour, at [Max Elevation] up to 5,000 feet above sea level, and [Max Ambient Temperature] of up to 85 degrees F., at up to 50% [Max Relative Humidity], for [Foliage Cutting Capacity] of Acme American Standard one week Lawn Grass.”

The Lawnmower System shall

•[Foliage Cutting Capability]
–mow
•[Foliage Cutting Capability Performance]
–a minimum of 1 acre per hour
•[Input & Performance Constraint]
–of Acme American Standard Lawn Grass, one week's growth (input object state), on level ground (environment condition)
•[Performance Constraint]
–at [Max Elevation] up to 5,000 feet above sea level,
–and [Max Ambient Temperature] up to 85 degrees F.,
–and [Max Relative Humidity] up to 50% relative humidity

•Mow is a Behavioral Feature of the Lawnmower.
–The Mow() operation returns "cut grass"

[Figure: Lawnmower elements]

“The Lawnmower System shall operate with [Fuel Economy] of at least 1 hour / gallon at [Min Elevation] of 0 feet ASL, at [Max Ambient Temperature] 85 degrees F., 50% [Max Relative Humidity], for Acme American Standard one week Lawn Grass.”

From: "Requirements Statements Are Transfer Functions: An Insight from Model-Based Systems Engineering," William D. Schindel. Copyright © 2005 by William D. Schindel.

These two stated performance requirements have a relationship with each other. The second requirement constrains the acceptance criterion for the first: not only must the Hourly Mowing Capacity be achieved, it must be achieved using no more than 1 gallon of fuel. The constraint must be normalized for altitude, as the two requirements differ in this pre-condition. It is the responsibility of the Test System's Test Architecture, not the SRD/SSS, to group these requirements into an efficient test context. The SRD/SSS should only state the test requirement for the test case which verifies the requirement's satisfaction.

The requirements' verification method is Analysis; the problems are physics-based. Altitude and ambient temperature directly affect air density, and reduced air density degrades the internal combustion engine's efficiency in transforming gasoline into mechanical force, and with it the lawnmower's fuel economy. These environmental conditions would be difficult and costly for a test facility to reproduce.

Blocks in the test:

  1. lawnmower, SUT. Property – Hourly Mowing Capacity
  2. lawnmower user
  3. Acme American Standard Lawn Grass, initial state [1 week's growth], end state [Mowed Height Standard]. Property – dimension (1 acre)
  4. terrain. Properties – level, altitude. Host for the Acme American Standard Lawn Grass test coupon
  5. atmosphere (air density). Properties – temperature, relative humidity, barometric pressure

SUT = Feature(Hourly Mowing Capacity)

•Pre-Conditions:
–Temperature, air density, level terrain with Acme American Standard Lawn Grass (1 acre) state = [1 week's growth]
–Lawnmower state = [idle], Property – operating temp = True
•Input:
–Control Flow = user signal to transition lawnmower state = [operate]
–Object Flow = Acme American Standard Lawn Grass [Un-mown]
–Start 1-hour elapsed timer
•Output:
–Object Flow = Acme American Standard Lawn Grass [Mown]
•Post-Condition:
–Lawnmower user signal to transition lawnmower state = [stop]
–1 acre of Acme American Standard Lawn Grass state = [Mown]
–Fuel consumed <= 1 gallon (normalized for test environment at runtime)
•Observation Assertion:
–1-hour elapsed timer not = zero
–Acme American Standard Lawn Grass state = [Mowed Height Standard]
•Verdict(Pass)
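A minimal sketch of how this test case might be rendered executable follows. The interfaces env, mower, and coupon, and the normalize_for_ambient helper, are hypothetical test-component fixtures invented for illustration, not drawn from any standard.

```python
import time

def hourly_capacity_test(env, mower, coupon, normalize_for_ambient):
    """Sketch of the acceptance test case above; all interfaces hypothetical."""
    # Pre-conditions
    assert coupon.state == "1 week's growth" and coupon.acres >= 1.0
    assert env.terrain == "level"
    assert mower.state == "idle" and mower.at_operating_temp

    fuel_start = mower.fuel_gallons
    start = time.monotonic()
    mower.signal("operate")                      # input control flow: user signal
    coupon.await_state("Mown", timeout_s=3600)   # object flow: [Un-mown] -> [Mown]
    mower.signal("stop")                         # post-condition: user stop signal
    elapsed_s = time.monotonic() - start

    # Post-conditions and observation assertions
    fuel_used = fuel_start - mower.fuel_gallons
    fuel_limit = normalize_for_ambient(1.0, env)  # 1 gallon, normalized at runtime
    ok = (coupon.state == "Mowed Height Standard"
          and 0.0 < elapsed_s <= 3600.0
          and fuel_used <= fuel_limit)
    return "pass" if ok else "fail"               # a test case always returns a verdict
```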

The terrain and the Acme American Standard Lawn Grass need to be realized in the Test System's architecture as test components. Their properties should be controlled by the analysis rather than by the explicit statements of the SRD/SSS requirement text. Analysis verifies the SRD/SSS requirement explicitly; the test case providing a measure of Hourly Mowing Capacity serves to confirm the analysis. It implicitly verifies that the requirement is satisfied rather than providing an explicit verification of the satisfaction.

Given that the environmental parameters are difficult to control on a large scale, the most likely approach the test architect will take to test case design is to measure the environmental conditions and adjust the test case's acceptance criteria to account for the test-time ambient, rather than to control the environment. The test coupon may also be adjusted in size based on the criticality of the performance requirement and on the uncertainties in measuring Hourly Mowing Capacity and its confidence interval. In a risk-driven test architecture, as the criticality of the requirement increases, so should the investment in verification. If the required confidence for this requirement is low, then a very terse test case supporting the formal analysis may satisfy the acquirer's acceptance criterion.
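The actual correction factor would come from the performance analysis. Purely to illustrate the mechanics of adjusting a criterion at runtime, here is a sketch that relaxes the fuel limit by an ideal-gas air-density ratio; the linear correction is an assumption, and normalize_for_ambient matches the hypothetical helper used in the sketch above.

```python
def air_density_ratio(pressure_pa: float, temp_k: float,
                      ref_pressure_pa: float = 101325.0,
                      ref_temp_k: float = 288.15) -> float:
    """Air density relative to reference conditions, via the ideal gas law."""
    return (pressure_pa / ref_pressure_pa) * (ref_temp_k / temp_k)

def normalize_for_ambient(fuel_limit_gal: float, env) -> float:
    """Relax the fuel criterion when measured ambient air is thinner than the
    reference atmosphere. The linear density correction is an illustration
    only; the real correction would be supplied by the analysis."""
    rho = air_density_ratio(env.pressure_pa, env.temp_k)
    return fuel_limit_gal / rho  # thinner air -> less efficient engine -> higher allowance
```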

Additional Background:

From the ISTQB Glossary of Testing Terms:

test requirement: See test condition.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

In the latest INCOSE Systems Engineering Handbook, v3.2.2, the term "test requirement" no longer appears. The term was employed in version 3.1.5 from 2007 but has since been removed.

Excerpt from Version 3.1.5:

Each test requirement should be verifiable by a single test. A requirement requiring multiple tests to verify should be broken into multiple requirements. There is no problem with one test verifying multiple requirements; however, it indicates a potential for consolidating requirements. When the system hierarchy is properly designed, each level of specification has a corresponding level of test during the test phase.  If element specifications are required to appropriately specify the system, element verification should be performed.

And

"establish a basis for test planning, system-level test requirements, and any requirements for environmental simulators"
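As a sketch of the first excerpt's guidance, the following reviews a hypothetical requirements-to-tests map, flagging requirements that need decomposition and tests that are consolidation candidates. The data and identifiers are invented.

```python
from collections import defaultdict

def review_rtm(rtm: dict) -> None:
    """rtm maps requirement ID -> list of test IDs verifying it (hypothetical)."""
    tests_to_reqs = defaultdict(list)
    for req, tests in rtm.items():
        if len(tests) == 0:
            print(f"{req}: NOT VERIFIED - assign a test")
        elif len(tests) > 1:
            print(f"{req}: {len(tests)} tests - consider decomposing the requirement")
        for t in tests:
            tests_to_reqs[t].append(req)
    for t, reqs in tests_to_reqs.items():
        if len(reqs) > 1:  # no problem, but a potential for consolidation
            print(f"{t}: verifies {reqs} - candidate for requirement consolidation")

review_rtm({"SRD-101": ["TC-7"], "SRD-102": ["TC-7", "TC-9"], "SRD-103": []})
```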

The 2010 revision that created version 3.2.2 harmonized the handbook with ISO/IEC 15288:2008, with the intent of elaborating the processes and activities needed to execute the processes of ISO/IEC 15288:2008.

A. Wayne Wymore's treatise "Model-Based Systems Engineering" includes "System Test Requirements" as a key element of his mathematical theory of systems design:

SDR (system design requirements) = (IOR, TYR, PR, CR, TR, STR (system test requirements))

A concept example:

There is a system requirement that is a Key Performance Parameter (KPP): the requirement must be satisfied or the system is not acceptable. There is a range of environmental execution conditions under which the KPP must be achieved. There is only a single event which triggers the functional behavior of the system (e.g., a thread) and only a single standard (performance standard) to evaluate the system output (e.g., acceptance criteria, expected result). Within the range of input conditions, there are two boundary-condition sets that pose a performance challenge to the functional behavior's design. Since verifying this performance requirement under the conditions in which the performance must be achieved is infeasible by Test, the requirement is verified by the method Analysis; all stakeholders accept this approach.

During "System Test", an informal testing activity, test cases will execute that observe the test requirements/conditions identified by the challenging boundary-condition sets. The design of the test cases and their specifications will comply with the test requirements/conditions derived from the analysis. The analysis predicts system behavior, and the test requirements/conditions drive the design of test cases in which the predicted system behavior is the test oracle (e.g., the acceptance criteria).

In this example the test requirements or test conditions drive test design. The Test Architect defines the Test Requirement/Test Condition specification during the planning phase. The original test requirement called for a verification method of "Analysis", but Analysis alone was not fully satisfactory to the Customer. To build Customer intimacy and refine the test requirements, test cases were added at the System Test level to address the concern and build confidence in the performance analysis. These are informal test cases designed to validate the performance analysis within the constraints imposed by test implementation limitations.
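A sketch of that arrangement, with all names invented: the analysis supplies a prediction function that serves as the test oracle, and the informal test case passes when the observation agrees with the prediction within an agreed tolerance.

```python
def analysis_oracle_verdict(predict, measured_output: float,
                            conditions: dict, tolerance: float) -> str:
    """The analysis-predicted behavior is the oracle; pass if the observation
    matches the prediction within tolerance. All names are illustrative."""
    expected = predict(conditions)
    return "pass" if abs(measured_output - expected) <= tolerance else "fail"

# e.g., at one of the challenging boundary-condition sets:
verdict = analysis_oracle_verdict(
    predict=lambda c: 1.0 - 0.02 * c["altitude_kft"],  # assumed analysis model
    measured_output=0.93,
    conditions={"altitude_kft": 3.0},
    tolerance=0.05,
)
print(verdict)  # pass: |0.93 - 0.94| <= 0.05
```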


Defining a Requirement’s Acceptance Criterion

Premise: A requirement expresses a precise feature set for an entity. Typically, features are framed as desired behavior (functional requirements). Features may be expressed in terms of functionality, inputs, outputs, actions, sequences, qualities, and constraints. A collection of requirement statements that define an entity's technical features is typically identified as a specification. A stakeholder requirements specification should define stakeholder needs and an intended use of the entity's features by the stakeholder in an environment, the system's context. This intended use describes how the stakeholder intends to satisfy their need by employing the entity under development to achieve a goal. This information is foundational to developing requirement acceptance criteria.

Discussion: Frequently, stakeholder requirement statements lack the precision to unambiguously produce objective evidence through test case execution that a feature complies with its specification. Requirements are frequently expressed implicitly rather than explicitly, or are poorly constructed. Implicit characteristics of a feature and poorly constructed requirement statements frequently result in conflict during the entity's acceptance test phase as a result of emergent requirement statement interpretation. It is not at all unusual for stakeholders to re-interpret requirement statements to obtain new features they did not originally define. One approach to clarifying imprecise requirement statements is to develop explicit requirement statements from the imprecise ones and offer the improved requirements as replacements. An alternate approach is to author acceptance criteria for implicit or poorly constructed requirements, explicitly defining the criteria that support the assertion that the entity's feature behavior satisfies its specification.

Ideally, requirements should focus on the problems requiring solutions in the stakeholder's domain rather than on the system solution. Stakeholders must be accountable for their requirement statements and agree to an acceptance criterion for each and every stakeholder requirement prior to the commencement of stakeholder requirement analysis. Acceptance criteria form the basis for the stakeholder's acceptance of the developed entity; this acceptance is the transfer of the entity from development to stakeholder use. Mutual agreement on precise acceptance criteria precludes discord late in the program, during the entity's acceptance test phase, where the stakeholder's satisfaction is critically important.

Assertion: Well-written acceptance criteria validate that a requirement statement is verifiable. Employ explicit acceptance criteria early in the entity's development period and obtain stakeholder agreement with them. Agreement on acceptance criteria should be established at the stakeholder requirements review phase of a project.

Method: An approach to developing acceptance criteria is to ask questions from the user/stakeholder viewpoint. These criteria will answer the stakeholder question "How do we know the entity meets its specification?"

What measured or observed criteria will convince stakeholders that the system implementation satisfies their operational need? Is there a specific operational situation where the entity's features will provide a user a capability which accomplishes a user operational objective/goal? Is there an output provided by an entity feature, satisfying a stakeholder criterion, that can be measured and subjected to a stakeholder standard?

Answering these questions provides a foundation for test case development. The first answer forms the operational domain context for a test case. A domain description defines the static structure of the population of an environment and how its members interrelate. In the Department of Defense Architecture Framework V2 specification, the entities populating an environment are referred to as "Performers". Performer entities possess features that enable interaction with our stakeholder's entity to accomplish a stakeholder objective and thereby satisfy a stakeholder need. How these entities are connected and how they exchange objects (e.g., information items, or flows) helps to define the acceptance criteria. What performers in the environment are interacting with the system, and how are they interacting? The second answer provides both a test case scenario and the test oracle that an arbiter uses to assert whether the test case has "passed" or "failed". The test case scenario description defines the dynamic events and interactions of the entities in an environment. Acceptance criteria define the successful outcome of a test case. Each requirement should have an acceptance criterion statement. Criteria must be measurable, either qualitatively or quantitatively. Where the measure is qualitative, it is imperative to reach agreement on defining the subjective evaluation.

Further Refinement: An acceptance criterion such as "successful" is qualitative. In this example there is a need to quantify "success". We can measure "profitability" by assessing the "return on investment" and stating that if each dollar invested returns a 10% profit (the standard used to assert a verdict), then success has been achieved. Success might also be measured by the quantity of process cycles performed in a period of time.
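For instance, the quantified criterion might be rendered as a small verdict function; a sketch, with the 10% standard being the one stated above and the dollar figures invented:

```python
def roi_verdict(invested: float, returned: float, standard: float = 0.10) -> str:
    """Quantifying "success": pass if each dollar invested returns at least
    a 10% profit, the agreed standard used to assert a verdict."""
    roi = (returned - invested) / invested
    return "pass" if roi >= standard else "fail"

print(roi_verdict(invested=100.0, returned=112.0))  # pass: 12% >= 10%
```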

Yes, these quantitative criteria are measures applying to non-functional requirements. However, the point is that by measuring the performance of a function, which may be a derived performance measure, there is implicit verification of the functional requirement statement.

The acceptance criteria define the domain the entity is employed in. They describe how the entity's features are measured or assessed. Acceptance criteria may express constraints to be imposed on the entity when its features are in use. Acceptance criteria form the test case development framework, and their statements explicitly communicate the interpretation of stakeholder requirements.

Test Requirements – Defining a Product’s Requirement Acceptance Criteria

Acceptance Criteria – The criteria a stakeholder employs in the assessment of a product feature to assert that the feature satisfies their need. A feature of a product is specified by a system requirement, and the feature's implementation requires verification for the product to be accepted by the stakeholder.

[Figure: trace]

The question to the stakeholder – “What objective evidence, produced by the test case, will convince you that the feature specified by the requirement has been satisfied?”

The response could identify a scenario, an instance of a use case, and the desired outcome of the use case scenario. It may be a set of input conditions to the product and a measurable outcome, meaning the input and output can be formally defined.


[Figure: context]

A product requirement expresses a precise attribute set of a product feature. Typically, these attributes are framed as desired behavior. These feature attributes may be expressed in terms of functionality, inputs, outputs, actions, sequences, qualities, and constraints.

[Figure: Associations]

Frequently, requirement statements lack the precision to unambiguously produce evidence through test case execution that the product complies with its specification, a collection of requirements. Requirements are frequently expressed implicitly rather than explicitly. Ambiguous characteristics frequently result in conflict during acceptance testing as a result of emergent requirement statement interpretation.


[Figure: sequence diagram]

An approach to clarifying the intent of imprecise requirement statements is to author acceptance criteria that explicitly define the conditions supporting the assertion that the product satisfies its specification. The acceptance criteria of a functional requirement might be expressed using a model-based behavior specification.

Employ explicit acceptance criteria early in the product development period and obtain stakeholder agreement with them. Agreement on acceptance criteria should be established at the requirements review phase of a project. Ideally, requirements should focus on the problems requiring solutions in the stakeholder domain rather than on the system solution. Stakeholders must be held accountable for their requirement statements and agree to an acceptance criterion for each and every requirement prior to the commencement of high-level system design. This precludes discord late in the program, where the stakeholder's satisfaction is critically important.

An approach to developing acceptance criteria is to ask questions from the user/stakeholder viewpoint. These criteria will answer the stakeholder question "How do we know the product meets its specification?"

What measured or observed criteria will convince stakeholders that the system implementation satisfies the requirement?  Is there a specific operational situation where a system’s features will provide a user a capability which accomplishes a user mission objective?  Is there an output provided by a system feature that satisfies a stakeholder criterion that can be measured and subjected to a stakeholder standard?

[Figure: goal]


Answering these questions provides a foundation for test case development.  The first answer forms the system context for a test case.  What objects in the system’s environment are interacting with it and how are they interacting?


The second answer provides the test oracle that an arbiter uses to assert whether the test case has "passed" or "failed". Acceptance criteria define the successful outcome of a test case. Each requirement should have an acceptance criterion statement. Criteria must be measurable, either qualitatively or quantitatively. Where the measure is qualitative, it is imperative to reach agreement on defining the subjective evaluation.

These answers drive the test strategy, which culminates in a demonstration or test that produces the evidence that satisfies the stakeholder's expectation, thereby establishing the acceptance criterion for the requirement.

Acceptance criteria must be explicitly associated with a requirement and formally acknowledged by the stakeholder as adequate.

The acquirer's acceptance criteria for a product should be stated in the acquirer's product specification at the time of the request for proposal. In US DoD MIL-STD-961E, section 4 of the system's specification contains the acceptance criteria, in the form of a test method and a test specification, for all requirements in section 3. If the acquirer of the product has not stated acceptance criteria for all requirements in the product specification, then the proposal must contain scoping acceptance criteria to ensure that the acquirer understands what the proposal will deliver, both in terms of a product and in terms of the evidence that the product satisfies the need stated in the product's specification. In any event, the acceptance criterion for every stakeholder requirement must be stated at the product's requirement review milestone and acknowledged as acceptable by the acquirer of the product. Delaying the establishment of acceptance criteria levies a significant risk of "scope creep". As the design matures and its capabilities begin to be revealed, the acquirer, or the acquirer's representatives, may realize that the product specification they provided will not fully satisfy their needs, and the acceptance criteria are at risk of becoming far more costly to achieve and verify.

IEEE Std 829-2008, IEEE Standard for Software and System Test Documentation, calls for the development of a Test Traceability Matrix. This matrix establishes the association between each system requirement and the test responsible for producing the evidence that satisfies the requirement's acceptance criterion. The matrix has a hierarchy which parallels the requirement decomposition hierarchy of system, sub-systems, and components.

[Figure: Test Traceability Matrix]
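A sketch of how a single trace table could yield both IEEE 829-2008 artifacts, the Test Matrix and the Test Traceability Matrix; the row fields and identifiers are illustrative, not prescribed by the standard.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TraceRow:
    requirement: str          # e.g., a paragraph number in the source document
    feature: str              # system feature that satisfies the requirement
    test_case: Optional[str]  # test case that verifies it; None if untested

def build_matrices(rows: List[TraceRow]):
    """Derive the Test Matrix (features tested / not tested) and the
    Test Traceability Matrix (requirement <-> verifying test case)."""
    features_tested = sorted({r.feature for r in rows if r.test_case})
    features_not_tested = sorted({r.feature for r in rows if not r.test_case})
    traceability = {r.requirement: r.test_case for r in rows if r.test_case}
    return features_tested, features_not_tested, traceability

tested, untested, ttm = build_matrices([
    TraceRow("SSS-3.2.1", "Foliage Cutting Capability", "TC-001"),
    TraceRow("SSS-3.2.2", "Fuel Economy", None),
])
print(untested)  # ['Fuel Economy'] - a feature not to be tested, per the plan
```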

The need for requirement acceptance criteria does not end at the acquirer's specification. All engineered requirements need acceptance criteria. The acceptance criterion unambiguously informs the next lower tier in the engineering hierarchy of its stakeholder's expectations.

The incoming acceptance criteria are a principal driver of the test strategy at that level of the product's feature realization hierarchy.

Acceptance criteria such as "successful" are qualitative. In this example there is a need to quantify "success". We can measure "profitability" by assessing the "return on investment" and stating that if each dollar invested returns a 10% profit (the standard used to assert a verdict), then success has been achieved. Success might also be measured by the quantity of process cycles performed in a period of time.

Yes, these quantitative criteria are measures applying to non-functional requirements. The point is that by measuring the performance of a function, which may be a derived performance measure, there is implicit verification of the functional requirement statement.

The acceptance criteria define the domain the product is employed in. They describe how the product's features are measured or assessed. Acceptance criteria may express constraints to be imposed on the product when its features are in use. Acceptance criteria form the test case development framework, and the acceptance criteria statements explicitly communicate our interpretation of stakeholder requirements.

The current SysML standard, as well as the UML 2 Testing Profile, does not address 'acceptance criteria modeling' directly or by inference. A description of a use case scenario, with an accompanying user goal and satisfaction-of-outcome criteria expressed in a classifier, seems required. The UTP's 'Test Context', which is both a StructuredClassifier and a BehavioredClassifier, seems fit for purpose. However, SysML does not include the 'Test Context' in its profile; it only includes the 'Test Case', which is an 'Operation' or 'Behavior' metaclass.

Opinion on “Is System Debugging a valid concept?”

LinkedIn discussion from the System Integration and Test group

“Debugging” is a valid concept.

IMHO, "Debugging" is not on par with "Design". "Debugging" is not a technical process; it is an outcome of the execution of a technical process.

ISO/IEC/IEEE 24765 Systems and software engineering – Vocabulary defines “Debug” as:

to detect, locate, and correct faults in a computer program.

Fault is defined as:

1. manifestation of an error in software. 2. an incorrect step, process, or data definition in a computer program. 3. a defect in a hardware device or component. “bug” is listed as a synonym for “fault”.

There is nothing prohibiting the extension of the term to all elements of a system (Hw, Sw, Design, Requirements, etc…).

“Bugs” or faults are found by executing test cases against a system element, a test case SUT, and comparing the expected result (test oracle) against the observation. The expected result is derived from the test basis and if the observation is non-conforming to the oracle, then the SUT is faulty. The bug or fault must be removed or the SUT will never be found to be compliant with its requirements. And yes, IMHO, a test case can be realized as an “inspection” of a design, abstract implementation, etc…

A "Test System" is an Enabling System of the System of Interest and is responsible for producing the objective evidence that the system of interest, as well as its elements, satisfies its acceptance criteria.

ISO 15288 identifies the life cycle stages and technical processes. Test is an activity of the integration, verification, and validation technical processes. Test is realized through the execution of test cases, behavior specifications realized by test components against an SUT. Every element of a system traverses the development stage in its own life cycle and is subjected to the execution of technical processes. An outcome of "Integration" is:

c) A system capable of being verified against the specified requirements from architectural design is assembled and integrated.

There is an implication that to "be capable of being verified" and subsequently "accepted" by a stakeholder, the system element must be brought into compliance with its requirements, or "to be free of faults". Faults/bugs in a system element are detected and corrected, "debugged", as an outcome of the execution of process activities and tasks.

There is a new ISO standard under development for Sw testing, ISO 29119; it currently consists of 4 volumes. In Vol 1, Annex A, a depiction of the role of test in V&V is provided. The principles of the std can apply to all engineering domains, not just Sw (IMHO). I'm not asserting that the std is the holy grail, but it does have some good content. There is some info in the public domain on the std.

ISO/IEC/IEEE 24765 Systems and software engineering — Vocabulary defines “test” as “an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component”.

The evaluation of the test’s observation may conclude that the observation does not conform to the expected result. The expected result is confirmed to be consistent with the test basis and in this case the existence of a fault/bug is confirmed.

One objective of the ISO/IEC/IEEE standards is to establish a common framework and lexicon to aid communication among diverse domains. There is still much work to be done towards this end, and there are some very committed individuals striving to harmonize the ISO standards. There is value in learning a common language.

Test is not analogous to the activity in which young children engage on Easter Sunday. That is an unstructured and random sequence of behavior in which the discovery of an egg is merely happenstance. Many individuals engage in such behavior and call it test.

If bug=defect=fault, then debugging=dedefecting=defaulting

Food for thought.

NIST published a comprehensive report on project statistics and experiences, based on data from a large number of software projects:

•70% of the defects are introduced by the specifications.
•30% are introduced later, in the technical solution.
•Only 5% of the specification defects are corrected in the specification phase.
•95% are detected later in the project or after delivery, where the cost of correction is on average 22 times higher than a correction made directly during the specification effort.

Find the requirement defects in the program phase where they occur and there will be fewer defects to find during integration test or system test.
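The arithmetic behind that advice is easy to run. A sketch using the quoted figures; the defect count and unit cost are notional.

```python
def expected_rework_cost(defects: int, fix_now_fraction: float,
                         unit_cost: float, late_multiplier: float = 22.0) -> float:
    """Expected correction cost given the NIST figures quoted above: defects
    fixed in-phase cost `unit_cost`; escapes cost `late_multiplier` times more."""
    fixed_now = defects * fix_now_fraction
    escaped = defects * (1.0 - fix_now_fraction)
    return fixed_now * unit_cost + escaped * late_multiplier * unit_cost

baseline = expected_rework_cost(100, 0.05, 1.0)  # today: 5% caught in phase
improved = expected_rework_cost(100, 0.50, 1.0)  # catch half in phase
print(baseline, improved)  # 2095.0 vs. 1150.0 cost units
```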

A work product inspection is a Test. It employs the static verification method “inspection”. ISO 29119 supports this concept. The INCOSE Guide for Writing Requirements can serve as your inspection checklist. It is also the specification for writing requirements and is therefore your Test Basis.

SEs (as the authors of the specifications) are, typically, the source of the majority of defects in a system.

Stakeholder politics plays a role in the requirement problem. Incompetence is yet another significant contributor. There are a host of factors.

Many SEs are behind the power curve. The ISO, IEEE, and INCOSE are driving SE maturity; SEs need to get on board and support these efforts.

Emphasis needs to be on prevention and not detection.

Test Driven System Development

[Figure: Test Architecture Hierarchy]

Derive test design specifications from the abstraction tier in the development hierarchy immediately above the tier where they are employed against the design artifacts. Apply test cases against the assembled system in the same tier. A host of possible methods exists. The simplest approach may be a checklist of required capabilities and measures of performance, derived from the ConOps, applied as a checklist against the system requirements. Map problems to solutions: all problems must have a solution that addresses them (see the sketch at the end of this section). Test Engineering subjects the requirements to verification using the method Inspection, and at the same time collects the test requirements for the system's technical requirements.

One approach to this mapping is modeling: model the problem space, the solution space, and the traces between them.
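A minimal sketch of such a coverage check, with invented data shapes: every problem (a ConOps capability) must trace to at least one solution (a system requirement).

```python
def check_coverage(conops_capabilities: set, requirement_traces: dict) -> set:
    """requirement_traces maps requirement ID -> set of capabilities it
    addresses (hypothetical data). Returns the uncovered problems."""
    covered = set().union(*requirement_traces.values()) if requirement_traces else set()
    return conops_capabilities - covered

gaps = check_coverage(
    {"mow 1 acre/hour", "operate to 5,000 ft"},
    {"SRD-101": {"mow 1 acre/hour"}},
)
print(gaps)  # {'operate to 5,000 ft'} - a problem with no solution yet
```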