Return on Investment

Does modeling provide an ROI and how?

I start from the premise that the quality of the outcomes of technical processes in the Test Domain (e.g., integration and verification, as described by the technical processes of ISO/IEC/IEEE 15288) depends on the skills of the resource performing the role responsible for the process.

My interest is in the “cognitive processes”, as described by Bloom’s Taxonomy, required of a “Role” in the execution of a “responsibility”.  “Roles” produce “Outcomes” by performing “Activities”.  “Activities” require “cognitive processes” to perform.  These “Activities” are the “responsibility” of a “Role”.  The quality of an “Outcome” depends on the “cognitive process” proficiency of the individual performing a “Role”.

[Figure: ProcessConcepts]

The most important elements of the taxonomy, then, are: role, activity, and the cognitive process(es) required to support a role’s activities.

outcome × cognitive_process_quality = outcome_quality

The premise is that models and modeling languages are cognitive process tools.  If this premise holds, it establishes an enabling relationship between cognitive processes and models and modeling languages: they enable and enhance those processes, thereby creating value.

By establishing an understanding of this fundamental relationship, the value of modeling becomes apparent.  It is likely a better indicator of modeling’s benefits than the metrics (e.g., cost, schedule, quality measures) typically requested by the management infrastructure.

Models and modeling languages are tools that directly influence the outcomes of cognitive processes.  By enhancing core cognitive processes, program performance is improved.  The relationship between modeling and program performance is not direct; it is a consequence of improving cognitive process quality.

Hopefully, this brings the focus back to engineering fundamentals and why we cannot ignore them.  “Fools with tools are still fools.”  Modeling is not a silver bullet; it is a multiplier.  Zero multiplied by any very large number is still zero.

The individual fulfilling a role must possess the essential cognitive process capabilities demanded by the role’s responsibilities.  Modeling enhances cognitive processes; it cannot substitute for them.

An individual’s

Cognitive_Process_Proficiency × Modeling_Proficiency

is a “Leading Indicator”, and we can use this understanding to forecast program risk.
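As a rough illustration, here is a minimal sketch in Python of the multiplier premise; the function name and the 0.0–1.0 scales are my own illustrative assumptions, not part of any standard.

```python
def leading_indicator(cognitive_proficiency: float, modeling_proficiency: float) -> float:
    """Modeling proficiency scales, but cannot substitute for, cognitive proficiency."""
    return cognitive_proficiency * modeling_proficiency

# Zero multiplied by any very large number is still zero:
print(leading_indicator(0.0, 1.0))  # 0.0 -- "fools with tools are still fools"
print(leading_indicator(0.8, 0.9))  # 0.72 -- modeling amplifies genuine skill
```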

Defining a Requirement’s Acceptance Criterion

Premise: A requirement expresses a precise feature set for an entity.  Typically, features are framed as desired behavior (functional requirements).  Features may be expressed in terms of functionality, inputs, outputs, actions, sequences, qualities, and constraints.  A collection of requirement statements that defines an entity’s technical features is typically identified as a specification.  A stakeholder requirements specification should define stakeholder needs and an intended use of the entity’s features by the stakeholder in an environment, the system’s context.  This intended use describes how the stakeholder intends to satisfy their need by employing the entity under development to achieve a goal.  This information is foundational to developing requirement acceptance criteria.

Discussion: Frequently, stakeholder requirement statements lack the precision to unambiguously produce objective evidence, through test case execution, that a feature complies with its specification.  Requirements are frequently expressed implicitly rather than explicitly, or are poorly constructed.  Implicit feature characteristics and poor requirement statement construction frequently result in conflict during the entity’s acceptance test phase as a result of emergent requirement statement interpretation.  It is not at all unusual for stakeholders to re-interpret requirement statements to obtain new features they did not originally define.  One approach to clarifying imprecise requirement statements is to develop explicit requirement statements from the imprecise ones and offer the improved requirements as replacements.  An alternate approach is to author acceptance criteria for implicit or poorly constructed requirements, explicitly defining the criteria that support the assertion that the entity’s feature behavior satisfies its specification.

Ideally, requirements should focus on the problems requiring solutions in the stakeholder’s domain rather than on the system solution.  Stakeholders must be accountable for their requirement statements and agree to an acceptance criterion for each and every stakeholder requirement prior to the commencement of stakeholder requirement analysis.  Acceptance criteria form the basis for the stakeholder’s acceptance of the developed entity.  This acceptance is the transfer of an entity from development to stakeholder use.  Mutual agreement on precise acceptance criteria precludes discord late in the program, during the entity’s acceptance test phase, where the stakeholder’s satisfaction is critically important.

Assertion: Well-written acceptance criteria validate that a requirement statement is verifiable.  Employ explicit acceptance criteria early in the entity’s development period and obtain stakeholder agreement with them.  Agreement on acceptance criteria should be established at the stakeholder requirements review phase of a project.

Method: An approach to developing acceptance criteria is to ask questions from the user/stakeholder viewpoint.  These criteria answer the stakeholder question “How do we know the entity meets its specification?”

What measured or observed criteria will convince stakeholders that the system implementation satisfies their operational need?  Is there a specific operational situation where the entity’s features provide a user a capability that accomplishes an operational objective or goal?  Is there an output provided by an entity feature that satisfies a stakeholder criterion and can be measured against a stakeholder standard?

Answering these questions provides a foundation for test case development.  The first answer forms the operational domain context for a test case.  A domain description defines the static structure of the populations of an environment and how they interrelate.  In the Department of Defense Architecture Framework V2 specification, the populations of entities in an environment are referred to as “Performers”.  Performer entities possess features that enable interaction with our stakeholder’s entity to accomplish a stakeholder objective and thereby satisfy a stakeholder need.  How these entities are connected and how they exchange objects (e.g., information items, or flows) helps to define the acceptance criteria.  What performers in the environment are interacting with the system, and how are they interacting?  The second answer provides both a test case scenario and the test oracle that an arbiter uses to assert whether the test case has “passed” or “failed”.  The test case scenario description defines the dynamic events and interactions of the entities in an environment.  Acceptance criteria define the successful outcome of a test case.  Each requirement should have an acceptance criterion statement.  Criteria must be measurable, either qualitatively or quantitatively.  Where the measure is qualitative, it is imperative to reach agreement on defining this subjective evaluation.
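To make these pieces concrete, here is a minimal sketch, in Python, of how a test case might tie together context, scenario, and oracle; the structure and all names are hypothetical, not drawn from any standard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AcceptanceTestCase:
    """Hypothetical structure; field names are illustrative, not standard."""
    requirement_id: str
    context: list[str]                # performers interacting with the entity
    scenario: str                     # dynamic events exercised by the test
    oracle: Callable[[float], bool]   # stakeholder standard for the verdict

    def verdict(self, observation: float) -> str:
        # The arbiter compares the observation against the oracle.
        return "pass" if self.oracle(observation) else "fail"

tc = AcceptanceTestCase(
    requirement_id="SR-042",
    context=["Operator", "External data feed"],
    scenario="Operator requests a status report during peak load",
    oracle=lambda response_seconds: response_seconds <= 2.0,
)
print(tc.verdict(1.4))  # pass
```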

Further Refinement: An acceptance criterion such as “successful” is qualitative.  In this example there is a need to quantify “success”.  We can measure “profitability” by assessing the “return on investment” and stating that if each dollar invested returns a 10% profit (the standard used to assert a verdict), then success has been achieved.  Success might also be measured by the quantity of process cycles performed in a period of time.
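As a worked example of that verdict, with illustrative figures:

```python
def roi(profit: float, investment: float) -> float:
    return profit / investment

# Stakeholder standard: each dollar invested must return at least 10% profit.
observed = roi(profit=12_000.0, investment=100_000.0)  # 0.12
print("pass" if observed >= 0.10 else "fail")          # pass
```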

Yes, these quantitative criteria are measures applying to non-functional requirements.  However, the point is that by measuring the performance of a function, which may be a derived performance measure, there is implicit verification of the functional requirement statement.

The acceptance criteria define the domain the entity is employed in.  They describe how the entity’s features are measured or assessed.  Acceptance criteria may express constraints to be imposed on the entity when its features are in use.  Acceptance criteria form the test case development framework.  The acceptance criterion statements explicitly communicate the interpretation of stakeholder requirements.

Modeling, Standards and the Test Engineer

Alignment to the ISO/IEC/IEEE standards applicable to the test domain has been a key principle of my work.  This alignment does not “add” tasks to the “as is” state of the test engineering domain, but it does restructure domain artifacts (i.e., test plan, test specification, test design specification, test case specification, test procedure specification) and the sequence of some tasks (early engagement in setting stakeholder expectations and concurrence for system acceptance test), as well as re-instantiating frequently overlooked tasks (describing the verification method beyond inspection, analysis, demonstration, and test).   Focus has been brought to a key task within the test sub-process area: supporting verification of requirements at the acceptance level of test.  An exemplar under development typifies the production, and acceptance by all stakeholders, of a requirement’s acceptance criteria, beginning in the proposal phase with the key requirements of the proposed system.  This methodology is not a new concept, but rather a revitalization of a time-proven practice.  In his seminal work “Model-Based Systems Engineering”, A. Wayne Wymore emphasizes the importance of the ‘System Test Requirement’ as an element of the ‘System Design Requirement’ in his system modeling formalism.  My emphasis is also consistent with the guidance for Section 4 (Verification) of a System Requirements Document provided by MIL-HDBK-520A and MIL-STD-961E.  The recommended practice goes beyond simply applying a verification method kind to a requirement in the Verification Cross Reference Matrix (VCRM)[i].  It requires the creation of a baseline concept for the test requirement / method, principally by defining an abstract set of acceptance criteria in the form of a test objective specification (e.g., a test requirement).  This forms the basis for a test design specification, which is ultimately realized by a concrete test case specification.

[Figure: testReq]

At the system’s SRR (System Requirements Review), each requirement is associated with a concept for its verification test case.  A test case is ‘named’ and ‘described’ by its test objective.  This strikes a test specification baseline for the test system at SRR.

[Figure: TestModelAtSRR]

The test specification matures at SFR (System Functional Review).  The elements required to implement a test are described for the test context owning the test case.  This forms the basis for the test architecture associated with the requirement.

[Figure: TestModelAtSFR]

Another principle motivating this work is to drive the test engineering domain’s engagement in product development much earlier in the lifecycle of the system of interest than has been the typical practice, in my experience.  An example of the concept is to treat work product inspections as a “Test Case” and incorporate the test case execution in the test body of evidence.  This is a concept currently in use in the European software development community.  The intent is to dramatically influence, and thereby reduce, the accumulation of technical debt in all of its forms.

Early test engineering efforts of this nature are not typical, in my personal experience, but my research and experience suggest they hold promise for a substantive ROI.  Setting the test system’s technical baseline, and the test architecture it is responsible for, early in the project aids in setting expectations with stakeholders.  Early setting of the baseline supports managing changes in scope and offers an opportunity to incrementally refine expectations, thereby enhancing the probability of a satisfied customer stakeholder at acceptance testing.


[i] The term VCRM is inconsistent with the Glossary defined by ISO/IEC 29119.

Test Architecture Philosophy

Test (noun)

Examination

a series of questions, problems, or practical tasks to gauge somebody’s knowledge, ability, or experience

Basis for evaluation

a basis for evaluating or judging something or somebody

Trial run-through of a process

a trial run-through of a process or on equipment to find out if it works

Procedure to detect presence of something

a procedure to ascertain the presence of or the properties of a substance

Architecture (noun)

Building design

the art and science of designing and constructing buildings

Building style

a style or fashion of building, especially one that is typical of a period of history or of a particular place

Structure of computer system

the design, structure, and behavior of a computer system, microprocessor, or system program, including the characteristics of individual components and how they interact

Philosophy (noun)

Examination of basic concepts

the branch of knowledge or academic study devoted to the systematic examination of basic concepts such as truth, existence, reality, causality, and freedom

School of thought

a particular system of thought or doctrine

Guiding or underlying principles

a set of basic principles or concepts underlying a particular sphere of knowledge

Set of beliefs or aims

a precept, or set of precepts, beliefs, principles, or aims, underlying somebody’s practice or conduct

The use of natural language to convey thoughts is fraught with semantic risk.  Natural language is essential for humans, yet to mitigate the semantic risk a rigorous grammar is required within a community of interest.  Considerable research documents what a monumental undertaking it is to instantiate a universal lexicon; however, there is strong evidence that a rigorous lexicon within a community is possible and of significant value.  Towards this end I hope to convey a distillation of my research over the last few years.

“The Four Horsemen”

Boundaries are explicit

Services are Autonomous

Services share Schema and Contract, not Class

Compatibility is based upon Policy

Any entity, without regard to its affiliation, has inherently greater value if it is employable in many places in many different ways.  To realize this objective an entity needs to possess the attributes embodied in “The Four Horsemen.”  The entity is whole and self-contained, yet it is allowed to interact with other entities via its interfaces.  The interface has a rigorous specification of how it interacts with other interfaces.  The entity exists within a community where all entities observe the same principles of realization.  An entity possessing these attributes is inherently more flexible when choreographing or orchestrating with other entities to realize a more complex entity.  We could assemble a complex entity from purpose-specific component entities into a monolithic structure.  It might be a thing of great beauty, but it is purpose-built, and its maintenance and flexibility in the face of change will come at great cost.
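A minimal sketch, in Python, of the “share Schema and Contract, not Class” tenet; the schema, service names, and policy check are all illustrative assumptions.

```python
SCHEMA = {"request_id": str, "payload": str}  # the shared contract

def conforms(message: dict) -> bool:
    # Policy: compatibility is judged against the schema, not the sender's type.
    return (message.keys() == SCHEMA.keys()
            and all(isinstance(message[key], kind) for key, kind in SCHEMA.items()))

def service_a() -> dict:
    # Autonomous: exposes nothing but its output message.
    return {"request_id": "r-1", "payload": "status?"}

def service_b(message: dict) -> str:
    # Explicit boundary: rejects anything that violates the contract.
    if not conforms(message):
        raise ValueError("message violates the contract")
    return f"ack {message['request_id']}"

print(service_b(service_a()))  # ack r-1
```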

For this reason I propose a Test Architecture philosophy that embraces these tenets.  It requires strict adherence to a framework.  Entities have explicit boundaries and they are autonomous.  Interfaces adhere to strict standards.  Policy governance ensures that entities are interoperable and re-usable.

This is not a trivial task.  It requires considerable architectural design effort in the framework, but the reward is downstream cost reduction.

Policy One – Define your problem, then look for a solution in the Standards space.  Start with the standards closest to the stakeholders’ problem space.  Next, employ standards with wide adoption in the problem space.

Employing standards from which we extract patterns to solve our problem and convey our solution increases the probability that our solution is interoperable with products employing the same patterns.  Standards evoke patterns.  Using patterns in the solution space fosters the emergence of intuitive cognitive recognition of the solution.  The brain of the human animal thrives on recognizing patterns; it is superlative in that regard.

Policy Two – Define a lexicon.  The lexicon is policy.  Embrace the lexicon.  Amend the lexicon as the world changes and new realities and understandings emerge.  DoDAF is evolving because a broad range of stakeholders realized early that the original specification had serious shortcomings and limitations.  It underwent a rapid transition to version 1.5 and then 2.  Version 2 has had to overcome significant obstacles created by the entities involved in the evolution.  A lack of inclusive collaboration and compromise is likely to blame.  There appears as well to have been some practice of dogma by some entities; they no longer participate in the evolution of the standard.  The stakeholders of the UPDM (UML Profile for DoDAF and MODAF) appear likely to adopt the new proposed standard.  We might be wise to draw our lexicon from the UML, tailored to the DoDAF influence.

Policy Three – Define a framework for the Test Architecture.  Enforce Policy One and Two on the framework.

Test Requirements – Defining a Product’s Requirement Acceptance Criteria

Acceptance Criteria – The criteria a stakeholder employs in the assessment of a product feature to assert that the feature satisfies their need.  A product feature is specified by a system requirement; the feature’s implementation must be verified for the product to be accepted by the stakeholder.

[Figure: trace]

The question to the stakeholder – “What objective evidence, produced by the test case, will convince you that the feature specified by the requirement has been satisfied?”

The response could identify a scenario, an instance of a Use Case, and the desired outcome of the Use Case scenario.  It may be a set of input conditions to the product and a measurable outcome, meaning the input and output can be formally defined.
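One way to picture “formally defined” is as explicit input/expected-outcome pairs; a minimal sketch in Python, with entirely illustrative conditions and values:

```python
ACCEPTANCE_VECTORS = [
    # (input conditions,                     expected measurable outcome)
    ({"mode": "normal",   "load_pct": 50},  {"report_delivered": True, "latency_s_max": 2.0}),
    ({"mode": "degraded", "load_pct": 95},  {"report_delivered": True, "latency_s_max": 5.0}),
]

def satisfied(observed: dict, expected: dict) -> bool:
    return (observed["report_delivered"] == expected["report_delivered"]
            and observed["latency_s"] <= expected["latency_s_max"])

print(satisfied({"report_delivered": True, "latency_s": 1.7},
                ACCEPTANCE_VECTORS[0][1]))  # True -> criterion met
```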

[Figure: tos3]

[Figure: context]

A product requirement expresses a precise attribute set of a product feature.  Typically, these attributes are framed as desired behavior.  These feature attributes may be expressed in terms of functionality, inputs, outputs, actions, sequences, qualities, and constraints.

[Figure: Associations]

Frequently, requirement statements lack the precision to unambiguously produce evidence, through test case execution, that the product complies with its specification, a collection of requirements.  Requirements are frequently expressed implicitly rather than explicitly.  Ambiguous characteristics frequently result in conflict during acceptance testing as a result of emergent requirement statement interpretation.

[Figure: tos1]

[Figure: seq]

An approach to clarifying the intent of imprecise requirement statements is to author acceptance criteria for these requirements, explicitly defining the criteria that support the assertion that the product satisfies its specification.  The acceptance criteria of a functional requirement might be expressed using a model-based behavior specification.

Employ explicit acceptance criteria early in the product development period and obtain stakeholder agreement with them.  Agreement on acceptance criteria should be established at the requirements review phase of a project.  Ideally, requirements should focus on the problems requiring solutions in the stakeholder domain rather than on the system solution.  Stakeholders must be held accountable for their requirement statements and agree to an acceptance criterion for each and every requirement prior to the commencement of high-level system design.  This precludes discord late in the program, where the stakeholder’s satisfaction is critically important.

An approach to developing acceptance criteria is to ask questions from the user/stakeholder viewpoint.  These criteria answer the stakeholder question “How do we know the product meets its specification?”

What measured or observed criteria will convince stakeholders that the system implementation satisfies the requirement?  Is there a specific operational situation where a system’s features will provide a user a capability which accomplishes a user mission objective?  Is there an output provided by a system feature that satisfies a stakeholder criterion that can be measured and subjected to a stakeholder standard?

[Figure: goal]

[Figure: tos2]

Answering these questions provides a foundation for test case development.  The first answer forms the system context for a test case.  What objects in the system’s environment are interacting with it and how are they interacting?


The second answer provides the test oracle that an arbiter uses to assert whether the test case has “passed” or “failed”.  Acceptance criteria define the successful outcome of a test case.  Each requirement should have an acceptance criterion statement.  Criteria must be measurable, either qualitatively or quantitatively.  Where the measure is qualitative, it is imperative to reach agreement on defining this subjective evaluation.

These answers drive the test strategy, which culminates in a demonstration or test that produces the evidence satisfying the stakeholder’s expectation, thereby establishing the acceptance criterion for the requirement.

Acceptance criteria must be explicitly associated with a requirement and formally acknowledged by the stakeholder as adequate.

The acquirer’s acceptance criteria for a product should be stated in the acquirer’s product specification at the time of their request for proposal.  In US DoD MIL-STD-961E, section 4 of the system’s specification contains the acceptance criteria, in the form of a test method and a test specification, for all requirements in section 3.  If the acquirer of the product has not stated acceptance criteria for all requirements in their product specification, then the proposal must contain scoping acceptance criteria to ensure that the acquirer understands what will be delivered by the proposal, both in terms of a product and the evidence that the product satisfies their need as stated in the product’s specification.  In any event, the acceptance criterion for every stakeholder requirement must be stated at the product’s requirement review milestone and acknowledged as acceptable by the acquirer of the product.  Delaying the establishment of acceptance criteria levies a significant risk of “scope creep”.  As the design matures and its capabilities begin to be revealed, the acquirer, or the acquirer’s representatives, may realize that they have provided a product specification which will not fully satisfy their needs, and the acceptance criteria are at risk of becoming far more costly to achieve and verify.

IEEE Std 829-2008, IEEE Standard for Software and System Test Documentation, calls for the development of a Test Traceability Matrix.  This matrix is responsible for establishing the association between each system requirement and the test responsible for producing the evidence that satisfies the requirement’s acceptance criterion.  This matrix has a hierarchy which parallels the requirement decomposition hierarchy of system, sub-systems, and components.

[Figure: testTraceMatrix]
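A minimal sketch of such a matrix as data, in Python; the requirement and test case IDs and the hierarchy are illustrative, not taken from IEEE 829:

```python
# Each requirement maps to the test case(s) responsible for producing the
# evidence that satisfies its acceptance criterion; the key hierarchy
# mirrors the system / subsystem / component decomposition.
TRACEABILITY = {
    "SYS-001":     ["TC-ACC-001"],                # system-level acceptance test
    "SYS-001.1":   ["TC-INT-010", "TC-INT-011"],  # subsystem integration tests
    "SYS-001.1.3": ["TC-UNIT-104"],               # component-level test
}

def uncovered(matrix: dict) -> list[str]:
    """Requirements with no associated test case: gaps in the evidence chain."""
    return [req for req, tests in matrix.items() if not tests]

print(uncovered(TRACEABILITY))  # [] -> every requirement has a test
```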

The need for requirement acceptance criteria does not end at the acquirer’s specification.  All engineered requirements need acceptance criteria.  The acceptance criteria unambiguously inform the next lower tier in the engineering hierarchy of their stakeholder’s expectations.

The incoming acceptance criteria are a principal driver of the test strategy at that level of the product’s feature realization hierarchy.

Acceptance criteria such as “successful” are qualitative.  In this example there is a need to quantify “success”.  We can measure “profitability” by assessing the “return on investment” and stating that if each dollar invested returns a 10% profit (the standard used to assert a verdict), then success has been achieved.  Success might also be measured by the quantity of process cycles performed in a period of time.

Yes, these quantitative criteria are measures applying to non-functional requirements.  The point is that by measuring the performance of a function, which may be a derived performance measure, there is implicit verification of the functional requirement statement.

The acceptance criteria define the domain the product is employed in.  They describe how the product’s features are measured or assessed.  Acceptance criteria may express constraints to be imposed on the product when its features are in use.  Acceptance criteria form the test case development framework.  The acceptance criteria statements will explicitly communicate our interpretation of stakeholder requirements.

Neither the current SysML standard nor the UML 2 Testing Profile addresses ‘acceptance criteria modeling’, directly or by inference.  A description of a Use Case scenario, with an accompanying user goal and satisfaction-of-outcome criteria expressed in a classifier, seems required.  The UTP specification of a ‘Test Context’, which is both a structuredClassifier and a behavioredClassifier, seems fit for purpose.  However, SysML does not include the ‘Test Context’ in its profile; it only includes the ‘Test Case’, which is an ‘Operation’ or ‘Behavior’ meta-class.

The System according to ISO, a short pedantic story

A System has a life cycle (“evolution of a system, product, service, project or other human-made entity from conception through retirement” ISO 15288), which embraces a life cycle model (“framework of processes and activities concerned with the life cycle that may be organized into stages, which also acts as a common reference for communication and understanding” ISO 15288). Stages of that life cycle contain execution instances of processes (“set of interrelated or interacting activities which transforms inputs into outputs” ISO 9000). Processes always have a purpose (“high level objective of performing the process and the likely outcomes of effective implementation of the process” ISO 12207) and an outcome (“observable result of the successful achievement of the process purpose” ISO 12207); typically these are a deliverable of some sort related to the system, or the system itself in a new state (e.g., designed, integrated, verified, validated). The purpose always addresses a stakeholder objective (e.g., satisfy a need, achieve a goal). An organization (e.g., a corporation, a business, a team) has commonality with a system; it meets many elements of the definition of a system (“combination of interacting elements organized to achieve one or more stated purposes” ISO 15288).

Entities perform processes; this is a role that an entity is responsible for. The responsibility is typically assigned by governance that controls the process (e.g., a contract, an activity, a task, a procedure). The execution of a process requires resources (“asset that is utilized or consumed during the execution of a process” ISO 15288). Resources might be schedule, budget, tools, facilities, people, etc. A resource has a role in the execution of a process. Roles perform or enable (when they are consumed by the process) process activities.
Execution of a thread of activities constitutes a process execution and delivers an outcome.

A system may require other systems during its life cycle and depends on their execution of processes, for which they are responsible, to accomplish a stage in its life cycle. These systems are “Enabling Systems” (“system that supports a system-of-interest during its life cycle stages but does not necessarily contribute directly to its function during operation” ISO 15288).
One such enabling system is the Test System. While it is unlikely to play a role in the System of Interest in its operational context, it plays a key role in the development stage of the System of Interest.
The test system provides services (“A system may be considered as a product or as the services it provides” ISO 15288) by performing processes. Obviously its key service is the “Test Service”. This service can be instantiated to provide specialized services such as integration, verification, and validation services. These are elements of their parent processes. A test system has a number of elements that serve as resources to execute the processes for which it is responsible. These elements are things such as facilities, tools, specifications, etc.
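To summarize my reading of these relationships (a toy object model, not normative ISO content), a sketch in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    purpose: str          # high-level objective (per ISO 12207)
    outcome: str          # observable result of achieving the purpose
    resources: list = field(default_factory=list)  # assets utilized or consumed

@dataclass
class EnablingSystem:
    name: str
    processes: list       # processes the enabling system is responsible for

test_system = EnablingSystem(
    name="Test System",
    processes=[Process(
        purpose="Produce objective evidence that requirements are satisfied",
        outcome="Verified system element",
        resources=["facilities", "tools", "specifications", "people"],
    )],
)
print(test_system.processes[0].outcome)  # Verified system element
```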

Perhaps if the ISO standards did not have such a high cost of entry, more people would avail themselves of the resource. I was fortunate enough to work for a corporation that made them available to me and I used that opportunity to learn as much as I possibly could.

My frustration is with the proliferation of jargon that obfuscates communication. INCOSE, ISTQB, and IREB all have some level of harmonization with the ISO standards and the lexicon of the systems and software development domains. In my mind, mastery of the lexicon of the relevant domains is important to effective communication.

Why do I bother with this? Because when you look at the world from a certain perspective, you find things have more in common than they have differences. Abstraction can reveal the commonality. Commonality helps reveal patterns, and patterns are reusable. Human beings, for reasons I do not claim to understand, insist on differentiating themselves and the things they create from other things, and they guard them fervently. My success has always been in finding the commonality, identifying the pattern, and reusing a solution I’ve previously employed with success. Often my solutions come from others that have gone before me. I do not pride myself on my inventions, though I have a few; rather, I pride myself on my humility to embrace the ideas others have forged.


Opinion on “Is System Debugging a valid concept?”

LinkedIn discussion from the System Integration and Test group

“Debugging” is a valid concept.

IMHO, “Debugging” is not on par with “Design”. “Debugging” is not a technical process; it is an outcome of the execution of a technical process.

ISO/IEC/IEEE 24765 Systems and software engineering – Vocabulary defines “Debug” as:

to detect, locate, and correct faults in a computer program.

Fault is defined as:

1. manifestation of an error in software. 2. an incorrect step, process, or data definition in a computer program. 3. a defect in a hardware device or component. “bug” is listed as a synonym for “fault”.

There is nothing prohibiting the extension of the term to all elements of a system (hardware, software, design, requirements, etc.).

“Bugs” or faults are found by executing test cases against a system element (the SUT, or system under test) and comparing the expected result (the test oracle) against the observation. The expected result is derived from the test basis; if the observation does not conform to the oracle, then the SUT is faulty. The bug or fault must be removed or the SUT will never be found to be compliant with its requirements. And yes, IMHO, a test case can be realized as an “inspection” of a design, abstract implementation, etc.
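A minimal sketch of that loop in Python; the test names and stubbed-in observations are illustrative:

```python
# Execute test cases against the SUT, compare each observation to its
# oracle (the expected result derived from the test basis), and record a
# fault for every non-conformance.
def find_faults(executions: list) -> list:
    faults = []
    for name, expected, observed in executions:
        if observed != expected:   # observation non-conforming to the oracle
            faults.append(name)    # the SUT is faulty here; "debugging" follows
    return faults

print(find_faults([("TC-1", 42, 42), ("TC-2", "ready", "busy")]))  # ['TC-2']
```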

A “Test System” is an Enabling System of the System of Interest and has responsibility for the production of objective evidence that the system of interest, as well as its elements, satisfies its acceptance criteria.

ISO 15288 identifies the life cycle stages and technical processes. Test is an activity of the integration, verification, and validation technical processes. Test is realized through the execution of test cases, behavior specifications realized by test components against an SUT. Every element of a system traverses the development stage in its own life cycle and is subjected to the execution of technical processes. An outcome of “Integration” is:

c) A system capable of being verified against the specified requirements from architectural design is assembled and integrated.

There is an implication that, to “be capable of being verified” and subsequently “accepted” by a stakeholder, the system element must be brought into compliance with its requirements, or “be free of faults”. Faults/bugs in a system element are detected and corrected, i.e., “debugged”, as an outcome of the execution of process activities and tasks.

There is a new ISO standard under development for software testing, ISO/IEC/IEEE 29119. It currently consists of four volumes. In volume 1, Annex A, a depiction of the role of test in V&V is provided. The principles of the standard can apply to all engineering domains, not just software (IMHO). I’m not asserting that the standard is the holy grail, but it does have some good content. There is some info in the public domain on the standard.

ISO/IEC/IEEE 24765 Systems and software engineering — Vocabulary defines “test” as “an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component”.

The evaluation of the test’s observation may conclude that the observation does not conform to the expected result. If the expected result is confirmed to be consistent with the test basis, then the existence of a fault/bug is confirmed.

One objective of the ISO/IEC/IEEE standards is to establish a common framework and lexicon to aid communication among diverse domains. There is still much work to be done towards this end, and there are some very committed individuals striving to harmonize the ISO standards. There is value in learning a common language.

Test is not analogous to the activity in which young children engage on Easter Sunday. That is an unstructured and random sequence of behavior in which the discovery of an egg is merely happenstance. Many individuals engage in such behavior and call it test.

If bug=defect=fault, then debugging=dedefecting=defaulting

Food for thought.

NIST published a comprehensive report on project statistics and experiences based on data from a large number of software projects:

70% of the defects are introduced by the specifications

30% are introduced later in the technical solution

Only 5% of the specification defects are corrected in the specification phase

95% are detected later in the project or after delivery, where the cost of correction is on average 22 times higher than a correction made directly during the specification effort
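Back-of-the-envelope arithmetic on these figures (my own calculation, taking an in-phase correction cost of 1.0):

```python
# Expected relative correction cost per specification defect, using the
# shares and multiplier quoted above.
early_share, late_share, late_multiplier = 0.05, 0.95, 22.0

expected_cost = early_share * 1.0 + late_share * late_multiplier
print(expected_cost)  # 20.95 -- on average ~21x the in-phase correction cost
```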

Find the requirement defects in the program phase where they occur, and there will be fewer defects to find during integration test or system test.

A work product inspection is a Test. It employs the static verification method “inspection”. ISO 29119 supports this concept. The INCOSE Guide for Writing Requirements can serve as your inspection checklist. It is also the specification for writing requirements and is therefore your Test Basis.

SEs (as the authors of the specifications) are, typically, the source of the majority of defects in a system.

Stakeholder politics plays a role in the requirement problem. Incompetence is yet another significant contributor. There are a host of factors.

Many SEs are behind the power curve. The ISO, IEEE, and INCOSE are driving SE maturity; SEs need to get on board and support these efforts.

Emphasis needs to be on prevention and not detection.