Thoughts on Test Systems and their Architecture

Interface Stimulator Services Enterprise

Identification
The Interface Stimulator Services Enterprise (ISSE) is a loosely coupled, modular set of simulation/stimulation service components and a core set of management services. The stimulator service components interact with SUT interfaces that normally exchange information with interfaces external to the SUT. The ISSE provides test capabilities to the integration, verification, and validation life cycle processes. The ISSE is employable with test automation tools (e.g., Quality Center) and is planned for employment as a component of the system simulation trainer element.


Overview of the Proposed System
The primary objective of the ISSE is to provide information exchange stimulation capability to all information exchange interfaces external to the system. This is a fundamental enabler of basic system integration and verification operations. The interface stimulation components resident in the ISSE provide a web-services-managed interface to the models, simulations, tools, databases, and hardware used to stimulate the system’s interfaces. The stimulator service components are loosely coupled and are managed via core infrastructure services.

ISSE Overview
The ISSE is a fundamental enabler of integration and verification activities at system interfaces. Secondary design objectives are to support integration and verification activities by simulating interfaces, to support system integration and verification activities by simulating information exchange interfaces external to the system, and to support trainer operations by stimulating all segment information exchange interfaces external to the segment.

The design of the ISSE capability set supports evolution to fulfill the operational needs of system data and control interfaces. This expansion of ISSE functionality is largely dependent on evolving program objectives in the area of interoperability validation.
The final ISSE spiral is the trainer product to support training of operators, maintainers, and supervisors. In this deliverable instantiation the ISSE is a component of the trainer element.
Each stimulator component possesses its own web services management interface, and the ISSE provides common services to manage the infrastructure and the stimulator components. In addition to stimulation services, data logging functionality with time stamping is in the design plan for all stimulator components, to support artifact collection automation.
Users (test personnel, instructors, supervisors) can connect and configure the stimulator components (models, simulations, tools, hardware) with specific data and parameter values to compose a stimulation environment to perform a test or conduct a training exercise. The ISSE is capable of archiving service orchestrations for future reference. Test automation tools like HP’s Quality Center can employ service request transactions directly with the ISSE’s core services interface to invoke archived orchestrations.
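To make that interaction concrete, the sketch below shows how a test automation tool might invoke an archived orchestration through the ISSE core web services interface. The host name, endpoint paths, payload fields, and orchestration identifier are illustrative assumptions, not the actual ISSE API; a plain HTTP/JSON transport is assumed for brevity.

```python
# Hypothetical sketch: invoking an archived orchestration through the ISSE
# core web services interface. Endpoint paths, payload fields, and the host
# name are illustrative assumptions, not the actual ISSE API.
import requests

ISSE_CORE = "http://isse-core.example.local/services"  # assumed base URL

def invoke_archived_orchestration(orchestration_id: str, parameters: dict) -> str:
    """Ask the ISSE core services to deploy and run a previously archived
    stimulation environment, returning a run identifier."""
    response = requests.post(
        f"{ISSE_CORE}/orchestrations/{orchestration_id}/invoke",
        json={"parameters": parameters},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["run_id"]

# Example: a test automation tool kicking off an archived environment
# composed for a particular interface verification procedure.
if __name__ == "__main__":
    run_id = invoke_archived_orchestration(
        "track-exchange-baseline-07",          # hypothetical archive name
        {"scenario": "nominal", "epoch": "T0"},
    )
    print("stimulation run started:", run_id)
```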

A design objective is to provide the capability to interface with virtually any model, simulation, tool, database, or hardware. The initial development spiral will incorporate only those models and simulations that match the need to integrate and verify system entities. The models and simulations developed for or incorporated into the ISSE will have varying levels of fidelity:
High – Appears to be reality from the system’s perspective; dynamic and variable behaviors.
Medium – System interactions have a visible but limited effect on behaviors.
Low – Correctly formatted static data without dynamic variability of behaviors.
To manage cost and risk, the functionality and fidelity of stimulator components will not evolve past the point where the components are suitable for integration and verification activities. Low and medium fidelity should suffice for many of the models and simulations. If additional functionality or greater fidelity is required to meet training and operational support objectives, additional funding over and above that required for system integration and verification will be necessary.
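As a minimal illustration of how these fidelity levels could be applied during environment composition, the sketch below tags stimulator components with a fidelity level and checks adequacy for a given activity. All names are assumptions made for illustration.

```python
# Illustrative sketch only: one way to tag stimulator components with the
# fidelity levels described above so that environment composition can check
# whether a component is adequate for a given activity.
from dataclasses import dataclass
from enum import IntEnum

class Fidelity(IntEnum):
    LOW = 1     # correctly formatted static data, no dynamic behavior
    MEDIUM = 2  # system interactions have a visible but limited effect
    HIGH = 3    # appears to be reality from the system's perspective

@dataclass
class StimulatorComponent:
    name: str
    fidelity: Fidelity

def adequate_for(component: StimulatorComponent, required: Fidelity) -> bool:
    """Integration and verification rarely need more than MEDIUM fidelity."""
    return component.fidelity >= required

feed_sim = StimulatorComponent("external-track-feed", Fidelity.MEDIUM)
print(adequate_for(feed_sim, Fidelity.LOW))   # True
print(adequate_for(feed_sim, Fidelity.HIGH))  # False
```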

Architecture & Design
Employ SOA design patterns and web services to manage simulation / stimulator components and component strings. Employ open standards from standards bodies with demonstrated domain influence. The Object Management Group is a prime example of one such body. Maintain loose coupling of the simulation / stimulator components.
Maintain a focus on the future evolution spiral of simulation/stimulator components and core services of the ISSE. Keep in mind the evolutionary spiral of the trainer, model use in the tactical applications supporting operations planning, and development of distributed test beds for developmental/operational test and evaluation (D/OT&E) of the total enterprise.

Background, Objectives, and Scope
The ISSE engineering effort is responsible for capturing the capability needs of the element, trainer, segment, operational support models, and system; translating these needs into requirements; and designing a capability that can evolve to meet those needs through the foreseen end state. The systems engineering effort is also responsible for identifying models for use in simulator component development (process model) and operation (parametric model). The implementation is limited to the capability set required to integrate and verify the segment.
The segment of interest engages in system data exchanges with entities external to the system. The possible number of external entities exceeds 10,000 instances. The integration and verification activities require information exchange stimulators to succeed in the testing of these interfaces.
COTS tools exist that may satisfy basic integration and verification, at the element level, of interfaces that exclusively engage in SOA system data exchanges at a single interface. In verification situations where coordination of complex exchanges of information occurs at multiple interfaces, existing COTS tools may prove inadequate. This requirement may emerge where system data exchanges at multiple interfaces require orchestration with a complex operational mission scenario. Coordinated scripts may suffice, but they may be subject to high maintenance overhead as scenarios evolve or change.
Realization of a distributed test bed concept mandates employment of advanced interface stimulator capabilities to bridge segment interfaces to Distributed Interactive Simulation (DIS), High Level Architecture (HLA), or Test and Training Enabling Architecture (TENA) simulation/experiment environments. The complexity of a virtual system of systems experiment environment is unlikely to be supportable using simple scripted system data exchanges.
The objective of this effort is to define these near-term and far-term capabilities and develop only those essential near-term capabilities for segment integration and verification.
Operational Description

Near-Term Capability Description
The ISSE employs a core web services interface. External service consumers interact with this interface to obtain customized stimulator services. Provision for direct access to the stimulator component web services interface is required. This requirement supports component re-use in other infrastructures.
The ISSE employs composable stimulator components. The components feature a web services interface that serves to encapsulate the model, simulation, database, etc., and provide for composition of the stimulator component at environment build time. Modification of the stimulator application at run time is not required. Control of the stimulator component is required during run time. There is a clear distinction between component composition and component control. Composition implies the creation of service chains, links to external data resources, or similar configuration of complex behavior models and simulations; actions that are difficult or impossible to orchestrate in real time. This is different from the simple exposure of a service or process control at the service interface or to a proprietary simulation control interface.
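The composition-versus-control distinction can be made concrete with a small sketch. Everything below is illustrative: composition wires a component to data resources and a service chain at environment build time, while control exposes only simple run-time operations at the service interface.

```python
# A minimal sketch of the composition-versus-control distinction. Class and
# method names are illustrative assumptions, not the ISSE component interface.
class StimulatorComponent:
    def __init__(self, name: str):
        self.name = name
        self.data_sources: list[str] = []
        self.downstream: list["StimulatorComponent"] = []
        self.running = False

    # --- composition: performed once, at environment build time ---
    def compose(self, data_sources: list[str],
                downstream: list["StimulatorComponent"]) -> None:
        self.data_sources = data_sources      # links to external data resources
        self.downstream = downstream          # service chain membership

    # --- control: exposed at the service interface during run time ---
    def start(self) -> None:
        self.running = True

    def set_parameter(self, key: str, value: float) -> None:
        # simple exposure of process control, e.g. a message rate
        print(f"{self.name}: {key} <- {value}")

    def stop(self) -> None:
        self.running = False

# Build time: compose a two-component string, then control it at run time.
feed = StimulatorComponent("track-feed")
gateway = StimulatorComponent("interface-gateway")
feed.compose(["scenario_db://baseline"], [gateway])
feed.start()
feed.set_parameter("message_rate_hz", 5.0)
feed.stop()
```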
The ISSE interfaces to the user’s GUI through the bus’ core web services interface. Infrastructure services support management of the bus and stimulator components, as well as composition of the bus and stimulator components.
The ISSE provides an environment configuration report service. This service provides information relating to stimulator component composition data, model or simulation version data, database version data, bus orchestration data, and component deployment data.
The ISSE provides a simulator component composition service. The simulator component composition service provides the service consumer the capability to control those simulation elements exposed to service consumers, providing a level of service customization.
The ISSE provides a bus orchestration service. This service coordinates the behaviors of the stimulator components.
The ISSE provides a service consumer notification service. This is an event notification service provided to service consumers.
The ISSE provides a simulator component deployment service. This service supports automated deployment of a set of stimulator components.
The ISSE and stimulator components have configurable data monitoring, acquisition, and storage capabilities.
The ISSE supports third party stimulator service requests through its native web services interface. Third party applications may be COTS test automation tools capable of interacting with a SOA interface.
The ISSE supports interaction directly with a stimulator component by a third party application.
The ISSE supports real-time stimulation of segment interfaces.
The ISSE provides the capability to stimulate segment interfaces in non-real-time as well as against a modified epoch time reference point.
The ISSE supports automated operation by software test automation tools such as HP Quality Center via the bus’ core web services interface.
The ISSE provides an automated archive service.
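The near-term core service set above can be summarized in one place. The sketch below expresses those services as a single interface so their relationships are easy to see; the method names and signatures are assumptions made for illustration, not the ISSE interface definition.

```python
# Hedged sketch: the near-term core services described above, expressed as a
# single Python protocol. Names and signatures are illustrative assumptions.
from typing import Protocol

class IsseCoreServices(Protocol):
    def environment_configuration_report(self, environment_id: str) -> dict:
        """Composition, model/simulation version, database version,
        orchestration, and deployment data for a stimulation environment."""
        ...

    def compose_component(self, component_id: str, exposed_parameters: dict) -> None:
        """Simulator component composition: customize exposed elements."""
        ...

    def orchestrate(self, environment_id: str, scenario: str) -> str:
        """Bus orchestration: coordinate stimulator component behaviors."""
        ...

    def subscribe(self, consumer_endpoint: str, event_types: list[str]) -> None:
        """Service consumer notification."""
        ...

    def deploy(self, component_ids: list[str], target: str) -> None:
        """Automated deployment of a set of stimulator components."""
        ...

    def archive(self, environment_id: str) -> str:
        """Automated archive of logged data and configuration artifacts."""
        ...
```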
Future Capability Description
System program documents convey the concept of developing a “system in the loop” test capability for the system prior to operational deployment. A “test bed” supports the system level test capability.
The concept of a test bed implies that a system or portion of a system is placed in an environment that exercises the system under test (SUT) as if it were in a real world situation. The test bed provides interfaces to other systems and/or portions of the same system that stimulate the SUT and accept and respond to outputs from the SUT.
In moving closer toward the real world environment, network components that are geographically distributed will comprise the test bed, removing the collocation requirements needed for earlier testing.
To establish the system test and verification environment one must identify a system test bed where the entire set of entities can be assembled and connected in their near operational state. This can be a virtual environment made up of several integration laboratories or a single physical site.
The ISSE fits into the above system test environment as a part of the integration lab. The ISSE may evolve into a component of a test bed as testing evolves, allowing the use of actual external systems rather than simulators.
The system distributed test bed concept extends the integration lab to support program objectives in future development spirals of the Interface Stimulation Service Bus.

Definitions
Automated Archive Service – Archives are analogous to logs. Artifacts may be required as test evidence or for debugging. This service automates the collection and organization of the data logged by the different interface stimulator components, whatever they may have collected; it wraps it all up in a neat package and may even submit it to a CM service interface.
Bus Orchestration Service – If behaviors must be synchronized across the various interface stimulator components, this is the service responsible for that coordination. This service may be the timing service itself, or at least is very tightly coupled to it. In an HLA it is similar to what the RTI is responsible for.
Component Strings – A component string is a concept where two or more atomic service components are engaged in a service contract to provide a complex service.
Composition – The creation of service chains, links to external data resources, or similar configuration of complex behavior models and simulations.
Environment Configuration Report Service – Captures the test component environment attributes: tool versions, operating systems, serial numbers of hardware. Supports re-execution of a test precisely as it was originally instantiated.
ISSE (Interface Stimulator Services Enterprise) – A loosely coupled, modular set of simulation/stimulation service components and a core set of management services.
Service Consumer Notification Service – This is the service that notifies the service consumer about alerts and could even provide the service output.
Simulator Component Composition Service – A service that employs an intelligent agent to assist a service consumer in composing another service. Services that publish their internal service process models can be customized to a specific consumer’s needs via this service mechanism.
Simulator Component Deployment Service – A service that might be employed to deploy atomic services to a test bed. Conversely, the infrastructure team may deploy the services and then this service is not required.
Stimulation Components – Models, simulations, tools, and hardware that provide the stimulus for the external interfaces.
Stimulator Components – See Stimulation Components.


Opinion on “Building a Systems Integration Lab”

LinkedIn discussion in the System Integration and Test Group

A System Integration Lab (SIL) is a “System”. It is an element of the Test System, in a namespace something like “Program::Enabling System::Test System::SIL”, parallel to “Program::System of Interest”. It is likely classified as a facility which enables integration-level testing of components and possibly much more (e.g., acceptance, system, and component-level testing), or quite possibly testing of the whole of your System of Interest. It may provide a SW or HW execution environment for the components of your System of Interest and for test components of the Test System. It may enable test processes, methods, and tools. In the SIL you may plan, manage, design, implement, execute, log, and report the integration and test activities called for in the development plan(s) of the System of Interest. The SIL may support all of the lower level processes (e.g., configuration management, data management, resource management, calibration management, etc.) deemed necessary by your program. Document these requirements and develop a system which satisfies them! It is suggested that 50 to 75% of defects in a system occur during the requirement analysis and design life cycle phases. “Pay me now or pay me later.” “An ounce of prevention or a pound of cure.”
Like all systems, a Test System has an “Architecture”. Its architecture description describes and specifies the test system’s components and component interactions in sufficient detail to realize the objective of your investment in developing the system. The SIL is an element of the test system (I repeat myself, I know); the test system is the context within which the SIL is realized.
The descriptions of your architecture may observe a framework similar to the DoD or MoD Architecture Framework (this framework helps you understand what you need to think about; there are other frameworks just as valuable). Use a framework as a guide to help you manage the problem space and tailor the framework to suit the objectives you have set for your architecture. I would suggest that the objectives of your SIL be bounded by a risk versus investment analysis. You must bound your scope!
Do not expend your precious program resources on pedantic minutiae; set priorities for your resources and invest in accordance with those priorities. Document these requirements and document how their satisfaction is to be determined. Write test requirements and test cases for EVERY SIL requirement. Use continuous integration as you develop your SIL to measure when you have succeeded in satisfying the SIL’s user needs. You may discover you over-specified! STOP when the need is satisfied! If you don’t measure, you won’t know when or if you ever finish. Describing the architecture must be in your plan; it is how you will be successful. If it isn’t in your plan, you won’t do it!
Contextual Architecture: What is/are the “Goal(s)” of the SIL user? Are there measures of goal success? What capability must the SIL provide to realize the user goal(s)? Document the “Need” that the SIL must satisfy and how to measure satisfaction. If this sounds like I am suggesting writing business use cases, you understand me. Write these use cases, then write the specifications to satisfy them.
It won’t take much Google effort, using terms like “GAO”, “program failure”, “concept of operation”, and “cancellation”, to appreciate that many major acquisition efforts failed because an adequate Concept of Operations was never created before an attempt to build the system began.
Now that we have established that the SIL is a system, we know the balance of the systems engineering activities required to be successful; most of these have already been addressed in some detail by others. This isn’t rocket science, just systems engineering. Model-based Systems Engineering may help you be successful in developing your Test System.

Test System Architecture

A Tool View

The objective of a Test activity is to produce information about a structural or behavioral property of a test item.  Figure 1 illustrates associations of several elements of the UML metamodel core infrastructure and the SysML Requirement model element.  Test components realize the behavior of a test case and elicit a behavior from the SUT.  The context of the SUT’s behavior and the manifestation of its behavior form objective evidence of conformance to a requirement.


Figure 1 – Requirement, Feature, Test Case Triad
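A minimal sketch of the triad described above: a test case is bound to a requirement, a test component realizes the test case’s behavior, and the recorded SUT behavior (with its context) forms objective evidence of conformance. The classes and identifiers are illustrative assumptions, not the UML/SysML metamodel.

```python
# Illustrative sketch of the requirement / test case / evidence triad.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    identifier: str
    text: str

@dataclass
class TestCase:
    identifier: str
    verifies: Requirement
    stimulus: str          # behavior the test component applies to the SUT
    expected: str          # behavior the SUT must manifest

@dataclass
class Evidence:
    test_case: TestCase
    observed: str
    context: dict = field(default_factory=dict)

    @property
    def conformant(self) -> bool:
        return self.observed == self.test_case.expected

req = Requirement("SYS-042", "The SUT shall acknowledge a track update within 2 s.")
tc = TestCase("TC-042-01", req, stimulus="send track update", expected="ack within 2 s")
ev = Evidence(tc, observed="ack within 2 s", context={"build": "r1234"})
print(ev.conformant)  # True: objective evidence of conformance to SYS-042
```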

Evaluating test information produces an assessment of a quality attribute of a test item.  Testing is a necessary activity throughout a product’s development life cycle, as mistakes may occur in all stages of the development life cycle.  Mistakes translate into product defects.  Defects negatively impact a product’s quality and diminish the value perception of the acquirer.

Test is a sub-process of the ISO/IEC 15288:2008 technical processes Integration, Verification and Validation.  A “Test System”, Figure 2, has the responsibility to provide test services to a “System of Interest” during its life cycle.  These test services produce the information to assess the quality of the system of interest and the value proposition for the acquirer.


Figure 2 – Enabling Systems

Quality is an attribute that has value to an acquirer.  Quality factors are diverse, and most require testing to assess; testing itself has diverse methods.  Figure 3 illustrates a Test Method model.  Test represents a substantive component of a product’s cost.  The quest for perfection is a noble vision, though seldom is perfection valued at its cost.  These realities drive a test system to be inherently effective and efficient and to produce the greatest product quality coverage within given project constraints.  Achieving efficiency and effectiveness is driven in large part by test automation and the tools that provide the automation infrastructure elements of a Test System.


Figure 3 – Test Method Concepts

Use Case

Figure 4 presents the use cases where the System of Interest employs the services provided by the Test System to achieve the goals of its technical development processes.  The three prime use cases represent execution of the integration, verification, and validation technical processes during the life cycle of the System of Interest.  Each of these use cases has unique properties in its underlying service composition, though all employ core test services.


Figure 4 – Test System Services

Test System Architecture

Overview

The architecture of a Test System[i] consists of numerous system elements.  Figure 5 and Figure 6 are excerpts from the ISO/IEC 15288 standard describing a system.  Figure 7 is an example of an architecture description of a notional Test System in a program context.  Our focus is the tool components/elements of a test system architecture.


Figure 5 – System Structure Metamodel


Figure 6 – System Structure Hierarchy


Figure 7 – Test System Architecture in a Notional Context

Tools are “devices for doing work”, according to the Encarta Dictionary.  They have an intended use, and when used accordingly they increase the effectiveness and efficiency of an activity.  Frequently, they replace mandraulic[ii] task elements with automated task elements.  In the construction industry, the ubiquitous hammer has been relegated to the tool chest by the air or battery powered nailer.  As a process execution resource, a powered nailer is clearly far more expensive than the hammer it replaces, yet it transforms a mandraulic and highly variable task into a repeatable automated one that produces immense value at the scale of its use in the home construction industry.  If only one nail needs to be driven, the clear choice is the hammer, but scale the task to 100, 1,000, or 10,000 nails and the return on investment is obvious.

Purpose

Let us view work as a process activity responsible for transforming inputs into outputs: parts into an assembly or, more abstractly, a problem into a solution.  Work should produce value.  The output of the process should have a greater value than the sum value of its inputs and expended resources.  When inputs are concrete and the problem deterministic, rarely does a recurring process relying on mandraulic tools achieve the return on investment that an automated tool provides.  Our focus is on how these tools, as elements of a test system’s infrastructure, add value to a program.

The execution of the ‘Integration’, ‘Verification’ and ‘Validation’ technical processes[iii] of a system’s life cycle processes falls largely to the test system, an enabling system employed by a test organization, as a resource, to accomplish test sub-processes.  Figure 8 illustrates ‘Process Concepts’ that are fundamental attributes of all ISO/IEC 15288:2008 defined process models.  Appreciate the role a tool has as a resource performing an activity of a process.  Either a person or a tool may be assigned a role; both are resources.


Figure 8 – ISO/IEC 15288:2008 Process Concepts

Figure 7 contains a number of tools key to the test system architecture.  Some (e.g., configuration, build, requirement, change, and defect management tools) provide services to the test system, while others are responsible for processes performed by the test system (e.g., test management, test execution, reporting).  At the system level of a project, the execution of the integration, verification, and validation technical processes consumes significant resources.  An anecdotal accounting of this resource consumption suggests more resources are consumed planning, managing execution, and providing test status than in the actual execution of the test procedures.  Clearly, our test system architecture requires a competent tool infrastructure capable of off-loading mandraulic tasks from the test organization’s staff.  What tasks are well suited to automated tools?  The test system supports integration of system/sub-system elements into capabilities, so a test architect will benefit from a test generation tool supporting the development, planning, and management of a functional release plan from the system’s design artifacts.  Test engineers define integration procedures where the definition of test cases for the innumerable message combinations and permutations is a formidable bookkeeping task; tools are particularly adept at solving this test generation problem, and combinatorial analytics tools are adept at coping with the test case expansion problem that test generation creates.  Perhaps the key problem facing the test system at the system level of the integration, verification, and acceptance test processes is the sheer number of information elements that must be managed; the test system’s tool infrastructure has this problem as its key objective.  This is a problem computer-based tools are particularly adept at solving, and they deliver considerable value.
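The bookkeeping burden of message combinations is easy to see in a small sketch. The field names and values below are invented; exhaustive enumeration grows multiplicatively, which is exactly the test case expansion that combinatorial analytics tools (e.g., pairwise coverage) are used to tame.

```python
# Illustrative sketch of the bookkeeping problem: enumerating message field
# combinations for interface test cases. Field names and values are invented.
from itertools import product

message_fields = {
    "msg_type":  ["track_update", "status_request", "heartbeat"],
    "priority":  ["routine", "priority", "immediate"],
    "encoding":  ["xml", "binary"],
    "link":      ["primary", "backup"],
}

def exhaustive_test_cases(fields: dict[str, list[str]]) -> list[dict[str, str]]:
    names = list(fields)
    return [dict(zip(names, combo)) for combo in product(*fields.values())]

cases = exhaustive_test_cases(message_fields)
print(len(cases))   # 3 * 3 * 2 * 2 = 36 cases, even for this tiny interface
print(cases[0])     # {'msg_type': 'track_update', 'priority': 'routine', ...}
```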

Test System Services

As illustrated in Figure 9, the test system owns a test service process element, which is treated as a behavioral property of the block.  Test service is a sub-process representing capabilities delivered by a collection of process activities.  Many of these activities are accomplished by test automation tools directly or through the interaction of an engineer performing a role of the test organization and a test automation tool.


Figure 9 – Test Sub-process Activity Composition

Process Activity Definitions

Test Requirements Analysis

Test requirements analysis activities produce a specification for the Test System’s test architecture to implement.  The specification entails test design, test case, test procedure, and test data requirements.  It addresses both structural and behavioral features of the test architecture.

Test Data Analysis

Test data analysis activities produce a specification of the data employed by test cases.  Test data may take the form of a test input, a test output, or a test oracle data assertion.
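As a minimal illustration of the three roles named above, the sketch below pairs a test input and expected output with an oracle expressed as an assertion callable. The names and tolerance are assumptions made for the example.

```python
# Illustrative sketch of test input, test output, and test oracle data.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestDatum:
    test_input: dict                      # data driven into the SUT
    expected_output: dict                 # data the SUT should produce
    oracle: Callable[[dict, dict], bool]  # assertion judging the actual output

def tolerance_oracle(expected: dict, actual: dict, tol: float = 0.01) -> bool:
    return abs(expected["range_km"] - actual["range_km"]) <= tol

datum = TestDatum(
    test_input={"track_id": 7, "range_km": 120.0},
    expected_output={"track_id": 7, "range_km": 120.0},
    oracle=tolerance_oracle,
)
actual = {"track_id": 7, "range_km": 120.004}
print(datum.oracle(datum.expected_output, actual))  # True
```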

Test Planning

Test planning activities specialize the project processes defined by ISO/IEC 15288 for the test domain.  Planning activities address managing resources and assets that realize or support the test system.  Tools supporting planning provide insight to resource conflicts, costs, milestones and schedules.

Test Tracing

Test tracing activities produce coverage maps of the test architecture specification.
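A coverage map can be as simple as the sketch below: given trace links from test cases to requirements, it reports which requirements are covered and which are not. The trace structure and identifiers are illustrative assumptions.

```python
# Illustrative sketch of a requirements-to-test-case coverage map.
trace = {
    "TC-001": ["SYS-010", "SYS-011"],
    "TC-002": ["SYS-011"],
    "TC-003": [],                 # procedure not yet traced to a requirement
}
requirements = ["SYS-010", "SYS-011", "SYS-012"]

covered = {req for reqs in trace.values() for req in reqs}
coverage_map = {req: sorted(tc for tc, reqs in trace.items() if req in reqs)
                for req in requirements}

print(coverage_map)                         # SYS-012 maps to an empty list
print(sorted(set(requirements) - covered))  # ['SYS-012'] -> uncovered requirement
```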

Test Generation

Test generation activities produce concrete test implementations realized from abstract test specifications.
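One hedged illustration of “concrete from abstract”: an abstract test specification containing only parameter placeholders is rendered into executable concrete procedures by binding values. The template and bindings are invented for the example.

```python
# Illustrative sketch: render an abstract test specification into concrete tests.
abstract_spec = {
    "name": "interface_latency",
    "steps": ["connect to {link}", "send {msg_type}", "assert latency < {limit_ms} ms"],
}
bindings = [
    {"link": "primary", "msg_type": "track_update", "limit_ms": 200},
    {"link": "backup",  "msg_type": "track_update", "limit_ms": 500},
]

concrete_tests = [
    {"name": f"{abstract_spec['name']}_{i}",
     "steps": [step.format(**b) for step in abstract_spec["steps"]]}
    for i, b in enumerate(bindings, start=1)
]
for test in concrete_tests:
    print(test["name"], "->", test["steps"])
```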

Test Management

Test management activities are closely related to their peer project management process activities.   Core tasks are estimation, risk analysis and scheduling of test activities.  The latter task is a typical capability of a test management tool.  Test management tools frequently possess an extensive portfolio of capabilities (e.g., requirements verification status reporting, status dashboards, defect metrics, test complete/not complete, etc…).

Test Execution

Test execution activities are responsible for executing the test management plan and the test specifications (i.e., test script specifications and test procedure specifications).

Test Reporting

Test reporting activities include test reports, test logs, test data analysis reports, etc.  Test reporting tools typically output to stakeholder dashboards.

A Reference Architecture

The Test System’s tool infrastructure abstracts tools into tool categories.  These categories are: Test Management, Test Execution, Status Dashboards, Test Data Analysis, Test Reporting, Defect Reporting, Test Generation, Requirements Management, Change Management, Configuration Management, and Build Management; this list is not intended to be exhaustive.  Not all of these tools are contained in the Test System architecture, though the Test System relies on services provided by these tools to automate key process activities.


Figure 10 – Generic System Level Tool Infrastructure


[i] A Test System is an Enabling System, as defined by ISO/IEC 15288:2008, Systems and software engineering – System life cycle processes.  A test system provides support to the system of interest during its full life cycle.  Specifically, it provides test services in the form of test sub-processes to the life cycle technical processes of Integration, Verification, and Validation.  These technical processes apply across the system hierarchy (i.e., component, system element, system) as well as the levels of test (i.e., component, integration, system, and acceptance).  See IEEE 829-2008, IEEE Standard for Software and System Test Documentation.


[ii] Mandraulic – an informal term used as an adjective meaning ‘labour intensive’, according to en.wiktionary.org.


[iii] ISO/IEC 15288:2008, Systems and software engineering – System life cycle processes, p. 12, Figure 4, Clauses 6.4.5, 6.4.6, and 6.4.8.