Test Requirements – Defining a Product’s Requirement Acceptance Criteria

Acceptance Criteria – The criteria a stakeholder employs when assessing a product feature to assert that the feature satisfies their need.  A product feature is specified by a system requirement, and verification of the feature’s implementation is required for the product to be accepted by the stakeholder.

The question to the stakeholder – “What objective evidence, produced by the test case, will convince you that the feature specified by the requirement has been satisfied?”

The response could identify a scenario (an instance of a Use Case) and the desired outcome of that Use Case scenario.  It may be a set of input conditions to the product and a measurable outcome, meaning the input and output can be formally defined.
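
As a minimal illustration of a formally defined input and a measurable outcome, the Python sketch below invents a hypothetical “low-fuel warning” feature (the feature, its threshold and the function name are assumptions for illustration only, not any real product API) and expresses the acceptance criterion as a concrete input condition checked against an agreed expected result.

def low_fuel_warning(fuel_level_pct, threshold_pct=10.0):
    # Stand-in for the product feature under test (hypothetical, not a real API).
    return fuel_level_pct <= threshold_pct

def test_low_fuel_warning_acceptance():
    fuel_level_pct = 9.5         # formally defined input condition (the scenario instance)
    expected = True              # measurable outcome agreed with the stakeholder (the oracle)
    observed = low_fuel_warning(fuel_level_pct)
    assert observed == expected  # objective evidence: the observation matches the oracle

if __name__ == "__main__":
    test_low_fuel_warning_acceptance()
    print("acceptance criterion satisfied for the sampled scenario")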

A product requirement expresses a precise attribute set of a product feature.  Typically, these attributes are framed as desired behavior.  These feature attributes may be expressed in terms of functionality, inputs, outputs, actions, sequence, qualities and constraints.

Frequently, requirement statements lack the precision needed to produce unambiguous evidence, through test case execution, that the product complies with its specification (a collection of requirements).  Requirements are often expressed implicitly rather than explicitly.  This ambiguity frequently results in conflict during acceptance testing, when interpretations of the requirement statements emerge.

An approach to clarifying the intent of imprecise requirement statements is to author acceptance criteria that explicitly define the basis for asserting that the product satisfies its specification.  The acceptance criteria of a functional requirement might be expressed using a model-based behavior specification.

Employ explicit acceptance criteria early in the product development period and obtain stakeholder agreement on them.  Agreement on acceptance criteria should be established at the requirements review phase of a project.  Ideally, requirements should focus on the problems requiring solutions in the stakeholder domain rather than on the system solution.  Stakeholders must be held accountable for their requirement statements and agree to an acceptance criterion for each and every requirement prior to the commencement of high-level system design.  This precludes discord late in the program, when the stakeholder’s satisfaction is critically important.

An approach to developing acceptance criteria is to ask questions from the user/stakeholder viewpoint.  These criteria will answer the stakeholder question “How do we know the product meets its specification?”

What measured or observed criteria will convince stakeholders that the system implementation satisfies the requirement?  Is there a specific operational situation where a system’s features will provide a user with a capability that accomplishes a user mission objective?  Is there an output, provided by a system feature, that satisfies a stakeholder criterion and can be measured against a stakeholder standard?

Answering these questions provides a foundation for test case development.  The first answer forms the system context for a test case.  What objects in the system’s environment are interacting with it and how are they interacting?

The second answer provides the test oracle that an arbiter uses to assert whether the test case has “passed” or “failed”.  Acceptance criteria define the successful outcome of a test case.  Each requirement should have an acceptance criterion statement.  Criteria must be measurable, either qualitatively or quantitatively.  Where the measure is qualitative, it is imperative to reach agreement on the definition of this subjective evaluation.
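
A hedged sketch of the arbiter’s verdict, assuming a hypothetical quantitative measure (a response time with an agreed tolerance; all numbers are illustrative): the verdict is asserted only by comparing the observation against the agreed oracle.

from dataclasses import dataclass

@dataclass
class Oracle:
    expected: float    # expected outcome agreed with the stakeholder
    tolerance: float   # quantitative criteria need an agreed tolerance

def verdict(observed, oracle):
    # The arbiter asserts "passed" or "failed" by comparing observation to oracle.
    return "passed" if abs(observed - oracle.expected) <= oracle.tolerance else "failed"

response_time_oracle = Oracle(expected=100.0, tolerance=5.0)   # e.g., 100 ms +/- 5 ms
print(verdict(98.0, response_time_oracle))    # passed
print(verdict(112.0, response_time_oracle))   # failed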

These answers drive the test strategy, which culminates in a demonstration or test producing the evidence that satisfies the stakeholder’s expectation, thereby satisfying the acceptance criterion for the requirement.

Acceptance criteria must be explicitly associated with a requirement and formally acknowledged by the stakeholder as adequate.

The acquirer’s acceptance criteria for a product should be stated in the acquirer’s product specification at the time of their request for proposal.  In US DoD MIL-STD-961E, section 4 of the system’s specification contains the acceptance criteria, in the form of a test method and a test specification, for every requirement in section 3.  If the acquirer of the product has not stated acceptance criteria for all requirements in their product specification, then the proposal must contain scoping acceptance criteria to ensure that the acquirer understands what the proposal will deliver, both in terms of a product and of the evidence that the product satisfies the need stated in the product’s specification.  In any event, the acceptance criterion for every stakeholder requirement must be stated at the product’s requirements review milestone and acknowledged as acceptable by the acquirer of the product.  Delaying the establishment of acceptance criteria levies a significant risk of “scope creep”.  As the design matures and its capabilities begin to be revealed, the acquirer, or the acquirer’s representatives, may realize that the product specification they provided will not fully satisfy their needs, and the acceptance criteria are at risk of becoming far more costly to achieve and verify.

IEEE Std 829™-2008, IEEE Standard for Software and System Test Documentation, calls for the development of a Test Traceability Matrix.  This matrix establishes the association between each system requirement and the test responsible for producing the evidence that satisfies the requirement’s acceptance criterion.  The matrix has a hierarchy that parallels the requirement decomposition hierarchy of system, sub-systems and components.

[Figure: Test Traceability Matrix]
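
A minimal sketch of such a matrix, assuming invented requirement and test identifiers: the nesting mirrors the system / sub-system / component decomposition, and the walk reports any requirement that has no associated test evidence.

traceability = {
    "SYS-REQ-10": {
        "tests": ["SYS-TC-10"],
        "children": {
            "SUB-REQ-10.1": {"tests": ["SUB-TC-11"], "children": {}},
            "SUB-REQ-10.2": {"tests": [], "children": {}},   # no evidence yet
        },
    },
}

def untraced(node):
    # Walk the hierarchy and report requirements with no associated test case.
    gaps = []
    for req, entry in node.items():
        if not entry["tests"]:
            gaps.append(req)
        gaps.extend(untraced(entry["children"]))
    return gaps

print(untraced(traceability))   # ['SUB-REQ-10.2']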

The need for requirement acceptance criteria does not end at the acquirer’s specification.  All engineered requirements need acceptance criteria.  The acceptance criterion unambiguously informs the next lower tier in the engineering hierarchy of its stakeholder’s expectations.

The incoming acceptance criteria are a principal driver of the test strategy at that level of the product’s feature realization hierarchy.

Acceptance criteria such as “successful” are qualitative.  In this example there is a need to quantify “success”.  We can measure “profitability” by assessing the “return on investment” and stating that if each dollar invested returns 10% (the standard used to assert a verdict), then success has been achieved.  Success might also be measured by the number of process cycles performed in a period of time.

Yes, these quantitative criteria are measures that apply to non-functional requirements.  The point is that by measuring the performance of a function, which may be a derived performance measure, there is implicit verification of the functional requirement statement.
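
Using the illustrative figures above (a 10% return-on-investment threshold; the cost and gain values below are invented), the verdict on “success” reduces to a simple, agreed computation:

def return_on_investment(gain, cost):
    # ROI expressed as a fraction of the cost (illustrative definition).
    return (gain - cost) / cost

ROI_THRESHOLD = 0.10                       # the agreed standard used to assert a verdict
cost, gain = 100_000.0, 112_000.0          # invented figures for illustration
roi = return_on_investment(gain, cost)     # 0.12
print("success" if roi >= ROI_THRESHOLD else "not yet successful", f"(ROI = {roi:.0%})")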

The acceptance criterion defines the domain in which the product is employed.  It describes how the product’s features are measured or assessed.  Acceptance criteria may express constraints to be imposed on the product when its features are in use.  Acceptance criteria form the test case development framework.  The acceptance criteria statements explicitly communicate our interpretation of stakeholder requirements.

Neither the current SysML standard nor the UML 2 Testing Profile addresses ‘acceptance criteria modeling’, directly or by inference.  A description of a Use Case scenario, with an accompanying user goal and outcome satisfaction criteria expressed in a classifier, seems to be required.  The UTP’s specification of a ‘Test Context’, which is both a StructuredClassifier and a BehavioredClassifier, seems fit for purpose.  However, SysML does not include the ‘Test Context’ in its profile; it includes only the ‘Test Case’, which is an ‘Operation’ or ‘Behavior’ metaclass.

The System according to ISO, a short pedantic story

A System has a life cycle (“evolution of a system, product, service, project or other human-made entity from conception through retirement” ISO 15288), which embraces a life cycle model (“framework of processes and activities concerned with the life cycle that may be organized into stages, which also acts as a common reference for communication and understanding” ISO 15288). Stages of that life cycle contain execution instances of processes (“set of interrelated or interacting activities which transforms inputs into outputs” ISO 9000). Processes always have a purpose (“high level objective of performing the process and the likely outcomes of effective implementation of the process” ISO 12207) and an outcome (“observable result of the successful achievement of the process purpose” ISO 12207), and typically these are a deliverable of some sort related to the system, or the system itself in a new state (e.g., designed, integrated, verified, validated). The purpose always addresses a stakeholder objective (e.g., satisfy a need, achieve a goal). An organization (e.g., a corporation, a business, a team) has commonality with a system; it meets many elements of the definition of a system (“combination of interacting elements organized to achieve one or more stated purposes” ISO 15288).

Entities perform processes; this is a role for which an entity is responsible. The responsibility is typically assigned by governance that controls the process (e.g., a contract, an activity, a task, a procedure). The execution of a process requires resources (“asset that is utilized or consumed during the execution of a process” ISO 15288). Resources might be schedule, budget, tools, facilities, people, etc. A resource has a role in the execution of a process. Roles perform or enable (when they are consumed by the process) process activities.
Execution of a thread of activities constitutes a process execution and delivers an outcome.

A system may require other systems during its life cycle and depends on their execution of processes, for which they are responsible, to accomplish a stage in its life cycle. These systems are “Enabling Systems” (“system that supports a system-of-interest during its life cycle stages but does not necessarily contribute directly to its function during operation” ISO 15288).
One such enabling system is the Test System. While it is unlikely to play a role in the System of Interest’s operational context, it plays a key role in the development stage of the System of Interest.
The test system provides services (“A system may be considered as a product or as the services it provides” ISO 15288) by performing processes. Obviously its key service is the “Test Service”. This service can be instantiated to provide specialized services such as integration, verification and validation services. These are elements of their parent processes. A test system has a number of elements that serve as resources to execute the processes for which it is responsible. These elements are things such as Facilities, Tools, Specifications, etc.

Perhaps if the ISO standards did not have such a high cost of entry, more people would avail themselves of the resource. I was fortunate enough to work for a corporation that made them available to me and I used that opportunity to learn as much as I possibly could.

My frustration is with the proliferation of jargon that obfuscates communication. INCOSE, ISTQB, IREB all have some level of harmonization with the ISO standards and the lexicon of the systems and software development domains. In my mind, mastery of the lexicon of the relevant domains is important to effective communications.

Why do I bother with this? Because if you look at the world from a certain perspective, you find things have more in common than they have differences. Abstraction can reveal the commonality. Commonality helps reveal patterns, and patterns are reusable. Human beings, for reasons I do not claim to understand, insist on differentiating themselves and the things they create from other things, and they guard them fervently. My success has always been in finding the commonality, identifying the pattern and reusing a solution I’ve previously employed with success. Often my solutions come from others who have gone before me. I do not pride myself on my inventions, though I have a few; rather, I pride myself on my humility to embrace the ideas others have forged.

Opinion on “Is System Debugging a valid concept?”

LinkedIn discussion from the System Integration and Test group

“Debugging” is a valid concept.

IMHO, “Debugging” is not on par with “Design”.  “Debugging” is not a technical process; it is an outcome of the execution of a technical process.

ISO/IEC/IEEE 24765 Systems and software engineering – Vocabulary defines “Debug” as:

to detect, locate, and correct faults in a computer program.

Fault is defined as:

1. manifestation of an error in software. 2. an incorrect step, process, or data definition in a computer program. 3. a defect in a hardware device or component. “bug” is listed as a synonym for “fault”.

There is nothing prohibiting the extension of the term to all elements of a system (Hw, Sw, Design, Requirements, etc…).

“Bugs”, or faults, are found by executing test cases against a system element (the test case’s SUT, the system under test) and comparing the expected result (test oracle) against the observation. The expected result is derived from the test basis, and if the observation does not conform to the oracle, then the SUT is faulty. The bug or fault must be removed or the SUT will never be found to be compliant with its requirements. And yes, IMHO, a test case can be realized as an “inspection” of a design, an abstract implementation, etc.
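
A small illustration of that detect-and-correct cycle, with an invented SUT (a sorting routine seeded with a deliberate fault) standing in for any system element:

def sut_sort(values):
    # Stand-in SUT with a seeded fault: it silently drops duplicate values.
    return sorted(set(values))

def run_test_case(sut):
    test_input = [3, 1, 3, 2]
    expected = [1, 2, 3, 3]      # expected result derived from the test basis
    observed = sut(test_input)
    return "pass" if observed == expected else "fail"

print(run_test_case(sut_sort))   # fail -> a fault exists in the SUT

def sut_sort_fixed(values):
    # Fault located and corrected ("debugged"); the SUT now conforms to the oracle.
    return sorted(values)

print(run_test_case(sut_sort_fixed))   # pass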

A “Test System” is an Enabling System of the System of Interest and has a responsibility for the production of objective evidence that the system of interest as well as its elements satisfies acceptance criteria.

ISO 15288 identifies the life cycle stages and technical processes. Test is an activity of the integration, verification and validation technical processes. Test is realized through the execution of test cases (behavior specifications realized by test components) against a SUT. Every element of a system traverses the development stage in its own life cycle and is subjected to the execution of technical processes. An outcome of “Integration” is:

c) A system capable of being verified against the specified requirements from architectural design is assembled and integrated.

There is an implication that to “be capable of being verified” and subsequently “accepted” by a stakeholder that the system element must be brought into compliance with its requirements or “to be free of faults”. Faults/bugs in a system element are detected and corrected, “debugged”, as an outcome of the execution of process activities and tasks.

There is a new ISO standard under development for software testing, ISO/IEC/IEEE 29119; it currently consists of four volumes. Volume 1, Annex A, provides a depiction of the role of test in V&V. The principles of the standard can apply to all engineering domains, not just software (IMHO). I’m not asserting that the standard is the holy grail, but it does have some good content. There is some information in the public domain on the standard.

ISO/IEC/IEEE 24765 Systems and software engineering — Vocabulary defines “test” as “an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component”.

The evaluation of the test’s observation may conclude that the observation does not conform to the expected result. The expected result is confirmed to be consistent with the test basis and in this case the existence of a fault/bug is confirmed.

One objective of the ISO/IEC/IEEE standards is to establish a common framework and lexicon to aid communication among diverse domains. There is still much work to be done towards this end, and there are some very committed individuals striving to harmonize the ISO standards. There is value in learning a common language.

Test is not analogous to the activity in which young children engage on Easter Sunday. That is an unstructured and random sequence of behavior in which the discovery of an egg is merely happenstance. Many individuals engage in such behavior and call it test.

If bug=defect=fault, then debugging=dedefecting=defaulting

Food for thought.

NIST published a comprehensive report on project statistics and experiences, based on data from a large number of software projects:

70% of the defects are introduced by the specifications

30% are introduced later in the technical solution

Only 5% of the specification defects are corrected in the specification phase

95% are detected later in the project or after delivery, where the cost of correction is on average 22 times higher than a correction made directly during the specification effort

Find the requirement defects in the program phase where they occur and there will be fewer defects to find during integration test or system test.
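
Back-of-the-envelope arithmetic with the figures quoted above (assuming 100 specification defects and an arbitrary cost unit; only the 5%/95% split and the 22x multiplier come from the cited report) shows why prevention dominates detection:

spec_defects   = 100                 # assumed population of specification defects
cost_early_fix = 1.0                 # arbitrary cost unit for a fix during specification
cost_late_fix  = 22.0 * cost_early_fix

fixed_early = 0.05 * spec_defects    # only 5% are corrected in the specification phase
fixed_late  = 0.95 * spec_defects    # 95% are detected later, at 22x the cost

actual_cost = fixed_early * cost_early_fix + fixed_late * cost_late_fix   # 5 + 2090 = 2095
ideal_cost  = spec_defects * cost_early_fix                               # 100
print(f"actual cost ~ {actual_cost:.0f} units vs {ideal_cost:.0f} units if all were caught early")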

A work product inspection is a Test. It employs the static verification method “inspection”. ISO 29119 supports this concept. The INCOSE Guide for Writing Requirements can serve as your inspection checklist. It is also the specification for writing requirements and is therefore your Test Basis.

SEs (as the authors of the specifications) are, typically, the source of the majority of defects in a system.

Stakeholder politics plays a role in the requirement problem. Incompetence is yet another significant contributor. There are a host of factors.

Many SEs are behind the power curve. The ISO, IEEE, and INCOSE are driving SE maturity; SEs need to get on board and support these efforts.

Emphasis needs to be on prevention and not detection.

Opinion on “Building a Systems Integration Lab”

LinkedIn discussion in the System Integration and Test Group

A System Integration Lab (SIL) is a “System”. It is an element of the Test System, in a namespace something like “Program::Enabling System::Test System::SIL”, parallel to “Program::System of Interest”. It is likely classified as a facility that enables integration-level test of the components, and possibly much more (e.g., acceptance, system and component-level testing), or quite possibly testing of the whole of your System of Interest. It may provide a SW or HW execution environment for the components of your System of Interest and for Test Components of the Test System. It may enable test processes, methods and tools. In the SIL you may plan, manage, design, implement, execute, log and report the integration and test activities called for in the development plan(s) of the System of Interest. The SIL may support all of the lower-level processes (e.g., configuration management, data management, resource management, calibration management, etc.) deemed necessary by your program. Document these requirements and develop a system which satisfies them! It is suggested that 50 to 75% of defects in a system occur during the requirements analysis and design life-cycle phases. “Pay me now or pay me later”. “An ounce of prevention or a pound of cure”.
Like all systems, a Test System has an “Architecture”. Its architecture description describes and specifies the test system’s components and component interactions in sufficient detail to realize the objective of your investment in developing the system. The SIL is an element of the test system (I repeat myself, I know); the test system is the context within which the SIL is realized.
The descriptions of your architecture may observe a framework similar to the DoD or MoD Architecture Framework (such a framework helps you understand what you need to think about; there are other frameworks just as valuable). Use a framework as a guide to help you manage the problem space, and tailor the framework to suit the objectives you have set for your architecture. I would suggest that the objectives of your SIL be bounded by a risk versus investment analysis. You must bound your scope!
Do not expend your precious program resources on pedantic minutiae; set priorities for your resources and invest in accordance with those priorities. Document these requirements and document how their satisfaction is to be determined. Write test requirements and test cases for EVERY SIL requirement (see the sketch at the end of this note). Use continuous integration as you develop your SIL to measure when you have succeeded in satisfying the SIL’s user needs. You may discover you over-specified! STOP when the need is satisfied! If you don’t measure, you won’t know when, or if, you ever finish. Describing the architecture must be in your plan; it is how you will be successful. If it isn’t in your plan, you won’t do it!
Contextual architecture: What are the goal(s) of the SIL user? Are there measures of goal success? What capability must the SIL provide to realize the user’s goal(s)? Document the “Need” that the SIL must satisfy and how to measure its satisfaction. If this sounds like I am suggesting writing business use cases, you understand me. Write these Use Cases, then write the specifications to satisfy them.
It won’t take much Google effort, using terms like “GAO”, “program failure”, “concept of operations” and “cancellation”, to appreciate that many major acquisition efforts failed because an adequate Concept of Operations was never created before the attempt to build the system began.
Now that we have established the SIL is a system, we know the balance of the systems engineering activities required to be successful, most of which have been addressed in some detail by others already. This isn’t rocket science, just systems engineering. Model-Based Systems Engineering may help you be successful in developing your Test System.
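
Referring back to “STOP when the need is satisfied”: a minimal sketch, with invented SIL requirement names, in which every requirement carries a measure of satisfaction and “done” simply means every measure is met, no more and no less.

sil_requirements = {
    "Host SoI software builds":      {"satisfied": True},
    "Log and report test results":   {"satisfied": True},
    "Simulate external interfaces":  {"satisfied": False},
}

unsatisfied = [name for name, r in sil_requirements.items() if not r["satisfied"]]
print("STOP - need satisfied" if not unsatisfied else f"keep going: {unsatisfied}")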

Test Driven System Development

[Figure: Test Architecture Hierarchy]

Derive test design specifications from the abstraction tier in the development hierarchy immediately above the tier where they are employed against the design artifacts. Apply test cases against the assembled system in the same tier. A host of possible methods exists. The simplest approach may be a checklist of required capabilities and measures of performance, derived from the ConOps, applied against the System Requirements. Map problems to solutions; every problem must have a solution that addresses it. Test Engineering subjects the requirements to verification using the inspection method and, at the same time, collects the test requirements for the system’s technical requirements.

One approach to mapping is through modeling: model the problem space, the solution space, and the traces between them.
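
A minimal sketch of that mapping, with invented ConOps capabilities and system requirements: every problem-space element must trace to at least one solution-space element, and the orphans are the problems without a solution.

problem_space = ["CAP-01 detect intruders", "CAP-02 report status", "CAP-03 operate at night"]

solution_space = {
    "SYS-REQ-1": ["CAP-01 detect intruders"],
    "SYS-REQ-2": ["CAP-02 report status"],
}

addressed = {cap for caps in solution_space.values() for cap in caps}
orphans = [p for p in problem_space if p not in addressed]
print("unaddressed problems:", orphans)   # ['CAP-03 operate at night']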

Test System Architecture

A Tool View

The objective of a test activity is to produce information about a structural or behavioral property of a test item.  Figure 1 illustrates the associations of several elements of the UML metamodel core infrastructure and the SysML Requirement model element.  Test components realize the behavior of a test case and elicit a behavior from the SUT.  The context of the SUT’s behavior and the manifestation of that behavior form objective evidence of conformance to a requirement.

Figure 1 – Requirement, Feature, Test Case Triad

Evaluating test information produces an assessment of a quality attribute of a test item.  Testing is a necessary activity throughout a product’s development life cycle, as mistakes may occur in all stages of the development life cycle.  Mistakes translate into product defects.  Defects negatively impact a product’s quality and diminish the value perception of the acquirer.

Test is a sub-process of the ISO/IEC 15288:2008 technical processes Integration, Verification and Validation.  A “Test System”, Figure 2, has the responsibility to provide test services to a “System of Interest” during its life cycle.  These test services produce the information to assess the quality of the system of interest and the value proposition for the acquirer.

Figure 2 – Enabling Systems

Quality is an attribute that has value to an acquirer.  Quality factors are diverse, and most require testing to assess; testing itself has diverse methods.  Figure 3 illustrates a test method model.  Test represents a substantive component of a product’s cost.  The quest for perfection is a noble vision, though perfection is seldom valued at its cost.  These realities drive a test system to be inherently effective and efficient and to produce the greatest product quality coverage within given project constraints.  Achieving efficiency and effectiveness is driven in large part by test automation and by the tools that provide the automation infrastructure elements of a Test System.

Figure 3 – Test Method Concepts

Use Case

Figure 4 presents the use cases in which the System of Interest employs the services provided by the Test System to achieve the goals of its technical development processes.  The three prime use cases represent execution of the integration, verification and validation technical processes during the life cycle of the System of Interest.  Each of these use cases has unique properties in its underlying service composition, though all employ core test services.

Figure 4 – Test System Services

Test System Architecture

Overview

The architecture of a Test System[i] consists of numerous system elements.  Figure 5 and Figure 6 are excerpts from the ISO/IEC 15288 standard describing a system.  Figure 7 is an example of an architecture description of a notional Test System in a program context.  Our focus is the tool components/elements of a test system architecture.

Figure 5 – System Structure Metamodel

Figure 6 – System Structure Hierarchy

Figure 7 – Test System Architecture in a Notional Context

Tools are “devices for doing work”, according to the Encarta Dictionary.  They have an intended use and, when used accordingly, they increase the effectiveness and efficiency of an activity.  Frequently, they replace mandraulic[ii] task elements with automated task elements.  In the construction industry, the ubiquitous hammer has been relegated to the tool chest by the air- or battery-powered nailer.  As a process execution resource, a powered nailer is clearly far more expensive than the hammer it replaces, yet it transforms a mandraulic and highly variable task into a repeatable automated one that produces immense value at the scale of its use in home construction.  If only one nail needs to be driven, the clear choice is the hammer, but scale the task to 100, 1,000 or 10,000 nails and the return on investment is obvious.

Purpose

Let us view work as a process activity responsible for transforming inputs into outputs: parts into an assembly or, more abstractly, a problem into a solution.  Work should produce value.  The output of the process should have a greater value than the sum value of its inputs and expended resources.  When inputs are concrete and the problem deterministic, rarely does a recurring process relying on mandraulic tools achieve the return on investment that an automated tool provides.  Our focus is on how these tools, as elements of a test system’s infrastructure, add value to a program.

The execution of the ‘Integration’, ‘Verification’ and ‘Validation’ technical processes[iii] of a system’s life cycle falls largely to the test system, an enabling system employed by a test organization, as a resource, to accomplish test sub-processes.  Figure 8 illustrates ‘Process Concepts’ that are fundamental attributes of all ISO/IEC 15288:2008 defined process models.  Appreciate the role a tool has as a resource performing an activity of a process.  Either a person or a tool may be assigned a role; both are resources.

Figure 8 – ISO/IEC 15288:2008 Process Concepts

Figure 7 contains a number of tools key to the test system architecture; some (e.g., configuration, build, requirement, change, and defect management tools) provide services to the test system, while others are responsible for processes performed by the test system (e.g., test management, test execution, reporting).  At the system level of a project, the execution of the integration, verification and validation technical processes consumes significant resources.  An anecdotal accounting of this resource consumption suggests more resources are consumed planning, managing execution and providing test status than in the actual execution of the test procedures.  Clearly, our test system architecture requires a competent tool infrastructure capable of off-loading mandraulic tasks from the test organization’s staff.  What tasks are well suited to automated tools?  The test system supports integration of system/sub-system elements into capabilities.  A Test Architect will clearly benefit from a test generation tool supporting the development, planning, and management of a functional release plan from the system’s design artifacts.  Test engineers define integration procedures in which defining test cases for the innumerable message combinations and permutations is a formidable bookkeeping task; tools are particularly adept at solving this test generation problem, and combinatorial analytics tools are adept at coping with the test case expansion problem created by test generation tools.  Perhaps the key problem facing the test system at the system level of the integration, verification and acceptance test processes is the sheer number of information elements that must be managed; the test system’s tool infrastructure has this problem as its key objective.  This is a problem at which computer-based tools are particularly adept, and they deliver a considerable value benefit.
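
To make the expansion problem concrete, the sketch below enumerates an invented message parameter space; the exhaustive cartesian product is the bookkeeping burden, and a pairwise-covering subset (the kind of reduction combinatorial analytics tools automate) is typically far smaller.

from itertools import product

message_type = ["status", "command", "alarm"]     # invented parameters and values
priority     = ["low", "normal", "high"]
link         = ["primary", "backup"]
payload_size = ["min", "nominal", "max"]

all_cases = list(product(message_type, priority, link, payload_size))
print(len(all_cases))   # 3 * 3 * 2 * 3 = 54 exhaustive combinations
# A pairwise-covering subset of this space needs only a fraction of these cases,
# which is exactly the reduction a combinatorial test design tool automates.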

Test System Services

As illustrated in Figure 9, the test system owns a test service process element, which is treated as a behavioral property of the block.  Test service is a sub-process representing capabilities delivered by a collection of process activities.  Many of these activities are accomplished directly by test automation tools, or through the interaction of an engineer performing a role of the test organization with a test automation tool.

Figure 9 – Test Sub-process Activity Composition

Process Activity Definitions

Test Requirements Analysis

Test requirements analysis activities produce a specification for the Test System’s test architecture to implement.  The specification entails test design, test case, test procedure and test data requirements.  The specification addresses both structural and behavioral features of the test architecture.

Test Data Analysis

Test data analysis activities produce the specification of data employed by test cases.  Test data may take the form of a test input, a test output or a test oracle data assertion.

Test Planning

Test planning activities specialize the project processes defined by ISO/IEC 15288 for the test domain.  Planning activities address managing resources and assets that realize or support the test system.  Tools supporting planning provide insight to resource conflicts, costs, milestones and schedules.

Test Tracing

Test tracing activities produce coverage maps of the test architecture specification.
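
A minimal sketch of a coverage map, with invented identifiers: which elements of the test architecture specification are exercised by at least one test case, and which are not.

spec_elements = ["TD-01", "TD-02", "TD-03", "TD-04"]            # invented test design elements
traces = {"TC-101": ["TD-01"], "TC-102": ["TD-01", "TD-03"]}    # test case -> elements exercised

covered = {element for elements in traces.values() for element in elements}
coverage = len(covered) / len(spec_elements)
print(f"coverage: {coverage:.0%}, uncovered: {sorted(set(spec_elements) - covered)}")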

Test Generation

Test generation activities produce concrete test implementations realized from abstract test specifications.

Test Management

Test management activities are closely related to their peer project management process activities.   Core tasks are estimation, risk analysis and scheduling of test activities.  The latter task is a typical capability of a test management tool.  Test management tools frequently possess an extensive portfolio of capabilities (e.g., requirements verification status reporting, status dashboards, defect metrics, test complete/not complete, etc…).

Test Execution

Test execution activities are responsible for implementing the test management plan and the test specifications (i.e., the test script specification and the test procedure specification).

Test Reporting

Test reporting activities include test reports, test logs, test data analysis reports, etc.  Test reporting tools typically output to stakeholder dashboards.
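
A small sketch of the roll-up a reporting tool would push to a stakeholder dashboard, using an invented execution log:

from collections import Counter

execution_log = [
    {"test": "TC-101", "requirement": "REQ-001", "verdict": "pass"},
    {"test": "TC-102", "requirement": "REQ-002", "verdict": "fail"},
    {"test": "TC-103", "requirement": "REQ-002", "verdict": "pass"},
]

print(Counter(entry["verdict"] for entry in execution_log))       # Counter({'pass': 2, 'fail': 1})
failed_requirements = {e["requirement"] for e in execution_log if e["verdict"] == "fail"}
print("requirements with failing evidence:", sorted(failed_requirements))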

A Reference Architecture

The Test System’s tool infrastructure abstracts tools into tool categories.  These categories are: Test Management, Test Execution, Status Dashboards, Test Data Analysis, Test Reporting, Defect Reporting, Test Generation, Requirements Management, Change Management, Configuration Management and Build Management; this list is not intended to be exhaustive.  Not all of these tools are contained in the Test System architecture, though the Test System relies on services provided by these tools to automate key process activities.

Figure 10 – Generic System Level Tool Infrastructure


[i] A Test System is an Enabling System, as defined by ISO/IEC 15288:2008 Systems and software engineering – System life cycle processes.  A test system provides support to the system of interest during its full life cycle.  Specifically, it provides test services in the form of test sub-processes to the life cycle technical processes of Integration, Verification and Validation.  These technical processes apply across the system hierarchy (i.e., component, system element, system) as well as the levels of test (i.e., component, integration, system and acceptance).  See IEEE 829-2008 IEEE Standard for Software and System Test Documentation.


[ii] Mandraulic – an informal term used as an adjective meaning ‘labour intensive’, according to en.wiktionary.org.


[iii] ISO/IEC 15288:2008 Systems and software engineering – System life cycle processes, p. 12, Figure 4, Clauses 6.4.5, 6.4.6 and 6.4.8.