GROUP REPORT
Network Functions Virtualisation (NFV);
Testing;
Test Domain and Description Language Recommendations
Disclaimer
The present document has been produced and approved by the Network Functions Virtualisation (NFV) ETSI Industry
Specification Group (ISG) and represents the views of those members who participated in this ISG.
It does not necessarily represent the views of the entire ETSI membership.
ETSI GR NFV-TST 011 V1.1.1 (2019-03)
Reference
DGR/NFV-TST011
Keywords
language, NFV, testing
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE
Tel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16
Siret N° 348 623 562 00017 - NAF 742 C
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° 7803/88
Important notice
The present document can be downloaded from:
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format at www.etsi.org/deliver.
Users of the present document should be aware that the document may be subject to revision or change of status.
Information on the current status of this and other ETSI documents is available at
If you find errors in the present document, please send your comment to one of the following services:
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying
and microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.
© ETSI 2019.
All rights reserved.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its Members.
3GPP™ and LTE™ are trademarks of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of the oneM2M Partners.
GSM and the GSM logo are trademarks registered and owned by the GSM Association.
Contents
Intellectual Property Rights
Foreword
Modal verbs terminology
Executive summary
Introduction
1 Scope
2 References
2.1 Normative references
2.2 Informative references
3 Definition of terms, symbols and abbreviations
3.1 Terms
3.2 Symbols
3.3 Abbreviations
4 Test Domain
4.1 Overview
4.2 Test Case Resources
4.3 Test Execution Flow
4.4 High-Level Functions
4.5 Test Case Data
4.6 Execution Segments
4.7 Test Environment
4.8 Test Suites & Traffic Mixes
5 Reuse Guidelines
5.1 Overview
5.2 Environment Decoupling
5.3 Resource API Decoupling
5.4 High-level Function Decoupling
5.5 Test Data Decoupling
6 Recommended Models
6.1 Overview
6.2 Test Case Model
6.3 Test Environment Model
6.4 Test Scenarios
6.5 Full Domain Model
7 Test DSL
7.1 Overview
7.2 Test DSL Concepts
7.3 Abstract Syntax meta-model
7.4 Dynamically Loaded Constraints
7.5 Test Case Header
7.5.0 Introduction
7.5.1 Test Case Identifier
7.5.2 Test Case Description
7.5.3 Custom Test Case Attributes
7.5.4 High-Level Functions
7.5.5 Test Case Data
7.6 Resource Declaration
7.7 Execution Flow
7.7.0 Introduction
7.7.1 Getters for Resource-Specific Data
7.7.2 Symbol Lookup
7.7.3 High-level Function Invocation
Annex A: JADL Example
Annex B: Authors & contributors
History
Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The information
pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found
in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in
respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web
server (https://ipr.etsi.org/).
Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee
can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web
server) which are, or may be, or may become, essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
Foreword
This Group Report (GR) has been produced by ETSI Industry Specification Group (ISG) Network Functions
Virtualisation (NFV).
Modal verbs terminology
In the present document "should", "should not", "may", "need not", "will", "will not", "can" and "cannot" are to be
interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
Executive summary
The present document proposes a model of the NFV test domain and recommends requirements for a test Domain
Specific Language (DSL) to manipulate it. In the context of NFV, a network service is supplied by multiple vendors and
each vendor has its own test technology, interfaces into the system under test, and test languages (usually GPLs like
Java, Ruby, Python, etc.). In order to create a common test language, the test cases follow a standardized test case
model that the language can manipulate, and that can be implemented within individual test technologies. The model
includes shareable and reusable artefacts tied to the test domain: execution flow, data, abstract resources, environment,
etc.
Integration of multiple test technologies is only possible with a system that can accept contributions of test resources from
multiple parties. These contributions may include lab resources, test APIs, test data, high-level function libraries, test
execution platforms, etc. The test environment is then constructed dynamically from these contributions. To support this
dynamic nature of the test environment, the test case needs to be decoupled from specific resource
contributions and to express the test process in terms of resource abstractions. Mapping these abstractions to concrete
resources is the job of a Dynamic Resource Management (DRM) system. This is done by creating an environment
resource meta-model available to the test case developers at design time. The meta-model is then used for creation of
specific environment instance models at runtime. Each environment instance includes dynamic resource contributions
to which resource abstractions are mapped.
Introduction
With the advent of NFV, the industry is experiencing the following transformative challenges:
• Multiple contributions to a network service
• Open collaboration
• Shift away from dedicated resources (sharing of resources)
• Shift of integration responsibility from vendors to service providers (or their agents)
• Test cases/plans can be repeated multiple times, making reuse critical
To address these challenges and to encourage collaboration, the present document provides recommendations for NFV
test domain modelling and a Test DSL that does not force vendors/participants to change their test technology/language
and enables efficient utilization of resources. To enable reuse, the model also decouples test data and test environment
from the test case and uses dynamic allocation of test resources.
1 Scope
The present document proposes a model of the NFV test domain and recommends requirements for a test Domain
Specific Language (DSL) to manipulate it. The description includes an NFV test automation ecosystem that facilitates
interaction among NFV suppliers and operators, based on the DevOps principles.
The NFV test domain contains:
• System Under Test (SUT): Network Functions (NF), Network Functions Virtualisation Infrastructure (NFVI)
and network services.
• Test Resources: tools or instrumented NFs and NFVI elements with which test cases can interface to manipulate the SUT.
• Test Execution Flow: controlled and uncontrolled state transitions.
• Test case configuration data and parameters: test-resource-specific and non-test-resource-specific.
The present document explores the following attributes to enable efficient multi-supplier NFV interaction:
• Reusability of test plans, test cases and test resources.
• Abstraction of test data.
• Decoupling of test case from the test environment.
• Use of test resource abstractions in place of concrete resources.
2 References
2.1 Normative references
Normative references are not applicable in the present document.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are not necessary for the application of the present document but they assist the
user with regard to a particular subject area.
[i.1] ETSI GS NFV-TST 001 (V1.1.1): "Network Functions Virtualisation (NFV); Pre-deployment
Testing; Report on Validation of NFV Environments and Services".
[i.2] ETSI GS NFV-MAN 001 (V1.1.1): "Network Functions Virtualisation (NFV); Management and
Orchestration".
[i.3] ETSI GS NFV-SOL 003 (V2.3.1): "Network Functions Virtualisation (NFV) Release 2; Protocols
and Data Models; RESTful protocols specification for the Or-Vnfm Reference Point".
[i.4] ETSI GS NFV-IFA 006: "Network Functions Virtualisation (NFV) Release 2; Management and
Orchestration; Vi-Vnfm reference point - Interface and Information Model Specification".
[i.5] ETSI GS NFV 003: "Network Functions Virtualisation (NFV); Terminology for Main Concepts in
NFV".
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in ETSI GS NFV 003 [i.5] and the following apply:
execution engine: means by which a program can be executed on the target platform, using one of the two approaches of interpretation and compilation
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in ETSI GS NFV 003 [i.5] and the following apply:
DSL Domain Specific Language
GPL General Purpose Language
HLF High-Level Function
TEP Test Execution Platform
4 Test Domain
4.1 Overview
The NFV test domain is a set of artefacts and systems for testing NFV-based solutions. NFV introduced the concept of
"dynamically configurable and fully automated cloud environments" (https://www.etsi.org/technologies/nfv) for
network functions. The present document models the NFV test domain as a set of abstractions so that the same level of
flexibility is available in testing those network functions. Figure 1 is included to illustrate relationships among artefacts
discussed in clause 4; the NFV Test Domain model is described in more detail in clause 6. In addition, the present
document proposes requirements for a Domain Specific Language (DSL) to manipulate that test domain. Using these
models and recommended requirements, suppliers and service providers are able to leverage different test technologies,
dynamically allocate test resources, and reuse test plans, test cases, and Test Execution Platforms (TEPs). In the present
document, a test case always refers to a computer program that can be executed by a test automation system.
Figure 1: NFV Test Domain Artefact Relationships
The NFV test domain comprises:
• System Under Test (SUT) [i.1]: Network Functions (NF), Network Functions Virtualisation
Infrastructure (NFVI), and network services
• Test Case Resources: tools or instrumented NFs and NFVI elements with which test cases can interface to manipulate the SUT
• Execution Flow: controlled and uncontrolled state transitions
• High-Level Functions
• Test case configuration data and parameters: test-resource-specific and non-test-resource-specific
• Execution Segments
• Test Environment
• Test Suites & Traffic Mixes
4.2 Test Case Resources
In order to be tested, the SUT exposes a set of interfaces over which test interactions happen. These interfaces vary in
the degree of complexity and may include entire protocol stacks. The test drivers communicate with the SUT by
sending and receiving encoded messages over one or more of these interfaces. This means that the implementation of
the interface is also present on the test side.
It is therefore necessary for the executing test case to create or otherwise acquire one or more objects that implement the
SUT interfaces and use them to send and receive messages to and from the SUT. These objects in turn expose their own
interfaces to the test case to allow the test to manipulate the message flow more easily. These objects will be referred to
as abstract resources. In essence an abstract resource is an instance of an SUT interface that provides its own API to the
test.
Abstract resources are test case-facing abstractions that utilize units of lab equipment, software, and/or data to do real
work. These units will be referred to as concrete resources. In the context of a shared lab, concrete resources are shared
among multiple users and are allocated to specific test cases for the duration of the test. This guarantees that the test
case gets exclusive access to concrete resources required for its execution. Some examples of concrete resources include
instances of instrumented or stubbed out MANO components, VNFs, VNFCs, etc. Availability of concrete resources is
generally limited and concurrently executing test cases contend to gain access to them.
The relationship between abstract and concrete resources is typically many-to-many, and the mapping between them is
described in clause 6.3. The test DSL is expected to enforce proper resource declaration, preclude any access to
concrete resources outside the resource management system, and provide the user with an intuitive way to declare and
manipulate abstract resources. Once an abstract resource is mapped to an allocated concrete resource, they form a single
entity used by the test case to interact with the SUT. This entity will be referred to as a test case resource, as illustrated in
Figure 2.
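The following sketch, written in a general purpose language (Python) purely for illustration and not in the recommended DSL, shows one possible shape of this relationship; the names AbstractResource, DrmClient, bind(), send() and expect() are assumptions made for the example and are not defined by the present document.

```python
# Illustrative sketch only: the names below (AbstractResource, DrmClient, bind,
# send, expect) are hypothetical and are not defined by the present document.

class AbstractResource:
    """Test case-facing abstraction of an SUT interface (e.g. an NFVO or VIM stub)."""

    def __init__(self, resource_type):
        self.resource_type = resource_type   # e.g. "NFVO", "VIM", "VNF"
        self.concrete = None                 # concrete resource bound at allocation time

    def bind(self, concrete_resource):
        """Map this abstraction onto an allocated concrete resource."""
        self.concrete = concrete_resource

    def send(self, message, **parameters):
        # Delegate message encoding and transport to the concrete resource.
        return self.concrete.send(message, **parameters)

    def expect(self, message_type, timeout_s=30):
        # Wait for a message of the given type to arrive from the SUT.
        return self.concrete.receive(message_type, timeout_s)


class DrmClient:
    """Hypothetical front end to a Dynamic Resource Management (DRM) system."""

    def allocate(self, abstract_resources):
        for resource in abstract_resources:
            concrete = self.reserve(resource.resource_type)  # exclusive lease from the shared lab
            resource.bind(concrete)                          # abstract + concrete = test case resource
        return abstract_resources

    def reserve(self, resource_type):
        raise NotImplementedError("lab-specific allocation logic")
```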
4.3 Test Execution Flow
As the test case executes, every resource goes through a sequence of states, reflecting the SUT functionality being
tested. These sequences range in complexity from trivial to very complex. The test case may be interested in some but
not all of these states. For example, if a resource is running a protocol stack, unless protocol conformance is being
tested, most low-level messaging is of no interest to the test case. It may only be interested in the successful or
unsuccessful outcome of such messaging. The degree to which the test case has visibility of the resource state can vary
for the same type of resource depending on the test scenario. Consequently, a mechanism that provides different
levels of control over the resource state is necessary. The test case should have the ability to "take over" the resource
state transition when necessary and let the resource run its own state machine at other times. The resource states
controlled by the test case are referred to as controlled states.
The controlled states of all test case resources at any given time form the state of the test case. The test case initiates state
transitions by sending or receiving and verifying messages to and from the SUT on any of the test case resources. For
example, if the SUT is a VNFM implementation and an Instantiate VNF request is sent from the NFVO resource, the
test case can verify that a Grant VNF Lifecycle Operation request is received by the NFVO resource and, after acknowledging
it with a Grant VNF Lifecycle Operation response, verify that the Allocate Resource request is received by the VIM
resource. Running a single state machine per test case for controlled states while letting individual resources run their
own state machines for uncontrolled states provides an intuitive and flexible framework for SUT interface interworking.
Figure 2: Test Case Resources and State Machine
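As a further illustration, the controlled-state flow of the VNFM example above could be expressed as follows against the illustrative resource API sketched in clause 4.2; the function and message names are assumptions and do not represent the recommended DSL syntax.

```python
# Hypothetical sketch of the controlled-state flow described above, written
# against the illustrative resource API from clause 4.2; message names mirror
# the prose and are not a normative binding to any protocol specification.

def test_vnf_instantiation(nfvo, vim):
    # Controlled state 1: the NFVO resource triggers instantiation on the VNFM (SUT).
    nfvo.send("InstantiateVnfRequest")

    # Controlled state 2: the SUT asks the NFVO resource for a grant.
    grant = nfvo.expect("GrantVnfLifecycleOperationRequest")
    assert grant is not None, "VNFM never requested a grant"
    nfvo.send("GrantVnfLifecycleOperationResponse")

    # Controlled state 3: the SUT requests resources from the VIM resource.
    alloc = vim.expect("AllocateResourceRequest")
    assert alloc is not None, "VNFM never requested resource allocation"
    # Reaching this point is the final (passing) state of the test case.
```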
Any controlled state can be passing or failing. Criteria for failing the test are evaluated by the test case at every
controlled state. If the failure criteria are satisfied, the test fails immediately; otherwise it continues until it reaches a final
state, at which point it passes. If a test case does not reach a final state, it is killed by the TEP once sufficient time has
passed to conclude that the test behaviour is abnormal. Regardless of the type of test case termination (passed, killed,
crashed, hung and killed by the TEP), the allocated concrete resources are returned to their initial state and released.
Since no assumption can be made about normal vs. abnormal test case termination, the responsibility of resource
clean-up and release cannot be assigned to the test case and should lie with individual resource implementations.
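One possible division of this responsibility between the TEP and the resource implementations is sketched below; the method names are illustrative assumptions.

```python
# Hypothetical TEP-side wrapper: clean-up and release are guaranteed on every
# termination path (passed, failed, crashed, or killed on timeout) and are
# performed by the resource implementations, not by the test case body.

def run_test_case(test_case, allocated_resources):
    try:
        test_case.execute()                      # may pass, fail, crash or be killed
    finally:
        for resource in allocated_resources:
            resource.reset_to_initial_state()    # resource-specific clean-up
            resource.release()                   # return the concrete resource to the pool
```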
4.4 High-Level Functions
A test scenario may also require a series of controlled state transitions involving multiple resources whose details are
outside the main focus of the test. Continuing with the previous example, if the main focus of the test is verification of
the VNFM configuring the newly created VNF with deployment-specific parameters, the details of the VNF instantiation
are not important to the test scenario and can be grouped into a single action. These actions will be referred to as
high-level functions. They capture frequently repeated state transition sequences and encapsulate them into invokable,
parameterizable units of functionality. High-level functions can also call other high-level functions. This provides a
flexible mechanism for control of granularity for different parts of the execution flow. High-level functions are reusable
across multiple test cases and activities.
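Continuing the illustration, the instantiation sequence of the previous example could be encapsulated as follows; the function, parameter and message names are assumptions and not part of any recommended library.

```python
# Hypothetical high-level function (HLF): the instantiation sequence is
# encapsulated into one invokable, parameterizable unit of functionality.

def instantiate_vnf(nfvo, vim, vnfd_id, flavour_id="default"):
    """Drive the SUT (a VNFM) through VNF instantiation and return the new VNF identifier."""
    nfvo.send("CreateVnfIdentifierRequest", vnfd_id=vnfd_id)
    response = nfvo.expect("CreateVnfIdentifierResponse")
    vnf_id = response.vnf_instance_id            # illustrative attribute access

    nfvo.send("InstantiateVnfRequest", vnf_id=vnf_id, flavour_id=flavour_id)
    nfvo.expect("GrantVnfLifecycleOperationRequest")
    nfvo.send("GrantVnfLifecycleOperationResponse")
    vim.expect("AllocateResourceRequest")
    return vnf_id


# HLFs can call other HLFs, e.g. to bring up every VNF of a network service
# before the test case exercises the behaviour it is actually focused on.
def instantiate_network_service(nfvo, vim, vnfd_ids):
    return [instantiate_vnf(nfvo, vim, vnfd_id) for vnfd_id in vnfd_ids]
```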
4.5 Test Case Data
Many test cases utilize data that are not explicitly defined in the body of the test case. Separation of the execution flow
and the data that can parameterize the execution flow allows greater flexibility of test case design and reuse of the same
data sets by multiple tests.
Some data are resource-specific and come from test case resources. Since the test case only manipulates abstract
resources and has no prior knowledge of what concrete resource may be allocated to it, any attributes of the concrete
resources used in the test case are acquired at run time. For example, in order to verify that the VNFM under test sends
the correct VNF identifier in the alarm notification to the NFVO, it is necessary for the test case to know the VNF
identifier of the VNF generating the alarm. Until a specific VNF test resource instance is allocated to the test case, its
identifier is not known. The test case should therefore be able to obtain the identifier from the VNF test resource at
runtime to verify that the correct one is sent to the NFVO test resource.
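A runtime getter for such resource-specific data might be used as sketched below; get(), expect() and the attribute names are illustrative assumptions.

```python
# Illustrative runtime getter: the VNF identifier is only known after a
# concrete resource has been allocated and mapped to the abstract VNF resource.

def verify_alarm_forwarding(vnf, nfvo):
    vnf_id = vnf.get("vnf_instance_id")          # resource-specific data, resolved at run time
    alarm = nfvo.expect("AlarmNotification")     # alarm forwarded by the VNFM under test
    assert alarm.vnf_instance_id == vnf_id, "alarm carries the wrong VNF identifier"
```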
Some data are not resource-specific and come from elsewhere. For example, default values for a set of protocol
messages can be specified outside the test case. This allows the test case writer to only provide non-default values for
message Information Elements (IEs). This also allows reuse of this data across a potentially very large number of test
cases using the same protocol.
Test case data should be easily customizable. In the protocol defaults example there may be global defaults, defaults
that a particular user team is using for their purposes, or even defaults for a particular test activity that further specialize
test case writers' defaults. This can be accomplished by creating progressively specialized test case data hierarchies with
every new child defining a nested scope within its parent's scope. Data lookup then follows the regular scoping rules.
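Such a hierarchy can be modelled as nested scopes in which a lookup falls back to the parent scope when a key is absent, as in the following illustrative sketch (the class and the default values are assumptions).

```python
# Hypothetical nested-scope lookup for test case data: child scopes override
# their parent, and lookup falls back outward, mirroring regular scoping rules.

class DataScope:
    def __init__(self, values, parent=None):
        self.values = values
        self.parent = parent

    def get(self, key):
        if key in self.values:
            return self.values[key]
        if self.parent is not None:
            return self.parent.get(key)
        raise KeyError(key)


global_defaults = DataScope({"sdp_media": "audio", "session_expires": 1800})
team_defaults   = DataScope({"session_expires": 600}, parent=global_defaults)
activity_data   = DataScope({"sdp_media": "video"},   parent=team_defaults)

assert activity_data.get("sdp_media") == "video"        # overridden for the activity
assert activity_data.get("session_expires") == 600      # inherited from the team scope
```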
4.6 Execution Segments
In some cases the test execution flow can be separated into individually executable segments. For example, many
control-plane test scenarios are designed to have segments of
...