NETCONF User Guide¶
Overview¶
NETCONF is an XML-based protocol used for configuring and monitoring devices in the network. The base NETCONF protocol is described in RFC-6241.
NETCONF in OpenDaylight:
OpenDaylight supports the NETCONF protocol as a northbound server as well as a southbound plugin. It also includes a set of test tools for simulating NETCONF devices and clients.
Southbound (netconf-connector)¶
The NETCONF southbound plugin is capable of connecting to remote NETCONF devices and exposing their configuration/operational datastores, RPCs and notifications as MD-SAL mount points. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices.
In terms of RFCs, the connector supports:
Netconf-connector is fully model-driven (utilizing the YANG modeling language) so in addition to the above RFCs, it supports any data/RPC/notifications described by a YANG model that is implemented by the device.
Tip
NETCONF southbound can be activated by installing
odl-netconf-connector-all
Karaf feature.
Netconf-connector configuration¶
There are 2 ways for configuring netconf-connector: NETCONF or RESTCONF. This guide focuses on using RESTCONF.
Important
There are 2 different endpoints related to RESTCONF protocols:

http://localhost:8181/restconf is related to draft-bierman-netconf-restconf-02 and can be activated by installing the odl-restconf-nb-bierman02 Karaf feature. This user guide uses this approach.

http://localhost:8181/rests is related to RFC-8040 and can be activated by installing the odl-restconf-nb-rfc8040 Karaf feature. Its data endpoint is /rests/data/.

Examples in the Spawning new NETCONF connectors section include both bierman02 and rfc8040 formats.
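The two endpoints encode list keys differently. As a quick illustration (the host, port, and node name here are assumptions for the default local setup), the same topology node is addressed like this:

```python
# Sketch: the two RESTCONF URL styles for the same NETCONF topology node.
BASE = "http://localhost:8181"  # assumed default OpenDaylight RESTCONF port

def bierman02_node_url(node_id: str) -> str:
    # draft-bierman-netconf-restconf-02: list keys are plain path segments
    return (f"{BASE}/restconf/config/network-topology:network-topology"
            f"/topology/topology-netconf/node/{node_id}")

def rfc8040_node_url(node_id: str) -> str:
    # RFC 8040: list keys are encoded as list=key segments
    return (f"{BASE}/rests/data/network-topology:network-topology"
            f"/topology=topology-netconf/node={node_id}")

print(bierman02_node_url("new-netconf-device"))
print(rfc8040_node_url("new-netconf-device"))
```

The only structural difference is the base path (/restconf/config vs /rests/data) and the key encoding (/node/X vs /node=X).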
Default configuration¶
The default configuration contains all the necessary dependencies (file: 01-netconf.xml) and a single instance of netconf-connector (file: 99-netconf-connector.xml) called controller-config which connects itself to the NETCONF northbound in OpenDaylight in a loopback fashion. The connector mounts the NETCONF server for config-subsystem in order to enable RESTCONF protocol for config-subsystem. This RESTCONF still goes via NETCONF, but using RESTCONF is much more user friendly than using NETCONF.
Spawning additional netconf-connectors while the controller is running¶
Preconditions:
OpenDaylight is running
In Karaf, you must have the netconf-connector installed (at the Karaf prompt, type: feature:install odl-netconf-connector-all); the loopback NETCONF mountpoint will be automatically configured and activated

Wait until the log displays the following entry: RemoteDevice{controller-config}: NETCONF connector initialized successfully
To configure a new netconf-connector you need to send the following request to RESTCONF:
Headers:
Accept application/xml
Content-Type application/xml
<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
<name>new-netconf-device</name>
<address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">127.0.0.1</address>
<port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
<username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</username>
<password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</password>
<tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
<event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
<name>global-event-executor</name>
</event-executor>
<binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
<name>binding-osgi-broker</name>
</binding-registry>
<dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
<name>dom-broker</name>
</dom-registry>
<client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
<name>global-netconf-dispatcher</name>
</client-dispatcher>
<processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
<name>global-netconf-processing-executor</name>
</processing-executor>
<keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
<name>global-netconf-ssh-scheduled-executor</name>
</keepalive-executor>
</module>
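Assuming the bierman02 endpoint and the default controller-config loopback mount, the complete request might look like this (the exact mount path can differ by release):

```http
POST http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules HTTP/1.1
```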
This spawns a new netconf-connector which tries to connect to (or mount) a NETCONF device at 127.0.0.1 and port 830. You can check the config-subsystem's configuration datastore; the new netconf-connector will now be present there. Just invoke:
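A plausible read request, assuming the bierman02 endpoint and the controller-config loopback mount (the path may vary by release):

```http
GET http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device HTTP/1.1
```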
The response will contain the module for new-netconf-device.
Right after the new netconf-connector is created, it writes some useful metadata into the datastore of MD-SAL under the network-topology subtree. This metadata can be found at:
GET http://localhost:8181/restconf/operational/network-topology:network-topology/
Information about connection status, device capabilities, etc. can be found there.
Connecting to a device not supporting NETCONF monitoring¶
The netconf-connector in OpenDaylight relies on ietf-netconf-monitoring support when connecting to a remote NETCONF device. The ietf-netconf-monitoring support allows netconf-connector to list and download all YANG schemas that are used by the device. The NETCONF connector can only communicate with a device if it knows the set of used schemas (or at least a subset). However, some devices use YANG models internally but do not support NETCONF monitoring. Netconf-connector can also communicate with these devices, but you have to side-load the necessary YANG models into OpenDaylight's YANG model cache for netconf-connector. In general there are 2 situations you might encounter:
1. NETCONF device does not support ietf-netconf-monitoring but it does list all its YANG models as capabilities in HELLO message
This could be a device that internally uses only ietf-inet-types YANG model with revision 2010-09-24. In the HELLO message that is sent from this device there is this capability reported:
urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&revision=2010-09-24
For such devices you only need to put the schema into folder cache/schema inside your Karaf distribution.
Important
The file with YANG schema for ietf-inet-types has to be called ietf-inet-types@2010-09-24.yang. It is the required naming format of the cache.
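A tiny sketch of the naming rule (module name and revision joined by "@", with a ".yang" suffix):

```python
# Sketch: deriving the file name required by netconf-connector's schema cache.
def cache_file_name(module: str, revision: str) -> str:
    # Required format: <module-name>@<revision>.yang
    return f"{module}@{revision}.yang"

print(cache_file_name("ietf-inet-types", "2010-09-24"))
# -> ietf-inet-types@2010-09-24.yang
```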
2. NETCONF device does not support ietf-netconf-monitoring and it does NOT list its YANG models as capabilities in HELLO message
Compared to a device that lists its YANG models in the HELLO message, in this case there would be no capability with ietf-inet-types in the HELLO message. This type of device basically provides no information about the YANG schemas it uses, so it's up to the user of OpenDaylight to properly configure netconf-connector for this device.
Netconf-connector has an optional configuration attribute called yang-module-capabilities, and this attribute can contain a list of “YANG module based” capabilities. By setting this attribute, it is possible to override the “yang-module-based” capabilities reported in the HELLO message of the device. To do this, we need to modify the configuration of netconf-connector by adding this XML (it needs to be added next to the address, port, username, etc. configuration elements):
<yang-module-capabilities xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<capability xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2010-09-24
</capability>
</yang-module-capabilities>
Remember to also put the YANG schemas into the cache folder.
Note
To add multiple capabilities, replicate the capability XML element inside the yang-module-capabilities element. The capability element is modeled as a leaf-list. With this configuration, the remote device would report usage of ietf-inet-types in the eyes of netconf-connector.
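For instance, reporting two modules simply repeats the capability element (the second module/revision here is illustrative):

```xml
<yang-module-capabilities xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
  <capability>urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2010-09-24</capability>
  <capability>urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&amp;revision=2013-07-15</capability>
</yang-module-capabilities>
```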
Reconfiguring Netconf-Connector While the Controller is Running¶
It is possible to change the configuration of a running module while the whole controller is running. This example continues where the previous one left off and changes the configuration of the brand new netconf-connector after it was spawned. Using one RESTCONF request, we will change both the username and the password for the netconf-connector.
To update an existing netconf-connector you need to send the following request to RESTCONF:
<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
<name>new-netconf-device</name>
<username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">bob</username>
<password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">passwd</password>
<tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
<event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
<name>global-event-executor</name>
</event-executor>
<binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
<name>binding-osgi-broker</name>
</binding-registry>
<dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
<name>dom-broker</name>
</dom-registry>
<client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
<name>global-netconf-dispatcher</name>
</client-dispatcher>
<processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
<name>global-netconf-processing-executor</name>
</processing-executor>
<keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
<name>global-netconf-ssh-scheduled-executor</name>
</keepalive-executor>
</module>
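Assuming the bierman02 endpoint and the controller-config loopback mount (path assumptions as in the spawning example), the full update request might look like:

```http
PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device HTTP/1.1
```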
Since a PUT is a replace operation, the whole configuration must be specified along with the new values for username and password. This should result in a 2xx response and the instance of netconf-connector called new-netconf-device will be reconfigured to use username bob and password passwd. New configuration can be verified by executing:
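The verification might be done with a request like this, assuming the controller-config loopback mount path (the exact path may vary by release):

```http
GET http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device HTTP/1.1
```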
With the new configuration, the old connection will be closed and a new one established.
Destroying Netconf-Connector While the Controller is Running¶
Using RESTCONF one can also destroy an instance of a module. In the case of netconf-connector, the module will be destroyed, the NETCONF connection dropped, and all resources cleaned up. To do this, simply issue a request to the following URL:
The last element of the URL is the name of the instance and its predecessor is the type of that module (in our case the type is sal-netconf-connector and the name is new-netconf-device). The type and name are actually the keys of the module list.
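Assuming the bierman02 endpoint and the controller-config loopback mount, the delete request might look like (no body needed):

```http
DELETE http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device HTTP/1.1
```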
Netconf-connector configuration with MD-SAL¶
It is also possible to configure new NETCONF connectors directly through MD-SAL with the usage of the network-topology model. You can configure new NETCONF connectors either through the NETCONF server for MD-SAL (port 2830) or through RESTCONF. This guide focuses on RESTCONF.
Tip
To enable NETCONF connector configuration through MD-SAL, install either the odl-netconf-topology or the odl-netconf-clustered-topology feature. We will explain the difference between these features later.
Preconditions¶
OpenDaylight is running
In Karaf, you must have the odl-netconf-topology or odl-netconf-clustered-topology feature installed.

The odl-restconf feature must be installed.

Wait until the log displays the following entry:

Successfully pushed configuration snapshot 02-netconf-topology.xml(odl-netconf-topology,odl-netconf-topology)

or until

GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/

returns a non-empty response, for example:
<topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
<topology-id>topology-netconf</topology-id>
</topology>
Spawning new NETCONF connectors¶
To create a new NETCONF connector you need to send the following PUT request to RESTCONF:
bierman02

PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

rfc8040

PUT http://localhost:8181/rests/data/network-topology:network-topology/topology=topology-netconf/node=new-netconf-device
You could use the same body to create the new NETCONF connector with a POST without specifying the node in the URL:
Headers:
Accept: application/xml
Content-Type: application/xml
Payload:
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
<node-id>new-netconf-device</node-id>
<host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
<port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
<username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
<password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
<tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
<!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values-->
<reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
<connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
<max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
<between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
<sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
<!-- keepalive-delay set to 0 turns off keepalives-->
<keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
</node>
Note that the device name in <node-id> element must match the last element of the restconf URL.
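A sketch of such a POST for the bierman02 endpoint (the rfc8040 endpoint would target /rests/data/... analogously); the body is the same node payload as above:

```http
POST http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf HTTP/1.1
```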
Reconfiguring an existing connector¶
The steps to reconfigure an existing connector are exactly the same as when spawning a new connector. The old connection will be disconnected and a new connector with the new configuration will be created. This needs to be done with a PUT request because the node already exists. A POST request will fail for that reason.
Additionally, a PATCH request can be used to modify an existing configuration. Currently, only yang-patch (RFC-8072) is supported. The URL would be the same as the above PUT examples. Using JSON for the body, the headers needed for the request would be:
Headers:
Accept: application/yang.patch-status+json
Content-Type: application/yang.patch+json
Example JSON payload to modify the password entry:
{
"ietf-restconf:yang-patch" : {
"patch-id" : "0",
"edit" : [
{
"edit-id" : "edit1",
"operation" : "merge",
"target" : "",
"value" : {
"node": [
{
"node-id": "new-netconf-device",
"netconf-node-topology:password" : "newpassword"
}
]
}
}
]
}
}
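Putting it together for the bierman02 endpoint, with the URL pointing at the node as in the PUT examples (the exact path may vary by release):

```http
PATCH http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device HTTP/1.1
Accept: application/yang.patch-status+json
Content-Type: application/yang.patch+json
```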
Deleting an existing connector¶
To remove an already configured NETCONF connector you need to send a DELETE request to the same PUT request URL that was used to create the device:
bierman02

DELETE http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

rfc8040

DELETE http://localhost:8181/rests/data/network-topology:network-topology/topology=topology-netconf/node=new-netconf-device
Note
No body is needed to delete the node/device.
Connecting to a device supporting only NETCONF 1.0¶
OpenDaylight is a schema-based distribution and heavily depends on YANG models. However, some legacy NETCONF devices are not schema-based and implement just RFC 4741. This type of device does not utilize YANG models internally, and OpenDaylight does not know how to communicate with such devices, how to validate data, or what the semantics of data are.
The NETCONF connector can also communicate with these devices, but the trade-off is reduced functionality of the NETCONF mountpoint. Using RESTCONF with such devices is not supported. Communicating with schemaless devices from application code is also slightly different.
To connect to a schemaless device, there is an optional configuration option in the netconf-node-topology model called schemaless. You have to set this option to true.
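For example, adding the following element to the node payload shown in Spawning new NETCONF connectors marks the device as schemaless:

```xml
<schemaless xmlns="urn:opendaylight:netconf-node-topology">true</schemaless>
```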
Clustered NETCONF connector¶
To spawn NETCONF connectors that are cluster-aware you need to install the odl-netconf-clustered-topology Karaf feature.
Warning
The odl-netconf-topology and odl-netconf-clustered-topology features are considered INCOMPATIBLE. They both manage the same space in the datastore and would issue conflicting writes if installed together.
Configuration of clustered NETCONF connectors works the same as the configuration through the topology model in the previous section.
When a new clustered connector is configured the configuration gets distributed among the member nodes and a NETCONF connector is spawned on each node. From these nodes a master is chosen which handles the schema download from the device and all the communication with the device. You will be able to read/write to/from the device from all slave nodes due to the proxy data brokers implemented.
You can use the odl-netconf-clustered-topology feature in a single-node scenario as well, but the code that uses akka will still be used, so for a scenario where only a single node is used, odl-netconf-topology might be preferred.
Netconf-connector utilization¶
Once the connector is up and running, users can utilize the new mount point instance by using RESTCONF or from their application code. This chapter deals with using RESTCONF; more information for app developers can be found in the developers guide or in the official tutorial application ncmount in the coretutorials project.
Reading data from the device¶
Just invoke (no body needed):
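A plausible request for a connector named new-netconf-device (bierman02 endpoint; the path may vary by release):

```http
GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ HTTP/1.1
```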
This will return the entire content of the operational datastore from the device. To view just the configuration datastore, change operational in this URL to config.
Writing configuration data to the device¶
In general, you cannot simply write any data you want to the device. The data have to conform to the YANG models implemented by the device. In this example we add a new interface-configuration to the mounted device (assuming the device supports the Cisco-IOS-XR-ifmgr-cfg YANG model). This request comes from the ncmount tutorial app.
<interface-configuration xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg">
<active>act</active>
<interface-name>mpls</interface-name>
<description>Interface description</description>
<bandwidth>32</bandwidth>
<link-status></link-status>
</interface-configuration>
The request should return a 200 response code with no body.
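Assuming the bierman02 endpoint and the keys used in the payload above (active act, interface-name mpls), the write might be issued as (exact list paths depend on the device's model):

```http
PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/Cisco-IOS-XR-ifmgr-cfg:interface-configurations/interface-configuration/act/mpls HTTP/1.1
```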
Tip
This call is transformed into a couple of NETCONF RPCs. The resulting NETCONF RPCs that go directly to the device can be found in the OpenDaylight logs after invoking log:set TRACE org.opendaylight.controller.sal.connect.netconf in the Karaf shell. Seeing the NETCONF RPCs might help with debugging.
This request is very similar to the one where we spawned a new netconf device. That’s because we used the loopback netconf-connector to write configuration data into config-subsystem datastore and config-subsystem picked it up from there.
Invoking custom RPC¶
Devices can implement any additional RPC, and as long as the device provides YANG models for it, it can be invoked from OpenDaylight. The following example shows how to invoke the get-schema RPC (get-schema is quite common among NETCONF devices). Invoke:
<input xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
<identifier>ietf-yang-types</identifier>
<version>2013-07-15</version>
</input>
This call should fetch the source for ietf-yang-types YANG model from the mounted device.
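Assuming the bierman02 endpoint and a connector named new-netconf-device, the full request might look like this, with the input payload above as the body (the path may vary by release):

```http
POST http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ietf-netconf-monitoring:get-schema HTTP/1.1
```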
Netconf-connector + Netopeer¶
Netopeer (an open-source NETCONF server) can be used for testing/exploring NETCONF southbound in OpenDaylight.
Netopeer installation¶
A Docker container with netopeer will be used in this guide. To install Docker and start the netopeer image, perform the following steps:
Install docker http://docs.docker.com/linux/step_one/
Start the netopeer image:
docker run --rm -t -p 1831:830 dockeruser/netopeer
Verify netopeer is running by invoking (netopeer should send its HELLO message right away):
ssh root@localhost -p 1831 -s netconf (password root)
Mounting netopeer NETCONF server¶
Preconditions:

OpenDaylight is started with the odl-restconf-all and odl-netconf-connector-all features.

Netopeer is up and running in docker.
Now just follow the section: Spawning new NETCONF connectors. In the payload change the:
name, e.g., to netopeer
username/password to your system credentials
ip to localhost
port to 1831.
After netopeer is mounted successfully, its configuration can be read using RESTCONF by invoking:
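A plausible request, assuming the connector was named netopeer as suggested above (bierman02 endpoint):

```http
GET http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/netopeer/yang-ext:mount/ HTTP/1.1
```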
Northbound (NETCONF servers)¶
OpenDaylight provides 2 types of NETCONF servers:
NETCONF server for config-subsystem (listening by default on port 1830)
Serves as a default interface for config-subsystem and allows users to spawn/reconfigure/destroy modules (or applications) in OpenDaylight
NETCONF server for MD-SAL (listening by default on port 2830)
Serves as an alternative interface for MD-SAL (besides RESTCONF) and allows users to read/write data from MD-SAL's datastore and to invoke its RPCs (NETCONF notifications are not available in the Boron release of OpenDaylight)
Note
The reason for having 2 NETCONF servers is that config-subsystem and MD-SAL are 2 different components of OpenDaylight and require different approaches for NETCONF message handling and data translation. These 2 components will probably merge in the future.
Note
Since Nitrogen release, there is performance regression in NETCONF servers accepting SSH connections. While opening a connection takes less than 10 seconds on Carbon, on Nitrogen time can increase up to 60 seconds. Please see https://bugs.opendaylight.org/show_bug.cgi?id=9020
NETCONF server for config-subsystem¶
This NETCONF server is the primary interface for config-subsystem. It allows the users to interact with config-subsystem in a standardized NETCONF manner.
In terms of RFCs, these are supported:
-
(partially, only the schema-change notification is available in Boron release)
For regular users it is recommended to use RESTCONF + the controller-config loopback mountpoint instead of using pure NETCONF. How to do that is specific to each component/module/application in OpenDaylight and can be found in their dedicated user guides.
NETCONF server for MD-SAL¶
This NETCONF server is just a generic interface to MD-SAL in OpenDaylight. It uses the standard MD-SAL APIs and serves as an alternative to RESTCONF. It is fully model-driven and supports any data and RPCs that are supported by MD-SAL.
In terms of RFCs, these are supported:
Notifications over NETCONF are not supported in the Boron release.
Tip
Install the NETCONF northbound for MD-SAL by installing the odl-netconf-mdsal feature in karaf. The default binding port is 2830.
Configuration¶
The default configuration can be found in file: 08-netconf-mdsal.xml. The file contains the configuration for all necessary dependencies and a single SSH endpoint starting on port 2830. There is also a (by default disabled) TCP endpoint. It is possible to start multiple endpoints at the same time either in the initial configuration file or while OpenDaylight is running.
The credentials for the SSH endpoint can also be configured here; the defaults are admin/admin. Credentials in the SSH endpoint are not yet managed by the centralized AAA component and have to be configured separately.
Verifying MD-SAL’s NETCONF server¶
After the NETCONF server is available it can be examined by a command line ssh tool:
ssh admin@localhost -p 2830 -s netconf
The server will respond by sending its HELLO message and can be used as a regular NETCONF server from then on.
Mounting the MD-SAL’s NETCONF server¶
To perform this operation, just spawn a new netconf-connector as described in Spawning new NETCONF connectors. Just change the ip to “127.0.0.1”, the port to “2830” and the name to “controller-mdsal”.
Now the MD-SAL’s datastore can be read over RESTCONF via NETCONF by invoking:
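A plausible request, assuming the connector was named controller-mdsal as suggested above (bierman02 endpoint):

```http
GET http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-mdsal/yang-ext:mount/ HTTP/1.1
```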
Note
This might not seem very useful, since MD-SAL can be accessed directly from RESTCONF or from application code, but the same method can be used to mount and control other OpenDaylight instances by the “master OpenDaylight”.
NETCONF stress/performance measuring tool¶
This is basically a NETCONF client that puts NETCONF servers under heavy load of NETCONF RPCs and measures the time until a configurable amount of them is processed.
RESTCONF stress-performance measuring tool¶
Very similar to the NETCONF stress tool, with the difference of using the RESTCONF protocol instead of NETCONF.
YANGLIB remote repository¶
There are scenarios in NETCONF deployment that require a centralized YANG models repository. The YANGLIB plugin provides such a remote repository.

To start this plugin, you have to install the odl-yanglib feature. Then you have to configure YANGLIB either through RESTCONF or NETCONF. We will show how to configure YANGLIB through RESTCONF.
YANGLIB configuration through RESTCONF¶
You have to specify which local YANG modules directory you want to provide, and then the address and port where you want to serve the YANG sources. For example, we want to serve YANG sources from the folder /sources on the localhost:5000 address. The configuration for this scenario is as follows:
PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/yanglib:yanglib/example
Headers:
Accept: application/xml
Content-Type: application/xml
Payload:
<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
<name>example</name>
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">prefix:yanglib</type>
<broker xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">
<type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
<name>binding-osgi-broker</name>
</broker>
<cache-folder xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">/sources</cache-folder>
<binding-addr xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">localhost</binding-addr>
<binding-port xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">5000</binding-port>
</module>
This should result in a 2xx response and a new YANGLIB instance should be created. This YANGLIB takes all YANG sources from the /sources folder and generates a URL for each of them in the form:
http://localhost:5000/schemas/{modelName}/{revision}
The YANG source for the particular module will be hosted at this URL. The YANGLIB instance also writes this URL, along with the source identifier, to the ietf-netconf-yang-library/modules-state/module list.
Netconf-connector with YANG library as fallback¶
There is an optional configuration in netconf-connector called yang-library. You can specify a YANG library to be plugged in as an additional source provider into the mount's schema repository. Since the YANGLIB plugin advertises the provided modules through the yang-library model, we can use it in a mount point's configuration as the YANG library. To do this, we need to modify the configuration of netconf-connector by adding this XML:
<yang-library xmlns="urn:opendaylight:netconf-node-topology">
<yang-library-url xmlns="urn:opendaylight:netconf-node-topology">http://localhost:8181/restconf/operational/ietf-yang-library:modules-state</yang-library-url>
<username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
<password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
</yang-library>
This will register the YANGLIB-provided sources as fallback schemas for the particular mount point.
NETCONF Call Home¶
Important
The call home feature is experimental and will change in a future release. In particular, the YANG models will change to those specified in RFC 8071.
Call Home Installation¶
The ODL Call-Home server is installed in Karaf by installing the karaf feature odl-netconf-callhome-ssh. The RESTCONF feature is recommended for configuring Call Home & testing its functionality.
feature:install odl-netconf-callhome-ssh
Note
In order to test Call Home functionality we recommend Netopeer. See Netopeer Call Home to learn how to enable call-home on Netopeer.
Northbound Call-Home API¶
The northbound Call Home API is used for administering the Call-Home Server. The following describes this configuration.
Global Configuration¶
Configuring global credentials¶
The ODL Call-Home server allows users to configure global credentials, which will be used for devices that do not have device-specific credentials configured. This is done by creating /odl-netconf-callhome-server:netconf-callhome-server/global/credentials with username and passwords specified.
Configuring global username & passwords to try
PUT
/restconf/config/odl-netconf-callhome-server:netconf-callhome-server/global/credentials HTTP/1.1
Content-Type: application/json
Accept: application/json
{
"credentials":
{
"username": "example",
"passwords": [ "first-password-to-try", "second-password-to-try" ]
}
}
Configuring to accept any ssh server key using global credentials¶
By default the Netconf Call-Home Server accepts incoming connections only from allowed devices (/odl-netconf-callhome-server:netconf-callhome-server/allowed-devices). If you wish to allow all incoming connections, it is possible to set accept-all-ssh-keys to true in /odl-netconf-callhome-server:netconf-callhome-server/global.

The name of these devices in netconf-topology will be in the format ip-address:port. For naming devices see Device-Specific Configuration.
Allowing unknown devices to connect
This is a debug feature and should not be used in production. Besides being an obvious security issue, this also causes the Call-Home Server to drastically increase its output to the log.
POST
/restconf/config/odl-netconf-callhome-server:netconf-callhome-server/global HTTP/1.1
Content-Type: application/json
Accept: application/json
{
"global": {
"accept-all-ssh-keys": "true"
}
}
Device-Specific Configuration¶
Allowing Device & Configuring Name¶
The Netconf Call Home Server uses the device-provided SSH server key (host key) to identify the device. The pairing of name and server key is configured in /odl-netconf-callhome-server:netconf-callhome-server/allowed-devices. This list is colloquially called a whitelist.
If the Call-Home Server finds the SSH host key in the whitelist, it continues to negotiate a NETCONF connection over an SSH session. If the SSH host key is not found, the connection between the Call Home server and the device is dropped immediately. In either case, the device that connects to the Call home server leaves a record of its presence in the operational store.
Example of configuring device
PUT
/restconf/config/odl-netconf-callhome-server:netconf-callhome-server/allowed-devices/device/example HTTP/1.1
Content-Type: application/json
Accept: application/json
{
"device": {
"unique-id": "example",
"ssh-host-key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDHoH1jMjltOJnCt999uaSfc48ySutaD3ISJ9fSECe1Spdq9o9mxj0kBTTTq+2V8hPspuW75DNgN+V/rgJeoUewWwCAasRx9X4eTcRrJrwOQKzb5Fk+UKgQmenZ5uhLAefi2qXX/agFCtZi99vw+jHXZStfHm9TZCAf2zi+HIBzoVksSNJD0VvPo66EAvLn5qKWQD4AdpQQbKqXRf5/W8diPySbYdvOP2/7HFhDukW8yV/7ZtcywFUIu3gdXsrzwMnTqnATSLPPuckoi0V2jd8dQvEcu1DY+rRqmqu0tEkFBurlRZDf1yhNzq5xWY3OXcjgDGN+RxwuWQK3cRimcosH"
}
}
Configuring Device with Device-specific Credentials¶
The Call Home Server also allows credentials to be configured on a per-device basis.
This is done by introducing a credentials
container into the
device-specific configuration. The format is the same as for global credentials.
Configuring Device with Credentials
PUT
/restconf/config/odl-netconf-callhome-server:netconf-callhome-server/allowed-devices/device/example HTTP/1.1
Content-Type: application/json
Accept: application/json
{
"device": {
"unique-id": "example",
"credentials": {
"username": "example",
"passwords": [ "password" ]
},
"ssh-host-key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDHoH1jMjltOJnCt999uaSfc48ySutaD3ISJ9fSECe1Spdq9o9mxj0kBTTTq+2V8hPspuW75DNgN+V/rgJeoUewWwCAasRx9X4eTcRrJrwOQKzb5Fk+UKgQmenZ5uhLAefi2qXX/agFCtZi99vw+jHXZStfHm9TZCAf2zi+HIBzoVksSNJD0VvPo66EAvLn5qKWQD4AdpQQbKqXRf5/W8diPySbYdvOP2/7HFhDukW8yV/7ZtcywFUIu3gdXsrzwMnTqnATSLPPuckoi0V2jd8dQvEcu1DY+rRqmqu0tEkFBurlRZDf1yhNzq5xWY3OXcjgDGN+RxwuWQK3cRimcosH"
}
}
Operational Status¶
Once an entry is made into the config side of “allowed-devices”, the Call-Home Server will populate a corresponding operational device that is the same as the config device but has an additional status. By default, this status is DISCONNECTED. Once a device calls home, this status will change to one of:
CONNECTED — The device is currently connected and the NETCONF mount is available for network management.
FAILED_AUTH_FAILURE — The last attempted connection was unsuccessful because the Call-Home Server was unable to provide credentials acceptable to the device. The device is also disconnected and not available for network management.
FAILED_NOT_ALLOWED — The last attempted connection was unsuccessful because the device was not recognized as an acceptable device. The device is also disconnected and not available for network management.
FAILED — The last attempted connection was unsuccessful for a reason other than not allowed to connect or incorrect client credentials. The device is also disconnected and not available for network management.
DISCONNECTED — The device is currently disconnected.
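Since only the CONNECTED status means the NETCONF mount is usable, a client polling the operational store may want to map each status to an action. A small illustrative sketch (the helper names and the classification are assumptions for the example, not part of any ODL API):

```python
# Statuses reported by the Call-Home Server in the operational store.
FAILED_STATUSES = {"FAILED", "FAILED_AUTH_FAILURE", "FAILED_NOT_ALLOWED"}

def mount_available(status: str) -> bool:
    """Only a CONNECTED device exposes a usable NETCONF mount."""
    return status == "CONNECTED"

def needs_operator_attention(status: str) -> bool:
    """Failed states indicate a credential or whitelist problem to fix."""
    return status in FAILED_STATUSES

print(mount_available("CONNECTED"))                      # True
print(needs_operator_attention("FAILED_AUTH_FAILURE"))   # True
```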
Rogue Devices¶
Devices which are not on the whitelist might try to connect to the Call-Home Server. In these cases, the server will keep a record by instantiating an operational device. There will be no corresponding config device for these rogues. They can be readily identified because their device id, rather than being user-supplied, will be of the form “address:port”. Note that if a device calls back multiple times, there will only be a single operational entry (even if the port changes); these devices are recognized by their unique host key.
Southbound Call-Home API¶
The Call-Home Server listens for incoming TCP connections and assumes that the other side of the connection is a device calling home via a NETCONF connection with SSH for management. The server uses port 6666 by default and this can be configured via a blueprint configuration file.
The device must initiate the connection, and the server will not try to re-establish the connection in case of a drop. By requirement, the server cannot assume it has connectivity to the device, due to NAT or firewalls among other reasons.
RESTCONF Event Notifications¶
RESTCONF Northbound supports YANG-defined event notifications as defined in the specification Subscription to YANG Notifications.
Subscription to notification event streams is done via SSE (Server-sent-events) when using HTTP/1.1 or via HTTP/2 streams when using HTTP/2 protocol.
Clients can subscribe to and receive content from notification streams that are supported by the RESTCONF server. This mechanism works through the use of subscriptions. Currently, only the dynamic subscription mechanism is supported. Subscriptions are created, modified or deleted using the subscription RPC operations defined within the “ietf-subscribed-notifications” YANG module (and augmented from the “ietf-yang-push” YANG module). These RPCs are described in more detail further below.
Establishing a subscription¶
Before subscribing to a notifications stream, the client needs to create a stream subscription. This is done via the “establish-subscription” RPC operation.
Here is an example of a POST request for invocation of such operation:
POST
/restconf/operations/ietf-subscribed-notifications:establish-subscription
Content-Type: application/json
Accept: application/json
{
"ietf-subscribed-notifications:input" : {
"stream": "toaster:toasterRestocked",
"encoding": "encode-json"
}
}
The server returns a reply with the RPC output containing the subscription-result and the identifier of the created subscription. In case the subscription could not be created, a negative subscription-result is returned within the RPC output.
{
"output" : {
"subscription-result": "ietf-subscribed-notifications:ok",
"identifier": 123
}
}
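A client typically extracts the identifier from this reply before opening the notification stream. A hedged Python sketch parsing the reply shown above (the helper function is illustrative, not part of any ODL client library):

```python
import json

def parse_establish_reply(reply_text):
    """Return the subscription identifier, or None if the result was negative."""
    output = json.loads(reply_text)["output"]
    if output.get("subscription-result") != "ietf-subscribed-notifications:ok":
        return None
    return output["identifier"]

reply = ('{"output": {"subscription-result": '
         '"ietf-subscribed-notifications:ok", "identifier": 123}}')
print(parse_establish_reply(reply))  # 123
```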
The identifier is then used by the client to subscribe to the specified event stream and start receiving notifications. In order to do this, the client issues the following request using the GET method for HTTP/1.1 or POST method for HTTP/2 connections:
POST
/restconf/notification/toaster:toasterRestocked/123
Content-Type: application/json
Accept: application/json
The server will now start sending event notifications back to the client within an SSE event stream if HTTP/1.1 was used for the request, or within an HTTP/2 stream in case the request was made with HTTP/2.
Notification with JSON encoding:
{
"ietf-restconf:notification": {
"toaster:toasterRestocked": {
"amountOfBread":100
}
},
"event-time": "2018-03-07T13:33:06.141+01:00"
}
Notification with XML encoding:
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2018-03-07T14:05:06.119+01:00</eventTime>
<toasterRestocked xmlns="http://netconfcentral.org/ns/toaster">
<amountOfBread>100</amountOfBread>
</toasterRestocked>
</notification>
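When HTTP/1.1 is used, each JSON notification arrives as a Server-sent event, i.e. one or more `data:` lines followed by a blank line. A minimal parser sketch under that assumption (the function name is illustrative):

```python
import json

def iter_sse_json(lines):
    """Yield decoded JSON objects from an SSE stream given as an iterable of lines."""
    data = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            # A blank line terminates one SSE event; multiple data lines
            # are joined with newlines per the SSE specification.
            yield json.loads("\n".join(data))
            data = []

stream = [
    'data: {"ietf-restconf:notification": {"toaster:toasterRestocked":',
    'data: {"amountOfBread": 100}}, "event-time": "2018-03-07T13:33:06.141+01:00"}',
    '',
]
for event in iter_sse_json(stream):
    print(event["ietf-restconf:notification"]["toaster:toasterRestocked"]["amountOfBread"])  # 100
```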
The “establish-subscription” RPC requires at least the input parameter “stream” to be present in the request message-body. This parameter specifies the notification stream for which the subscription is established. The value of the parameter must consist of a YANG module name and the name of a notification in that module, separated by a colon. Each subscription belongs to exactly one notification stream. Other input parameters supported for this RPC are encoding, replay-start-time, stop-time and period. They are described further below.
The data encoding of notifications can be specified with the “establish-subscription” input parameter “encoding”. Notifications can be streamed in either XML or JSON encoding (supported parameter values are “encode-xml” or “encode-json”). If the “encoding” parameter is not present in the RPC input or an unsupported value is used, the server will default to “encode-xml” format.
If the client wants to receive notifications in periodic intervals, the “period” parameter must be present in the “establish-subscription” RPC input. This parameter is augmented into the RPC from the “ietf-yang-push” YANG module, so the module name prefix has to be prepended to the parameter name. The value of “period” must be a uint32 number.
Here is an example of request message-body for establishing a periodic subscription:
{
"ietf-subscribed-notifications:input" : {
"stream": "toaster:toasterRestocked",
"encoding": "encode-json",
"ietf-yang-push:period": 3
}
}
In this case, notifications will be sent to the client every three seconds.
Another parameter which may be specified in “establish-subscription” RPC input is the “replay-start-time”. The inclusion of this parameter within an “establish-subscription” RPC indicates a replay request. A subscription established through such request is also capable of passing along recently generated event records. In other words, as the subscription initializes itself, it sends any previously generated content from within target event stream which meets the time frame criteria. These historical event records are then followed by event records generated after the subscription has been established. All event records are delivered in the order generated.
Here is an example of request message-body for establishing a replay request:
{
"ietf-subscribed-notifications:input" : {
"stream": "toaster:toasterRestocked",
"encoding": "encode-json",
"replay-start-time": "2018-03-07T14:05:00.00Z"
}
}
The client will receive all notifications which have been generated for the target event stream since the date and time specified in the “replay-start-time” value up to the present, followed by a “replay-completed” notification. The date and time value is in UTC format.
{
"ietf-restconf:notification": {
"toaster:toasterRestocked": {
"amountOfBread":100
}
},
"event-time":"2018-03-07T14:05:06.096+01:00"
}
{
"ietf-restconf:notification": {
"toaster:toasterRestocked": {
"amountOfBread":100
}
},
"event-time":"2018-03-07T14:13:00.777+01:00"
}
{
"ietf-restconf:notification": {
"ietf-subscribed-notifications:replay-completed": {
"identifier": 123
}
},
"event-time":"2018-03-07T14:23:05.856+01:00"
}
Active subscriptions can also be scheduled to stop using the “stop-time” parameter. The value of this parameter is a date and time in UTC format.
Here is an example of request message-body for establishing a subscription that is scheduled to stop:
{
"ietf-subscribed-notifications:input" : {
"stream": "toaster:toasterRestocked",
"encoding": "encode-json",
"stop-time": "2018-01-24T08:55:00.00Z"
}
}
The subscription will cease to exist once the specified date and time is reached. A “subscription-completed” notification is sent back to the client afterwards.
{
"ietf-restconf:notification": {
"ietf-subscribed-notifications:subscription-completed": {
"identifier": 123
}
},
"event-time":"2018-03-07T15:13:05.347+01:00"
}
The value of the “stop-time” parameter must be a future time, otherwise it will be ignored. However, if combined with “replay-start-time”, it may point to the past, but it must be later than the “replay-start-time”. The combination of these two parameters will result in a replay of notifications from within the time frame delimited by “replay-start-time” and “stop-time”.
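These rules can be checked client-side before sending the RPC. A small sketch, assuming timestamps are supplied as timezone-aware datetimes (the helper function is illustrative, not part of any ODL API):

```python
from datetime import datetime, timezone

def stop_time_valid(stop_time, replay_start_time=None, now=None):
    """Apply the guide's rules: stop-time must be in the future, unless a
    replay-start-time is given, in which case stop-time may lie in the
    past but must still be later than replay-start-time."""
    now = now or datetime.now(timezone.utc)
    if replay_start_time is not None:
        return stop_time > replay_start_time
    return stop_time > now

now = datetime(2018, 3, 7, 15, 0, tzinfo=timezone.utc)
past_stop = datetime(2018, 3, 7, 14, 0, tzinfo=timezone.utc)
replay = datetime(2018, 3, 7, 13, 0, tzinfo=timezone.utc)
print(stop_time_valid(past_stop, now=now))          # False: a past stop-time alone is ignored
print(stop_time_valid(past_stop, replay, now=now))  # True: replay window 13:00-14:00
```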
Modifying a subscription¶
Once a subscription has been established, the client can modify some of its properties using the “modify-subscription” RPC. The identifier of the subscription must be specified in the RPC input. This operation can only be performed using the same transport session as the one that was used for establishing the subscription. RESTCONF Northbound currently supports modification of “period” and “stop-time” parameters.
Here is an example of a request payload for changing the terms of an existing subscription:
{
"ietf-subscribed-notifications:input" : {
"identifier": 123,
"ietf-yang-push:period": 5,
"stop-time": "2018-01-24T08:55:00.00Z"
}
}
The server will respond with a corresponding RPC output specifying whether the operation was successful or not.
{
"output" : {
"subscription-result": "ietf-subscribed-notifications:ok"
}
}
Deleting a subscription¶
Once a subscription has been established, the client can remove it using the “delete-subscription” RPC operation. The identifier of the subscription must be specified in the RPC input.
Here is an example of a request which cancels an existing subscription with the specified identifier:
POST
/restconf/operations/ietf-subscribed-notifications:delete-subscription
Content-Type: application/json
Accept: application/json
{
"ietf-subscribed-notifications:input" : {
"identifier": 123
}
}
After a successful deletion request, no more notifications will be sent for the subscription.
However, a subscription can only be deleted using the same transport session as the one that was used for subscription establishment.
Killing a subscription¶
Once a subscription has been established, the client can forcibly terminate it using the “kill-subscription” RPC operation. The identifier of the subscription must be specified in the RPC input.
Here is an example of a request which terminates an existing subscription with the specified identifier:
POST
/restconf/operations/ietf-subscribed-notifications:kill-subscription
Content-Type: application/json
Accept: application/json
{
"ietf-subscribed-notifications:input" : {
"identifier": 123
}
}
If the operation is successful, a “subscription-terminated” notification is sent back to the client.
{
"ietf-restconf:notification": {
"ietf-subscribed-notifications:subscription-terminated": {
"identifier":123
}
},
"event-time":"2018-03-07T14:13:53.895+01:00"
}
Unlike the “delete-subscription” RPC, “kill-subscription” can cancel any subscription even when the transport session being used is not the same as the one that was used for subscription creation.
Retrieving information about all active subscriptions¶
Information about active notification stream subscriptions can be obtained using the following GET request on the subscriptions container:
GET
/restconf/data/ietf-subscribed-notifications:subscriptions
Content-Type: application/json
Accept: application/json
Subscriptions are removed from the list once they expire (reaching stop-time) or when they are terminated by a “kill-subscription” or “delete-subscription” operation.
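The response lists each active subscription as an entry of the "subscription" list inside the "subscriptions" container, following the "ietf-subscribed-notifications" YANG module. A sketch that extracts the identifiers from such a response (the sample content below is illustrative, not captured from a live controller):

```python
import json

# Assumed response shape: "subscription" list inside the
# "ietf-subscribed-notifications:subscriptions" container.
sample = '''
{
  "ietf-subscribed-notifications:subscriptions": {
    "subscription": [
      {"identifier": 123,
       "stream": "toaster:toasterRestocked",
       "encoding": "encode-json"}
    ]
  }
}
'''

subs = json.loads(sample)["ietf-subscribed-notifications:subscriptions"]["subscription"]
for sub in subs:
    print(sub["identifier"], sub["stream"])  # 123 toaster:toasterRestocked
```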