Started by upstream project "integration-distribution-test-titanium" build number 365
originally caused by:
Started by upstream project "autorelease-release-titanium-mvn39-openjdk21" build number 356
originally caused by:
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-centos8-robot-2c-8g-1708 (centos8-robot-2c-8g) in workspace /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-rtLLyadGEGns/agent.5276
SSH_AGENT_PID=5277
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium@tmp/private_key_10333249922350968183.key (/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium@tmp/private_key_10333249922350968183.key)
[ssh-agent] Using credentials jenkins (Release Engineering Jenkins Key)
The recommended git tool is: NONE
using credential opendaylight-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://devvexx.opendaylight.org/mirror/integration/test
> git init /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test # timeout=10
Fetching upstream changes from git://devvexx.opendaylight.org/mirror/integration/test
> git --version # timeout=10
> git --version # 'git version 2.43.0'
using GIT_SSH to set credentials Release Engineering Jenkins Key
[INFO] Currently running in a labeled security context
[INFO] Currently SELinux is 'enforcing' on the host
> /usr/bin/chcon --type=ssh_home_t /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test@tmp/jenkins-gitclient-ssh10589598184715485022.key
Verifying host key using known hosts file
You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
> git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/integration/test +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url git://devvexx.opendaylight.org/mirror/integration/test # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url git://devvexx.opendaylight.org/mirror/integration/test # timeout=10
Fetching upstream changes from git://devvexx.opendaylight.org/mirror/integration/test
using GIT_SSH to set credentials Release Engineering Jenkins Key
[INFO] Currently running in a labeled security context
[INFO] Currently SELinux is 'enforcing' on the host
> /usr/bin/chcon --type=ssh_home_t /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test@tmp/jenkins-gitclient-ssh4879124978242033295.key
Verifying host key using known hosts file
You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
> git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/integration/test master # timeout=10
> git rev-parse FETCH_HEAD^{commit} # timeout=10
Checking out Revision 9e7a2f1bec76f24ac7173c3a00f09ed1af208887 (origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 9e7a2f1bec76f24ac7173c3a00f09ed1af208887 # timeout=10
Commit message: "Add pekko templates"
> git rev-parse FETCH_HEAD^{commit} # timeout=10
> git rev-list --no-walk e12906d887353b3b6c7ca6e293959c75cf9a8409 # timeout=10
No emails were triggered.
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins15036472166376922905.sh
---> python-tools-install.sh
Setup pyenv:
system
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.13 (set by /opt/pyenv/version)
* 3.11.7 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-2J6V
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
Generating Requirements File
Python 3.11.7
pip 25.2 from /tmp/venv-2J6V/lib/python3.11/site-packages/pip (python 3.11)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.40.11
botocore==1.40.11
bs4==0.0.2
cachetools==5.5.2
certifi==2025.8.3
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.3
click==8.2.1
cliff==4.10.0
cmd2==2.7.0
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.4.0
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.19.1
future==1.0.0
gitdb==4.0.12
GitPython==3.1.45
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.13
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.25.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==6.0.0
markdown-it-py==4.0.0
MarkupSafe==3.0.2
mdurl==0.1.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.1
openstacksdk==4.7.0
os-client-config==2.3.0
os-service-types==1.8.0
osc-lib==4.1.0
oslo.config==10.0.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.2.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==7.0.0
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.7.0
Pygments==2.19.2
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.3.0
python-jenkins==1.8.3
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rich==14.1.0
rich-argparse==1.7.1
rpds-py==0.27.0
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.1
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.1
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.34.0
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.3
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
OS_STACK_TEMPLATE=csit-2-instance-type.yaml
OS_CLOUD=vex
OS_STACK_NAME=releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358
OS_STACK_TEMPLATE_DIR=openstack-hot
[EnvInject] - Variables injected successfully.
provisioning config files...
copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins5927149899283103494.sh
---> Create parameters file for OpenStack HOT
OpenStack Heat parameters generated
-----------------------------------
parameters:
vm_0_count: '3'
vm_0_flavor: 'v3-standard-4'
vm_0_image: 'ZZCI - Ubuntu 22.04 - builder - x86_64 - 20250201-010426.857'
vm_1_count: '1'
vm_1_flavor: 'v3-standard-2'
vm_1_image: 'ZZCI - Ubuntu 22.04 - mininet-ovs-217 - x86_64 - 20250201-060151.911'
job_name: '36866-358'
silo: 'releng'
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins15257534254013264228.sh
---> Create HEAT stack
+ source /home/jenkins/lf-env.sh
+ lf-activate-venv --python python3 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient yq
++ mktemp -d /tmp/venv-XXXX
+ lf_venv=/tmp/venv-l5WJ
+ local venv_file=/tmp/.os_lf_venv
+ local python=python3
+ local options
+ local set_path=true
+ local install_args=
++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --python python3 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient yq
+ options=' --python '\''python3'\'' -- '\''lftools[openstack]'\'' '\''kubernetes'\'' '\''niet'\'' '\''python-heatclient'\'' '\''python-openstackclient'\'' '\''python-magnumclient'\'' '\''yq'\'''
+ eval set -- ' --python '\''python3'\'' -- '\''lftools[openstack]'\'' '\''kubernetes'\'' '\''niet'\'' '\''python-heatclient'\'' '\''python-openstackclient'\'' '\''python-magnumclient'\'' '\''yq'\'''
++ set -- --python python3 -- 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient yq
+ true
+ case $1 in
+ python=python3
+ shift 2
+ true
+ case $1 in
+ shift
+ break
+ case $python in
+ local pkg_list=
+ [[ -d /opt/pyenv ]]
+ echo 'Setup pyenv:'
Setup pyenv:
+ export PYENV_ROOT=/opt/pyenv
+ PYENV_ROOT=/opt/pyenv
+ export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
+ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
+ pyenv versions
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
+ command -v pyenv
++ pyenv init - --no-rehash
+ eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
for i in ${!paths[@]}; do
if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
fi; done;
echo "${paths[*]}"'\'')"
export PATH="/opt/pyenv/shims:${PATH}"
export PYENV_SHELL=bash
source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
pyenv() {
local command
command="${1:-}"
if [ "$#" -gt 0 ]; then
shift
fi
case "$command" in
rehash|shell)
eval "$(pyenv "sh-$command" "$@")"
;;
*)
command pyenv "$command" "$@"
;;
esac
}'
+++ bash --norc -ec 'IFS=:; paths=($PATH);
for i in ${!paths[@]}; do
if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
fi; done;
echo "${paths[*]}"'
++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
++ export PYENV_SHELL=bash
++ PYENV_SHELL=bash
++ source /opt/pyenv/libexec/../completions/pyenv.bash
+++ complete -F _pyenv pyenv
++ lf-pyver python3
++ local py_version_xy=python3
++ local py_version_xyz=
++ pyenv versions
++ local command
++ command=versions
++ '[' 1 -gt 0 ']'
++ shift
++ case "$command" in
++ command pyenv versions
++ pyenv versions
++ sed 's/^[ *]* //'
++ grep -E '^[0-9.]*[0-9]$'
++ awk '{ print $1 }'
++ [[ ! -s /tmp/.pyenv_versions ]]
+++ tail -n 1
+++ grep '^3' /tmp/.pyenv_versions
+++ sort -V
++ py_version_xyz=3.11.7
++ [[ -z 3.11.7 ]]
++ echo 3.11.7
++ return 0
+ pyenv local 3.11.7
+ local command
+ command=local
+ '[' 2 -gt 0 ']'
+ shift
+ case "$command" in
+ command pyenv local 3.11.7
+ pyenv local 3.11.7
+ for arg in "$@"
+ case $arg in
+ pkg_list+='lftools[openstack] '
+ for arg in "$@"
+ case $arg in
+ pkg_list+='kubernetes '
+ for arg in "$@"
+ case $arg in
+ pkg_list+='niet '
+ for arg in "$@"
+ case $arg in
+ pkg_list+='python-heatclient '
+ for arg in "$@"
+ case $arg in
+ pkg_list+='python-openstackclient '
+ for arg in "$@"
+ case $arg in
+ pkg_list+='python-magnumclient '
+ for arg in "$@"
+ case $arg in
+ pkg_list+='yq '
+ [[ -f /tmp/.os_lf_venv ]]
++ cat /tmp/.os_lf_venv
+ lf_venv=/tmp/venv-2J6V
+ echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from' file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
+ /tmp/venv-2J6V/bin/python3 -m pip install --upgrade --quiet pip 'setuptools<66' virtualenv
+ [[ -z lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient yq ]]
+ echo 'lf-activate-venv(): INFO: Installing: lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient yq '
lf-activate-venv(): INFO: Installing: lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient yq
+ /tmp/venv-2J6V/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient yq
+ type python3
+ true
+ echo 'lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH'
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
+ PATH=/tmp/venv-2J6V/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
+ return 0
+ openstack --os-cloud vex limits show --absolute
+--------------------------+---------+
| Name | Value |
+--------------------------+---------+
| maxTotalInstances | -1 |
| maxTotalCores | 450 |
| maxTotalRAMSize | 1000000 |
| maxServerMeta | 128 |
| maxImageMeta | 128 |
| maxPersonality | 5 |
| maxPersonalitySize | 10240 |
| maxTotalKeypairs | 100 |
| maxServerGroups | 10 |
| maxServerGroupMembers | 10 |
| maxTotalFloatingIps | -1 |
| maxSecurityGroups | -1 |
| maxSecurityGroupRules | -1 |
| totalRAMUsed | 671744 |
| totalCoresUsed | 164 |
| totalInstancesUsed | 53 |
| totalFloatingIpsUsed | 0 |
| totalSecurityGroupsUsed | 0 |
| totalServerGroupsUsed | 0 |
| maxTotalVolumes | -1 |
| maxTotalSnapshots | 10 |
| maxTotalVolumeGigabytes | 4096 |
| maxTotalBackups | 10 |
| maxTotalBackupGigabytes | 1000 |
| totalVolumesUsed | 3 |
| totalGigabytesUsed | 60 |
| totalSnapshotsUsed | 0 |
| totalBackupsUsed | 0 |
| totalBackupGigabytesUsed | 0 |
+--------------------------+---------+
+ pushd /opt/ciman/openstack-hot
/opt/ciman/openstack-hot /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium
+ lftools openstack --os-cloud vex stack create releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358 csit-2-instance-type.yaml /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/stack-parameters.yaml
Creating stack releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358
Waiting to initialize infrastructure...
Waiting to initialize infrastructure...
Waiting to initialize infrastructure...
Stack initialization successful.
------------------------------------
Stack Details
------------------------------------
{'added': None,
'capabilities': [],
'created_at': '2025-08-16T00:53:42Z',
'deleted': None,
'deleted_at': None,
'description': 'No description',
'environment': None,
'environment_files': None,
'files': None,
'files_container': None,
'id': '215e6a8d-4d8f-4d1a-ae89-a40fefa379f8',
'is_rollback_disabled': True,
'links': [{'href': 'https://orchestration.public.mtl1.vexxhost.net/v1/12c36e260d8e4bb2913965203b1b491f/stacks/releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358/215e6a8d-4d8f-4d1a-ae89-a40fefa379f8',
'rel': 'self'}],
'location': Munch({'cloud': 'vex', 'region_name': 'ca-ymq-1', 'zone': None, 'project': Munch({'id': '12c36e260d8e4bb2913965203b1b491f', 'name': '61975f2c-7c17-4d69-82fa-c3ae420ad6fd', 'domain_id': None, 'domain_name': 'Default'})}),
'name': 'releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358',
'notification_topics': [],
'outputs': [{'description': 'IP addresses of the 2nd vm types',
'output_key': 'vm_1_ips',
'output_value': ['10.30.170.78']},
{'description': 'IP addresses of the 1st vm types',
'output_key': 'vm_0_ips',
'output_value': ['10.30.171.178',
'10.30.170.138',
'10.30.170.234']}],
'owner_id': ****,
'parameters': {'OS::project_id': '12c36e260d8e4bb2913965203b1b491f',
'OS::stack_id': '215e6a8d-4d8f-4d1a-ae89-a40fefa379f8',
'OS::stack_name': 'releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358',
'job_name': '36866-358',
'silo': 'releng',
'vm_0_count': '3',
'vm_0_flavor': 'v3-standard-4',
'vm_0_image': 'ZZCI - Ubuntu 22.04 - builder - x86_64 - '
'20250201-010426.857',
'vm_1_count': '1',
'vm_1_flavor': 'v3-standard-2',
'vm_1_image': 'ZZCI - Ubuntu 22.04 - mininet-ovs-217 - x86_64 '
'- 20250201-060151.911'},
'parent_id': None,
'replaced': None,
'status': 'CREATE_COMPLETE',
'status_reason': 'Stack CREATE completed successfully',
'tags': [],
'template': None,
'template_description': 'No description',
'template_url': None,
'timeout_mins': 15,
'unchanged': None,
'updated': None,
'updated_at': None,
'user_project_id': 'bd1f5abf0eb24797ad6013e3cb86c53f'}
------------------------------------
+ popd
/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins14115665812164301218.sh
---> Copy SSH public keys to CSIT lab
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools[openstack] kubernetes python-heatclient python-openstackclient
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.170.78' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
releng-36866-358-0-builder-0
Successfully copied public keys to slave 10.30.171.178
Process 6497 ready.
releng-36866-358-0-builder-1
Successfully copied public keys to slave 10.30.170.138
Process 6498 ready.
releng-36866-358-1-mininet-ovs-217-0
Successfully copied public keys to slave 10.30.170.78
releng-36866-358-0-builder-2
Successfully copied public keys to slave 10.30.170.234
Process 6499 ready.
Process 6500 ready.
SSH ready on all stack servers.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins8061511004600466684.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-etFa
lf-activate-venv(): INFO: Save venv in file: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.robot_venv
lf-activate-venv(): INFO: Installing: setuptools wheel
lf-activate-venv(): INFO: Adding /tmp/venv-etFa/bin to PATH
+ echo 'Installing Python Requirements'
Installing Python Requirements
+ cat
+ python -m pip install -r requirements.txt
Looking in indexes: https://nexus3.opendaylight.org/repository/PyPi/simple
Collecting docker-py (from -r requirements.txt (line 1))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/docker-py/1.10.6/docker_py-1.10.6-py2.py3-none-any.whl (50 kB)
Collecting ipaddr (from -r requirements.txt (line 2))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/ipaddr/2.2.0/ipaddr-2.2.0.tar.gz (26 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting netaddr (from -r requirements.txt (line 3))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/netaddr/1.3.0/netaddr-1.3.0-py3-none-any.whl (2.3 MB)
Collecting netifaces (from -r requirements.txt (line 4))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/netifaces/0.11.0/netifaces-0.11.0.tar.gz (30 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting pyhocon (from -r requirements.txt (line 5))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyhocon/0.3.61/pyhocon-0.3.61-py3-none-any.whl (25 kB)
Collecting requests (from -r requirements.txt (line 6))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/requests/2.32.4/requests-2.32.4-py3-none-any.whl (64 kB)
Collecting robotframework (from -r requirements.txt (line 7))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework/7.3.2/robotframework-7.3.2-py3-none-any.whl (795 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 795.1/795.1 kB 27.5 MB/s 0:00:00
Collecting robotframework-httplibrary (from -r requirements.txt (line 8))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-httplibrary/0.4.2/robotframework-httplibrary-0.4.2.tar.gz (9.1 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting robotframework-requests==0.9.7 (from -r requirements.txt (line 9))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-requests/0.9.7/robotframework_requests-0.9.7-py3-none-any.whl (21 kB)
Collecting robotframework-selenium2library (from -r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-selenium2library/3.0.0/robotframework_selenium2library-3.0.0-py2.py3-none-any.whl (6.2 kB)
Collecting robotframework-sshlibrary==3.8.0 (from -r requirements.txt (line 11))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-sshlibrary/3.8.0/robotframework-sshlibrary-3.8.0.tar.gz (51 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting scapy (from -r requirements.txt (line 12))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/scapy/2.6.1/scapy-2.6.1-py3-none-any.whl (2.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 51.5 MB/s 0:00:00
Collecting jsonpath-rw (from -r requirements.txt (line 15))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpath-rw/1.4.0/jsonpath-rw-1.4.0.tar.gz (13 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting elasticsearch (from -r requirements.txt (line 18))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch/9.1.0/elasticsearch-9.1.0-py3-none-any.whl (929 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 929.5/929.5 kB 16.0 MB/s 0:00:00
Collecting elasticsearch-dsl (from -r requirements.txt (line 19))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.18.0/elasticsearch_dsl-8.18.0-py3-none-any.whl (10 kB)
Collecting pyangbind (from -r requirements.txt (line 22))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyangbind/0.8.6/pyangbind-0.8.6-py3-none-any.whl (52 kB)
Collecting isodate (from -r requirements.txt (line 25))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/isodate/0.7.2/isodate-0.7.2-py3-none-any.whl (22 kB)
Collecting jmespath (from -r requirements.txt (line 28))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jmespath/1.0.1/jmespath-1.0.1-py3-none-any.whl (20 kB)
Collecting jsonpatch (from -r requirements.txt (line 31))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpatch/1.33/jsonpatch-1.33-py2.py3-none-any.whl (12 kB)
Collecting paramiko>=1.15.3 (from robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/paramiko/4.0.0/paramiko-4.0.0-py3-none-any.whl (223 kB)
Collecting scp>=0.13.0 (from robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/scp/0.15.0/scp-0.15.0-py2.py3-none-any.whl (8.8 kB)
Collecting docker-pycreds>=0.2.1 (from docker-py->-r requirements.txt (line 1))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/docker-pycreds/0.4.0/docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)
Collecting six>=1.4.0 (from docker-py->-r requirements.txt (line 1))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/six/1.17.0/six-1.17.0-py2.py3-none-any.whl (11 kB)
Collecting websocket-client>=0.32.0 (from docker-py->-r requirements.txt (line 1))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/websocket-client/1.8.0/websocket_client-1.8.0-py3-none-any.whl (58 kB)
Collecting pyparsing<4,>=2 (from pyhocon->-r requirements.txt (line 5))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pyparsing/3.2.3/pyparsing-3.2.3-py3-none-any.whl (111 kB)
Collecting charset_normalizer<4,>=2 (from requests->-r requirements.txt (line 6))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/charset-normalizer/3.4.3/charset_normalizer-3.4.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (150 kB)
Collecting idna<4,>=2.5 (from requests->-r requirements.txt (line 6))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/idna/3.10/idna-3.10-py3-none-any.whl (70 kB)
Collecting urllib3<3,>=1.21.1 (from requests->-r requirements.txt (line 6))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/urllib3/2.5.0/urllib3-2.5.0-py3-none-any.whl (129 kB)
Collecting certifi>=2017.4.17 (from requests->-r requirements.txt (line 6))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/certifi/2025.8.3/certifi-2025.8.3-py3-none-any.whl (161 kB)
Collecting webtest>=2.0 (from robotframework-httplibrary->-r requirements.txt (line 8))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/webtest/3.0.6/webtest-3.0.6-py3-none-any.whl (32 kB)
Collecting jsonpointer (from robotframework-httplibrary->-r requirements.txt (line 8))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpointer/3.0.0/jsonpointer-3.0.0-py2.py3-none-any.whl (7.6 kB)
Collecting robotframework-seleniumlibrary>=3.0.0 (from robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-seleniumlibrary/6.7.1/robotframework_seleniumlibrary-6.7.1-py2.py3-none-any.whl (104 kB)
Collecting ply (from jsonpath-rw->-r requirements.txt (line 15))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/ply/3.11/ply-3.11-py2.py3-none-any.whl (49 kB)
Collecting decorator (from jsonpath-rw->-r requirements.txt (line 15))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/decorator/5.2.1/decorator-5.2.1-py3-none-any.whl (9.2 kB)
Collecting elastic-transport<10,>=9.1.0 (from elasticsearch->-r requirements.txt (line 18))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elastic-transport/9.1.0/elastic_transport-9.1.0-py3-none-any.whl (65 kB)
Collecting python-dateutil (from elasticsearch->-r requirements.txt (line 18))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/python-dateutil/2.9.0.post0/python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Collecting typing-extensions (from elasticsearch->-r requirements.txt (line 18))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/typing-extensions/4.14.1/typing_extensions-4.14.1-py3-none-any.whl (43 kB)
Collecting elasticsearch (from -r requirements.txt (line 18))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch/8.19.0/elasticsearch-8.19.0-py3-none-any.whl (926 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 926.9/926.9 kB 49.1 MB/s 0:00:00
INFO: pip is looking at multiple versions of elasticsearch-dsl to determine which version is compatible with other requirements. This could take a while.
Collecting elasticsearch-dsl (from -r requirements.txt (line 19))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.17.1/elasticsearch_dsl-8.17.1-py3-none-any.whl (158 kB)
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.17.0/elasticsearch_dsl-8.17.0-py3-none-any.whl (158 kB)
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.16.0/elasticsearch_dsl-8.16.0-py3-none-any.whl (158 kB)
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.15.4/elasticsearch_dsl-8.15.4-py3-none-any.whl (104 kB)
Collecting elastic-transport<9,>=8.15.1 (from elasticsearch->-r requirements.txt (line 18))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elastic-transport/8.17.1/elastic_transport-8.17.1-py3-none-any.whl (64 kB)
Collecting pyang (from pyangbind->-r requirements.txt (line 22))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyang/2.6.1/pyang-2.6.1-py2.py3-none-any.whl (594 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 594.7/594.7 kB 27.4 MB/s 0:00:00
Collecting lxml (from pyangbind->-r requirements.txt (line 22))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/lxml/6.0.0/lxml-6.0.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (5.2 MB)
Collecting regex (from pyangbind->-r requirements.txt (line 22))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/regex/2025.7.34/regex-2025.7.34-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (798 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 798.9/798.9 kB 36.2 MB/s 0:00:00
Collecting enum34 (from pyangbind->-r requirements.txt (line 22))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/enum34/1.1.10/enum34-1.1.10-py3-none-any.whl (11 kB)
Collecting bcrypt>=3.2 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/bcrypt/4.3.0/bcrypt-4.3.0-cp39-abi3-manylinux_2_28_x86_64.whl (284 kB)
Collecting cryptography>=3.3 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/cryptography/45.0.6/cryptography-45.0.6-cp311-abi3-manylinux_2_28_x86_64.whl (4.5 MB)
Collecting invoke>=2.0 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/invoke/2.2.0/invoke-2.2.0-py3-none-any.whl (160 kB)
Collecting pynacl>=1.5 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pynacl/1.5.0/PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (856 kB)
Collecting cffi>=1.14 (from cryptography>=3.3->paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/cffi/1.17.1/cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (467 kB)
Collecting pycparser (from cffi>=1.14->cryptography>=3.3->paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pycparser/2.22/pycparser-2.22-py3-none-any.whl (117 kB)
Collecting selenium>=4.3.0 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/selenium/4.35.0/selenium-4.35.0-py3-none-any.whl (9.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.6/9.6 MB 77.2 MB/s 0:00:00
Collecting robotframework-pythonlibcore>=4.4.1 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-pythonlibcore/4.4.1/robotframework_pythonlibcore-4.4.1-py2.py3-none-any.whl (12 kB)
Collecting click>=8.0 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/click/8.2.1/click-8.2.1-py3-none-any.whl (102 kB)
Collecting trio~=0.30.0 (from selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/trio/0.30.0/trio-0.30.0-py3-none-any.whl (499 kB)
Collecting trio-websocket~=0.12.2 (from selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/trio-websocket/0.12.2/trio_websocket-0.12.2-py3-none-any.whl (21 kB)
Collecting attrs>=23.2.0 (from trio~=0.30.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/attrs/25.3.0/attrs-25.3.0-py3-none-any.whl (63 kB)
Collecting sortedcontainers (from trio~=0.30.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/sortedcontainers/2.4.0/sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Collecting outcome (from trio~=0.30.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/outcome/1.3.0.post0/outcome-1.3.0.post0-py2.py3-none-any.whl (10 kB)
Collecting sniffio>=1.3.0 (from trio~=0.30.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/sniffio/1.3.1/sniffio-1.3.1-py3-none-any.whl (10 kB)
Collecting wsproto>=0.14 (from trio-websocket~=0.12.2->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/wsproto/1.2.0/wsproto-1.2.0-py3-none-any.whl (24 kB)
Collecting pysocks!=1.5.7,<2.0,>=1.5.6 (from urllib3[socks]<3.0,>=2.5.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pysocks/1.7.1/PySocks-1.7.1-py3-none-any.whl (16 kB)
Collecting WebOb>=1.2 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/webob/1.8.9/WebOb-1.8.9-py2.py3-none-any.whl (115 kB)
Collecting waitress>=3.0.2 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/waitress/3.0.2/waitress-3.0.2-py3-none-any.whl (56 kB)
Collecting beautifulsoup4 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/beautifulsoup4/4.13.4/beautifulsoup4-4.13.4-py3-none-any.whl (187 kB)
Collecting h11<1,>=0.9.0 (from wsproto>=0.14->trio-websocket~=0.12.2->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10))
Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/h11/0.16.0/h11-0.16.0-py3-none-any.whl (37 kB)
Collecting soupsieve>1.2 (from beautifulsoup4->webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8))
Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/soupsieve/2.7/soupsieve-2.7-py3-none-any.whl (36 kB)
Building wheels for collected packages: robotframework-sshlibrary, ipaddr, netifaces, robotframework-httplibrary, jsonpath-rw
DEPRECATION: Building 'robotframework-sshlibrary' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'robotframework-sshlibrary'. Discussion can be found at https://github.com/pypa/pip/issues/6334
Building wheel for robotframework-sshlibrary (setup.py): started
Building wheel for robotframework-sshlibrary (setup.py): finished with status 'done'
Created wheel for robotframework-sshlibrary: filename=robotframework_sshlibrary-3.8.0-py3-none-any.whl size=55205 sha256=7dd863c6db741a16c1b5c80293cf8f4577389338dca9a4d004ed326cc598f247
Stored in directory: /home/jenkins/.cache/pip/wheels/f7/c9/b3/a977b7bcc410d45ae27d240df3d00a12585509180e373ecccc
DEPRECATION: Building 'ipaddr' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'ipaddr'. Discussion can be found at https://github.com/pypa/pip/issues/6334
Building wheel for ipaddr (setup.py): started
Building wheel for ipaddr (setup.py): finished with status 'done'
Created wheel for ipaddr: filename=ipaddr-2.2.0-py3-none-any.whl size=18353 sha256=5a5ce0c737fd7711a0ff6452a2509d13e82a1bfb09d3ce87755bbb37d019dece
Stored in directory: /home/jenkins/.cache/pip/wheels/dc/6c/04/da2d847fa8d45c59af3e1d83e2acc29cb8adcbaf04c0898dbf
DEPRECATION: Building 'netifaces' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'netifaces'. Discussion can be found at https://github.com/pypa/pip/issues/6334
Building wheel for netifaces (setup.py): started
Building wheel for netifaces (setup.py): finished with status 'done'
Created wheel for netifaces: filename=netifaces-0.11.0-cp311-cp311-linux_x86_64.whl size=41084 sha256=b5204d26d6ec4475a9b5bf7e931ae0bd7581fa33c9d5456c72fe19889b6ea5af
Stored in directory: /home/jenkins/.cache/pip/wheels/f8/18/88/e61d54b995bea304bdb1d040a92b72228a1bf72ca2a3eba7c9
DEPRECATION: Building 'robotframework-httplibrary' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'robotframework-httplibrary'. Discussion can be found at https://github.com/pypa/pip/issues/6334
Building wheel for robotframework-httplibrary (setup.py): started
Building wheel for robotframework-httplibrary (setup.py): finished with status 'done'
Created wheel for robotframework-httplibrary: filename=robotframework_httplibrary-0.4.2-py3-none-any.whl size=10014 sha256=2aada922cac5889ae7f1c53c0f2aa1cdb312ec3880a9434f43a8132a28dc187d
Stored in directory: /home/jenkins/.cache/pip/wheels/aa/bc/0d/9a20dd51effef392aae2733cb4c7b66c6fa29fca33d88b57ed
DEPRECATION: Building 'jsonpath-rw' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'jsonpath-rw'. Discussion can be found at https://github.com/pypa/pip/issues/6334
Building wheel for jsonpath-rw (setup.py): started
Building wheel for jsonpath-rw (setup.py): finished with status 'done'
Created wheel for jsonpath-rw: filename=jsonpath_rw-1.4.0-py3-none-any.whl size=15176 sha256=97085b9f211b9ddb45f0d68bd6c18674ad1b39e854c9488251cec52151fa5a03
Stored in directory: /home/jenkins/.cache/pip/wheels/f1/54/63/9a8da38cefae13755097b36cc852decc25d8ef69c37d58d4eb
Successfully built robotframework-sshlibrary ipaddr netifaces robotframework-httplibrary jsonpath-rw
Installing collected packages: sortedcontainers, ply, netifaces, ipaddr, enum34, websocket-client, WebOb, waitress, urllib3, typing-extensions, soupsieve, sniffio, six, scapy, robotframework-pythonlibcore, robotframework, regex, pysocks, pyparsing, pycparser, netaddr, lxml, jsonpointer, jmespath, isodate, invoke, idna, h11, decorator, click, charset_normalizer, certifi, bcrypt, attrs, wsproto, requests, python-dateutil, pyhocon, pyang, outcome, jsonpath-rw, jsonpatch, elastic-transport, docker-pycreds, cffi, beautifulsoup4, webtest, trio, robotframework-requests, pynacl, pyangbind, elasticsearch, docker-py, cryptography, trio-websocket, robotframework-httplibrary, paramiko, elasticsearch-dsl, selenium, scp, robotframework-sshlibrary, robotframework-seleniumlibrary, robotframework-selenium2library
Successfully installed WebOb-1.8.9 attrs-25.3.0 bcrypt-4.3.0 beautifulsoup4-4.13.4 certifi-2025.8.3 cffi-1.17.1 charset_normalizer-3.4.3 click-8.2.1 cryptography-45.0.6 decorator-5.2.1 docker-py-1.10.6 docker-pycreds-0.4.0 elastic-transport-8.17.1 elasticsearch-8.19.0 elasticsearch-dsl-8.15.4 enum34-1.1.10 h11-0.16.0 idna-3.10 invoke-2.2.0 ipaddr-2.2.0 isodate-0.7.2 jmespath-1.0.1 jsonpatch-1.33 jsonpath-rw-1.4.0 jsonpointer-3.0.0 lxml-6.0.0 netaddr-1.3.0 netifaces-0.11.0 outcome-1.3.0.post0 paramiko-4.0.0 ply-3.11 pyang-2.6.1 pyangbind-0.8.6 pycparser-2.22 pyhocon-0.3.61 pynacl-1.5.0 pyparsing-3.2.3 pysocks-1.7.1 python-dateutil-2.9.0.post0 regex-2025.7.34 requests-2.32.4 robotframework-7.3.2 robotframework-httplibrary-0.4.2 robotframework-pythonlibcore-4.4.1 robotframework-requests-0.9.7 robotframework-selenium2library-3.0.0 robotframework-seleniumlibrary-6.7.1 robotframework-sshlibrary-3.8.0 scapy-2.6.1 scp-0.15.0 selenium-4.35.0 six-1.17.0 sniffio-1.3.1 sortedcontainers-2.4.0 soupsieve-2.7 trio-0.30.0 trio-websocket-0.12.2 typing-extensions-4.14.1 urllib3-2.5.0 waitress-3.0.2 websocket-client-1.8.0 webtest-3.0.6 wsproto-1.2.0
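The DEPRECATION notices printed during the wheel-building phase above recommend the standardized PEP 517 build interface. A hypothetical re-invocation for one of the affected packages (package name and version taken from this log, flags from pip's own message; running it needs network access to the configured index, so it is untested here):

```shell
# Rebuild one of the legacy-setup.py packages through the PEP 517 build
# path, as pip's deprecation message suggests. This is a sketch, not the
# job's actual command.
pip install --use-pep517 --no-cache-dir robotframework-sshlibrary==3.8.0
```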
+ pip freeze
attrs==25.3.0
bcrypt==4.3.0
beautifulsoup4==4.13.4
certifi==2025.8.3
cffi==1.17.1
charset-normalizer==3.4.3
click==8.2.1
cryptography==45.0.6
decorator==5.2.1
distlib==0.4.0
docker-py==1.10.6
docker-pycreds==0.4.0
elastic-transport==8.17.1
elasticsearch==8.19.0
elasticsearch-dsl==8.15.4
enum34==1.1.10
filelock==3.19.1
h11==0.16.0
idna==3.10
invoke==2.2.0
ipaddr==2.2.0
isodate==0.7.2
jmespath==1.0.1
jsonpatch==1.33
jsonpath-rw==1.4.0
jsonpointer==3.0.0
lxml==6.0.0
netaddr==1.3.0
netifaces==0.11.0
outcome==1.3.0.post0
paramiko==4.0.0
platformdirs==4.3.8
ply==3.11
pyang==2.6.1
pyangbind==0.8.6
pycparser==2.22
pyhocon==0.3.61
PyNaCl==1.5.0
pyparsing==3.2.3
PySocks==1.7.1
python-dateutil==2.9.0.post0
regex==2025.7.34
requests==2.32.4
robotframework==7.3.2
robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==4.4.1
robotframework-requests==0.9.7
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==6.7.1
robotframework-sshlibrary==3.8.0
scapy==2.6.1
scp==0.15.0
selenium==4.35.0
six==1.17.0
sniffio==1.3.1
sortedcontainers==2.4.0
soupsieve==2.7
trio==0.30.0
trio-websocket==0.12.2
typing_extensions==4.14.1
urllib3==2.5.0
virtualenv==20.34.0
waitress==3.0.2
WebOb==1.8.9
websocket-client==1.8.0
WebTest==3.0.6
wsproto==1.2.0
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'env.properties'
[EnvInject] - Variables injected successfully.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins1735935764139100724.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: python-heatclient python-openstackclient yq
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.13 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
+ ODL_SYSTEM=()
+ TOOLS_SYSTEM=()
+ OPENSTACK_SYSTEM=()
+ OPENSTACK_CONTROLLERS=()
+ mapfile -t ADDR
++ openstack stack show -f json -c outputs releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358
++ jq -r '.outputs[] | select(.output_key | match("^vm_[0-9]+_ips$")) | .output_value | .[]'
+ for i in "${ADDR[@]}"
++ ssh 10.30.171.178 hostname -s
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ REMHOST=releng-36866-358-0-builder-0
+ case ${REMHOST} in
+ ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}")
+ for i in "${ADDR[@]}"
++ ssh 10.30.170.138 hostname -s
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ REMHOST=releng-36866-358-0-builder-1
+ case ${REMHOST} in
+ ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}")
+ for i in "${ADDR[@]}"
++ ssh 10.30.170.234 hostname -s
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ REMHOST=releng-36866-358-0-builder-2
+ case ${REMHOST} in
+ ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}")
+ for i in "${ADDR[@]}"
++ ssh 10.30.170.78 hostname -s
Warning: Permanently added '10.30.170.78' (ECDSA) to the list of known hosts.
+ REMHOST=releng-36866-358-1-mininet-ovs-217-0
+ case ${REMHOST} in
+ TOOLS_SYSTEM=("${TOOLS_SYSTEM[@]}" "${i}")
+ echo NUM_ODL_SYSTEM=3
+ echo NUM_TOOLS_SYSTEM=1
+ '[' '' == yes ']'
+ NUM_OPENSTACK_SYSTEM=0
+ echo NUM_OPENSTACK_SYSTEM=0
+ '[' 0 -eq 2 ']'
+ echo ODL_SYSTEM_IP=10.30.171.178
++ seq 0 2
+ for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 )))
+ echo ODL_SYSTEM_1_IP=10.30.171.178
+ for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 )))
+ echo ODL_SYSTEM_2_IP=10.30.170.138
+ for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 )))
+ echo ODL_SYSTEM_3_IP=10.30.170.234
+ echo TOOLS_SYSTEM_IP=10.30.170.78
++ seq 0 0
+ for i in $(seq 0 $(( ${#TOOLS_SYSTEM[@]} - 1 )))
+ echo TOOLS_SYSTEM_1_IP=10.30.170.78
+ openstack_index=0
+ NUM_OPENSTACK_CONTROL_NODES=1
+ echo NUM_OPENSTACK_CONTROL_NODES=1
++ seq 0 0
+ for i in $(seq 0 $((NUM_OPENSTACK_CONTROL_NODES - 1)))
+ echo OPENSTACK_CONTROL_NODE_1_IP=
+ NUM_OPENSTACK_COMPUTE_NODES=-1
+ echo NUM_OPENSTACK_COMPUTE_NODES=-1
+ '[' -1 -ge 2 ']'
++ seq 0 -2
+ NUM_OPENSTACK_HAPROXY_NODES=0
+ echo NUM_OPENSTACK_HAPROXY_NODES=0
++ seq 0 -1
+ echo 'Contents of slave_addresses.txt:'
Contents of slave_addresses.txt:
+ cat slave_addresses.txt
NUM_ODL_SYSTEM=3
NUM_TOOLS_SYSTEM=1
NUM_OPENSTACK_SYSTEM=0
ODL_SYSTEM_IP=10.30.171.178
ODL_SYSTEM_1_IP=10.30.171.178
ODL_SYSTEM_2_IP=10.30.170.138
ODL_SYSTEM_3_IP=10.30.170.234
TOOLS_SYSTEM_IP=10.30.170.78
TOOLS_SYSTEM_1_IP=10.30.170.78
NUM_OPENSTACK_CONTROL_NODES=1
OPENSTACK_CONTROL_NODE_1_IP=
NUM_OPENSTACK_COMPUTE_NODES=-1
NUM_OPENSTACK_HAPROXY_NODES=0
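The trace above classifies each stack IP by its short hostname (`*builder*` hosts become ODL nodes, `*mininet*` hosts become tools nodes). A minimal self-contained sketch of that loop — `lookup_hostname` is a local stub standing in for `ssh "${ip}" hostname -s`, and the IPs and hostnames below are assumptions, not values from this build:

```shell
# Stub for the real `ssh "$ip" hostname -s` lookup, so the sketch runs
# without network access. All names here are invented.
lookup_hostname() {
  case "$1" in
    10.0.0.4) echo "releng-1-mininet-ovs-217-0" ;;
    *)        echo "releng-0-builder-${1##*.}" ;;
  esac
}

# Classify each address into ODL_SYSTEM or TOOLS_SYSTEM based on the
# hostname pattern, mirroring the case statement in the trace above.
classify_nodes() {
  ODL_SYSTEM=()
  TOOLS_SYSTEM=()
  local ip remhost
  for ip in "$@"; do
    remhost=$(lookup_hostname "${ip}")
    case "${remhost}" in
      *builder*) ODL_SYSTEM+=("${ip}") ;;
      *mininet*) TOOLS_SYSTEM+=("${ip}") ;;
    esac
  done
  echo "NUM_ODL_SYSTEM=${#ODL_SYSTEM[@]}"
  echo "NUM_TOOLS_SYSTEM=${#TOOLS_SYSTEM[@]}"
}

classify_nodes 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4
```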
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'slave_addresses.txt'
[EnvInject] - Variables injected successfully.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/sh /tmp/jenkins5923661647344191797.sh
Preparing for JRE Version 21
Karaf artifact is karaf
Karaf project is integration
Java home is /usr/lib/jvm/java-21-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'set_variables.env'
[EnvInject] - Variables injected successfully.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins6374452585982954962.sh
2025-08-16 00:55:30 URL:https://raw.githubusercontent.com/opendaylight/integration-distribution/stable/titanium/pom.xml [2619/2619] -> "pom.xml" [1]
Bundle version is 0.22.1-SNAPSHOT
--2025-08-16 00:55:30-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/maven-metadata.xml
Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6
Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1404 (1.4K) [application/xml]
Saving to: ‘maven-metadata.xml’
0K . 100% 57.9M=0s
2025-08-16 00:55:30 (57.9 MB/s) - ‘maven-metadata.xml’ saved [1404/1404]
maven-metadata.xml values (XML tags lost in this console rendering):
  groupId: org.opendaylight.integration
  artifactId: karaf
  version: 0.22.1-SNAPSHOT
  snapshot timestamp: 20250815.175747, buildNumber: 18 (lastUpdated 20250815175747)
  snapshotVersions: pom, tar.gz, zip, cyclonedx/xml, cyclonedx/json -> 0.22.1-20250815.175747-18 (updated 20250815175747)
Nexus timestamp is 0.22.1-20250815.175747-18
Distribution bundle URL is https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
Distribution bundle is karaf-0.22.1-20250815.175747-18.zip
Distribution bundle version is 0.22.1-SNAPSHOT
Distribution folder is karaf-0.22.1-SNAPSHOT
Nexus prefix is https://nexus.opendaylight.org
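The "Nexus timestamp" above is derived from the snapshot `<timestamp>` and `<buildNumber>` fields of the downloaded maven-metadata.xml. A sketch of that derivation, run against an inline sample file (values copied from this build; the `sed` extraction is an assumption — the real job script may parse the file differently):

```shell
# Inline sample of the relevant part of maven-metadata.xml.
cat > /tmp/maven-metadata-sample.xml <<'EOF'
<metadata>
  <groupId>org.opendaylight.integration</groupId>
  <artifactId>karaf</artifactId>
  <version>0.22.1-SNAPSHOT</version>
  <versioning>
    <snapshot>
      <timestamp>20250815.175747</timestamp>
      <buildNumber>18</buildNumber>
    </snapshot>
  </versioning>
</metadata>
EOF

BUNDLE_VERSION=0.22.1-SNAPSHOT
# Pull out the snapshot timestamp and build number.
TIMESTAMP=$(sed -n 's|.*<timestamp>\(.*\)</timestamp>.*|\1|p' /tmp/maven-metadata-sample.xml)
BUILD_NUMBER=$(sed -n 's|.*<buildNumber>\(.*\)</buildNumber>.*|\1|p' /tmp/maven-metadata-sample.xml)
# Snapshot artifacts replace the -SNAPSHOT suffix with timestamp-buildNumber.
NEXUS_TIMESTAMP="${BUNDLE_VERSION%-SNAPSHOT}-${TIMESTAMP}-${BUILD_NUMBER}"
echo "Nexus timestamp is ${NEXUS_TIMESTAMP}"
echo "Distribution bundle is karaf-${NEXUS_TIMESTAMP}.zip"
```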
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'detect_variables.env'
[EnvInject] - Variables injected successfully.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins14006223556051692623.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.13 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible.
lf-activate-venv(): INFO: Installing: python-heatclient python-openstackclient
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.13 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
Copying common-functions.sh to /tmp
Copying common-functions.sh to 10.30.170.78:/tmp
Warning: Permanently added '10.30.170.78' (ECDSA) to the list of known hosts.
Copying common-functions.sh to 10.30.171.178:/tmp
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
Copying common-functions.sh to 10.30.170.138:/tmp
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
Copying common-functions.sh to 10.30.170.234:/tmp
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins3623337932363487909.sh
common-functions.sh is being sourced
common-functions environment:
MAVENCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
ACTUALFEATURES:
FEATURESCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
CUSTOMPROP: /tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
LOGCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
MEMCONF: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
CONTROLLERMEM: 2048m
AKKACONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
MODULESCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
MODULESHARDSCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
SUITES:
#################################################
## Configure Cluster and Start ##
#################################################
ACTUALFEATURES: odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest
SPACE_SEPARATED_FEATURES: odl-infrautils-ready odl-jolokia odl-ovsdb-southbound-impl-rest
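SPACE_SEPARATED_FEATURES above is the comma-separated ACTUALFEATURES list with commas replaced by spaces; a one-line sketch of that conversion (values copied from this log):

```shell
# Turn the comma-separated Karaf feature list into a space-separated one.
ACTUALFEATURES="odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest"
SPACE_SEPARATED_FEATURES=$(echo "${ACTUALFEATURES}" | tr ',' ' ')
echo "${SPACE_SEPARATED_FEATURES}"
```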
Locating script plan to use...
Finished running script plans
Configuring member-1 with IP address 10.30.171.178
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ source /tmp/common-functions.sh karaf-0.22.1-SNAPSHOT titanium
common-functions.sh is being sourced
++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]]
++ echo 'common-functions.sh is being sourced'
++ BUNDLEFOLDER=karaf-0.22.1-SNAPSHOT
++ DISTROSTREAM=titanium
++ export MAVENCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
++ MAVENCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
++ export FEATURESCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
++ FEATURESCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
++ export CUSTOMPROP=/tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
++ CUSTOMPROP=/tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
++ export LOGCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
++ LOGCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
++ export MEMCONF=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
++ MEMCONF=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
++ export CONTROLLERMEM=
++ CONTROLLERMEM=
++ case "${DISTROSTREAM}" in
++ CLUSTER_SYSTEM=pekko
++ export AKKACONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
++ AKKACONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
++ export MODULESCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
++ MODULESCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
++ export MODULESHARDSCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
++ MODULESHARDSCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
++ print_common_env
++ cat
common-functions environment:
MAVENCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
ACTUALFEATURES:
FEATURESCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
CUSTOMPROP: /tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
LOGCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
MEMCONF: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
CONTROLLERMEM:
AKKACONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
MODULESCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
MODULESHARDSCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
SUITES:
++ SSH='ssh -t -t'
++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service '
++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service '
Changing to /tmp
Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
+ echo 'Changing to /tmp'
+ cd /tmp
+ echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip'
+ wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
--2025-08-16 00:55:42-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6
Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 236730165 (226M) [application/zip]
Saving to: ‘karaf-0.22.1-20250815.175747-18.zip’
230400K ........ .... 100% 348M=0.8s
2025-08-16 00:55:42 (271 MB/s) - ‘karaf-0.22.1-20250815.175747-18.zip’ saved [236730165/236730165]
Extracting the new controller...
+ echo 'Extracting the new controller...'
+ unzip -q karaf-0.22.1-20250815.175747-18.zip
Adding external repositories...
+ echo 'Adding external repositories...'
+ sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
#
# If set to true, the following property will not allow any certificate to be used
# when accessing Maven repositories through SSL
#
#org.ops4j.pax.url.mvn.certificateCheck=
#
# Path to the local Maven settings file.
# The repositories defined in this file will be automatically added to the list
# of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property
# below is not set.
# The following locations are checked for the existence of the settings.xml file
# * 1. looks for the specified url
# * 2. if not found looks for ${user.home}/.m2/settings.xml
# * 3. if not found looks for ${maven.home}/conf/settings.xml
# * 4. if not found looks for ${M2_HOME}/conf/settings.xml
#
#org.ops4j.pax.url.mvn.settings=
#
# Path to the local Maven repository which is used to avoid downloading
# artifacts when they already exist locally.
# The value of this property will be extracted from the settings.xml file
# above, or defaulted to:
# System.getProperty( "user.home" ) + "/.m2/repository"
#
org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository}
#
# Default this to false. It's just weird to use undocumented repos
#
org.ops4j.pax.url.mvn.useFallbackRepositories=false
#
# Uncomment if you don't wanna use the proxy settings
# from the Maven conf/settings.xml file
#
# org.ops4j.pax.url.mvn.proxySupport=false
#
# Comma separated list of repositories scanned when resolving an artifact.
# Those repositories will be checked before iterating through the
# below list of repositories and even before the local repository
# A repository url can be appended with zero or more of the following flags:
# @snapshots : the repository contains snapshots
# @noreleases : the repository does not contain any released artifacts
#
# The following property value will add the system folder as a repo.
#
org.ops4j.pax.url.mvn.defaultRepositories=\
file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\
file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\
file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots
# Use the default local repo (e.g.~/.m2/repository) as a "remote" repo
#org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false
#
# Comma separated list of repositories scanned when resolving an artifact.
# The default list includes the following repositories:
# http://repo1.maven.org/maven2@id=central
# http://repository.springsource.com/maven/bundles/release@id=spring.ebr
# http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external
# http://zodiac.springsource.com/maven/bundles/release@id=gemini
# http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases
# https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases
# https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases
# To add repositories to the default ones, prepend '+' to the list of repositories
# to add.
# A repository url can be appended with zero or more of the following flags:
# @snapshots : the repository contains snapshots
# @noreleases : the repository does not contain any released artifacts
# @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended
#
org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases
### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.
Configuring the startup features...
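Note that `sed -ie` is parsed by GNU sed as `-i` with backup suffix `e` (so it leaves `*.cfge` backup files behind); `-i -e` is the likely intent. A minimal sketch of the same repository-injection step, run against a scratch copy instead of the real `etc/org.ops4j.pax.url.mvn.cfg`, with the repository list shortened to two entries for readability:

```shell
# Scratch copy standing in for etc/org.ops4j.pax.url.mvn.cfg
cfg=$(mktemp)
printf 'org.ops4j.pax.url.mvn.repositories=\n' > "$cfg"

# '%' as the sed delimiter avoids escaping the many '/' in the URLs.
repos='https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror'
sed -i -e "s%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=${repos}%" "$cfg"

cat "$cfg"
```

Because the replacement text is pasted directly after the key, any value already present in the file would be pushed to the end of the line rather than overwritten.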
+ [[ True == \T\r\u\e ]]
+ echo 'Configuring the startup features...'
+ sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
+ FEATURE_TEST_STRING=features-test
+ FEATURE_TEST_VERSION=0.22.1-SNAPSHOT
+ KARAF_VERSION=karaf4
+ [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]]
+ sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.1-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
+ [[ ! -z '' ]]
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
#
# Comma separated list of features repositories to register by default
#
featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.1-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/e9bd1b55-a01b-4b39-a866-96debd8f5c8f.xml
#
# Comma separated list of features to install at startup
#
featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest, 7c06812c-bfbd-44b6-9290-770b569ed18e
#
# Resource repositories (OBR) that the features resolver can use
# to resolve requirements/capabilities
#
# The format of the resourceRepositories is
# resourceRepositories=[xml:url|json:url],...
# for Instance:
#
#resourceRepositories=xml:http://host/path/to/index.xml
# or
#resourceRepositories=json:http://host/path/to/index.json
#
#
# Defines if the boot features are started in asynchronous mode (in a dedicated thread)
#
featuresBootAsynchronous=false
#
# Service requirements enforcement
#
# By default, the feature resolver checks the service requirements/capabilities of
# bundles for new features (xml schema >= 1.3.0) in order to automatically install
# the required bundles.
# The following flag can have those values:
# - disable: service requirements are completely ignored
# - default: service requirements are ignored for old features
# - enforce: service requirements are always verified
#
#serviceRequirements=default
#
# Store cfg file for config element in feature
#
#configCfgStore=true
#
# Define if the feature service automatically refresh bundles
#
autoRefresh=true
#
# Configuration of features processing mechanism (overrides, blacklisting, modification of features)
# XML file defines instructions related to features processing
# versions.properties may declare properties to resolve placeholders in XML file
# both files are relative to ${karaf.etc}
#
#featureProcessing=org.apache.karaf.features.xml
#featureProcessingVersions=versions.properties
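The featuresBoot sed above uses GNU BRE alternation to match either spacing variant of the key and prepends the ODL features ahead of whatever the distribution already lists — which is why the resulting featuresBoot line above carries a space before the trailing UUID feature. A sketch on a scratch file (the original feature list `ssh,wrap` is invented for illustration):

```shell
# Scratch copy standing in for etc/org.apache.karaf.features.cfg
cfg=$(mktemp)
echo 'featuresBoot = ssh,wrap' > "$cfg"

# \(A\|B\) matches 'featuresBoot=' or 'featuresBoot ='; the replacement
# re-emits the key followed by the ODL features and a trailing comma, so
# the pre-existing list survives after it.
sed -i -e 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' "$cfg"

cat "$cfg"
```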
+ configure_karaf_log karaf4 ''
+ local -r karaf_version=karaf4
+ local -r controllerdebugmap=
+ local logapi=log4j
+ grep log4j2 /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
log4j2.rootLogger.level = INFO
#log4j2.rootLogger.type = asyncRoot
#log4j2.rootLogger.includeLocation = false
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL
log4j2.logger.spifly.name = org.apache.aries.spifly
log4j2.logger.spifly.level = WARN
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile
# Console appender not used by default (see log4j2.rootLogger.appenderRefs)
log4j2.appender.console.type = Console
log4j2.appender.console.name = Console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.type = RollingRandomAccessFile
log4j2.appender.rolling.name = RollingFile
log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log
log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i
#log4j2.appender.rolling.immediateFlush = false
log4j2.appender.rolling.append = true
log4j2.appender.rolling.layout.type = PatternLayout
log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.policies.type = Policies
log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.rolling.policies.size.size = 64MB
log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy
log4j2.appender.rolling.strategy.max = 7
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AuditRollingFile
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log
log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i
log4j2.appender.audit.append = true
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.layout.pattern = ${log4j2.pattern}
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB
log4j2.appender.audit.strategy.type = DefaultRolloverStrategy
log4j2.appender.audit.strategy.max = 7
log4j2.appender.osgi.type = PaxOsgi
log4j2.appender.osgi.name = PaxOsgi
log4j2.appender.osgi.filter = *
#log4j2.logger.aether.name = shaded.org.eclipse.aether
#log4j2.logger.aether.level = TRACE
#log4j2.logger.http-headers.name = shaded.org.apache.http.headers
#log4j2.logger.http-headers.level = DEBUG
#log4j2.logger.maven.name = org.ops4j.pax.url.mvn
#log4j2.logger.maven.level = TRACE
+ logapi=log4j2
+ echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2'
Configuring the karaf log... karaf_version: karaf4, logapi: log4j2
+ '[' log4j2 == log4j2 ']'
+ sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
+ orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver
+ orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver
+ echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN'
controllerdebugmap:
cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
+ echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN'
+ unset IFS
+ echo 'controllerdebugmap: '
+ '[' -n '' ']'
+ echo 'cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
# Common pattern layout for appenders
log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
# Root logger
log4j2.rootLogger.level = INFO
# uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library
#log4j2.rootLogger.type = asyncRoot
#log4j2.rootLogger.includeLocation = false
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Filters for logs marked by org.opendaylight.odlparent.Markers
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL
# Loggers configuration
# Spifly logger
log4j2.logger.spifly.name = org.apache.aries.spifly
log4j2.logger.spifly.level = WARN
# Security audit logger
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile
# Appenders configuration
# Console appender not used by default (see log4j2.rootLogger.appenderRefs)
log4j2.appender.console.type = Console
log4j2.appender.console.name = Console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = ${log4j2.pattern}
# Rolling file appender
log4j2.appender.rolling.type = RollingRandomAccessFile
log4j2.appender.rolling.name = RollingFile
log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log
log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i
# uncomment to not force a disk flush
#log4j2.appender.rolling.immediateFlush = false
log4j2.appender.rolling.append = true
log4j2.appender.rolling.layout.type = PatternLayout
log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.policies.type = Policies
log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.rolling.policies.size.size = 1GB
log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy
log4j2.appender.rolling.strategy.max = 7
# Audit file appender
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AuditRollingFile
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log
log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i
log4j2.appender.audit.append = true
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.layout.pattern = ${log4j2.pattern}
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB
log4j2.appender.audit.strategy.type = DefaultRolloverStrategy
log4j2.appender.audit.strategy.max = 7
# OSGi appender
log4j2.appender.osgi.type = PaxOsgi
log4j2.appender.osgi.name = PaxOsgi
log4j2.appender.osgi.filter = *
# help with identification of maven-related problems with pax-url-aether
#log4j2.logger.aether.name = shaded.org.eclipse.aether
#log4j2.logger.aether.level = TRACE
#log4j2.logger.http-headers.name = shaded.org.apache.http.headers
#log4j2.logger.http-headers.level = DEBUG
#log4j2.logger.maven.name = org.ops4j.pax.url.mvn
#log4j2.logger.maven.level = TRACE
log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN
log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN
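Two things happen to the logging config above: the rolling-file size cap is raised from 64MB to 1GB, and a per-module override is appended — though the dump ends with `...YangTextSchemaContextResolver.name = WARN`, i.e. the logger's `.name` was set to the severity rather than to the class name, which looks like a slip in the generating script. A sketch of the size bump plus the presumably intended override (the logger id `yangtextresolver` is invented; log4j2 logger ids are arbitrary labels):

```shell
# Scratch copy standing in for etc/org.ops4j.pax.logging.cfg
cfg=$(mktemp)
echo 'log4j2.appender.rolling.policies.size.size = 64MB' > "$cfg"

# Raise the rolling appender's trigger size, as the job does.
sed -i -e 's/size.size = 64MB/size.size = 1GB/' "$cfg"

# Presumed intent: .name carries the Java class, .level the severity.
cat >> "$cfg" <<'EOF'
log4j2.logger.yangtextresolver.name = org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver
log4j2.logger.yangtextresolver.level = WARN
EOF

cat "$cfg"
```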
+ set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 2048m /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64
+ local -r controllermem=2048m
Configure
+ local -r memconf=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo Configure
java home: /usr/lib/jvm/java-21-openjdk-amd64
+ echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64'
max memory: 2048m
memconf: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo ' max memory: 2048m'
+ echo ' memconf: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv'
+ sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=2048m/g' /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo 'cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
#!/bin/sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf
# script: client, instance, shell, start, status, stop, karaf
#
# if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then
# Actions go here...
# fi
#
# general settings which should be applied for all scripts go here; please keep
# in mind that scripts might be executed more than once, e.g. the start
# script runs first and then invokes the karaf script.
#
#
# The following section shows the possible configuration options for the default
# karaf scripts
#
export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation
# export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration
# export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options
# export EXTRA_JAVA_OPTS # Additional JVM options
# export KARAF_HOME # Karaf home folder
# export KARAF_DATA # Karaf data folder
# export KARAF_BASE # Karaf base folder
# export KARAF_ETC # Karaf etc folder
# export KARAF_LOG # Karaf log folder
# export KARAF_SYSTEM_OPTS # First citizen Karaf options
# export KARAF_OPTS # Additional available Karaf options
# export KARAF_DEBUG # Enable debug mode
# export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start
# export KARAF_NOROOT # Prevent execution as root if set to true
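The setenv edit above uncomments `# export JAVA_HOME` and pins a default while still honouring a pre-set value via `${JAVA_HOME:-...}`. A sketch on a scratch file (the placeholder comment text is invented):

```shell
# Scratch copy standing in for bin/setenv
setenv=$(mktemp)
echo '# export JAVA_HOME # (Optional) location of Java installation' > "$setenv"

# Anchor on the commented-out export at line start; the trailing comment
# on the same line is left in place, as in the real dump above.
sed -i -e 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%' "$setenv"

cat "$setenv"
```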
Set Java version
+ echo 'Set Java version'
+ sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
+ sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
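Both `alternatives` calls above fail because the job runs without a TTY and without passwordless sudo, exactly as the error says. A guard using `sudo -n` (non-interactive mode) would detect that up front instead of emitting the password error twice; this sketch only probes availability rather than actually switching the system default:

```shell
# Probe for passwordless sudo without ever prompting. If it is available
# the real step would be:
#   sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
if sudo -n true 2>/dev/null; then
    java_switch=available
else
    # No TTY and no cached credentials: fall back to JAVA_HOME alone.
    java_switch=skipped
fi
echo "alternatives step: $java_switch"
```

Skipping here is harmless for the job, since JAVA_HOME is exported explicitly a few lines later.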
JDK default version ...
+ echo 'JDK default version ...'
+ java -version
openjdk version "21.0.5" 2024-10-15
OpenJDK Runtime Environment (build 21.0.5+11-Ubuntu-1ubuntu122.04)
OpenJDK 64-Bit Server VM (build 21.0.5+11-Ubuntu-1ubuntu122.04, mixed mode, sharing)
Set JAVA_HOME
+ echo 'Set JAVA_HOME'
+ export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java
Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java
+ JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java
+ echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java'
Listing all open ports on controller system...
+ echo 'Listing all open ports on controller system...'
+ netstat -pnatu
/tmp/configuration-script.sh: line 40: netstat: command not found
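`netstat` ships in the legacy net-tools package and is absent from minimal CentOS 8 images; `ss` from iproute2 accepts the same flag letters used here. A fallback sketch:

```shell
# Prefer netstat if present, fall back to ss; same flags in both:
# -p processes, -n numeric, -a all sockets, -t TCP, -u UDP.
list_ports() {
    if command -v netstat >/dev/null 2>&1; then
        netstat -pnatu
    elif command -v ss >/dev/null 2>&1; then
        ss -pnatu
    else
        echo 'no socket-listing tool available'
    fi
}
list_ports | head -n 5
```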
Configuring cluster
+ '[' -f /tmp/custom_shard_config.txt ']'
+ echo 'Configuring cluster'
+ /tmp/karaf-0.22.1-SNAPSHOT/bin/configure_cluster.sh 1 10.30.171.178 10.30.170.138 10.30.170.234
################################################
## Configure Cluster ##
################################################
ERROR: Cluster configurations files not found. Please configure clustering feature.
Dump pekko.conf
+ echo 'Dump pekko.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf: No such file or directory
Dump modules.conf
+ echo 'Dump modules.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf: No such file or directory
Dump module-shards.conf
+ echo 'Dump module-shards.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf: No such file or directory
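The three `cat` calls above fail because `configure_cluster.sh` never generated its output files, as its ERROR line reports. A small guard (a sketch, not the job's actual helper) would keep the log free of `No such file or directory` noise:

```shell
# Dump a config file only if it exists; otherwise note the absence.
dump() {
    if [ -f "$1" ]; then
        echo "Dump $(basename "$1")"
        cat "$1"
    else
        echo "skipping $1 (not generated)"
    fi
}
dump /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
```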
Configuring member-2 with IP address 10.30.170.138
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ source /tmp/common-functions.sh karaf-0.22.1-SNAPSHOT titanium
common-functions.sh is being sourced
++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]]
++ echo 'common-functions.sh is being sourced'
++ BUNDLEFOLDER=karaf-0.22.1-SNAPSHOT
++ DISTROSTREAM=titanium
++ export MAVENCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
++ MAVENCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
++ export FEATURESCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
++ FEATURESCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
++ export CUSTOMPROP=/tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
++ CUSTOMPROP=/tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
++ export LOGCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
++ LOGCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
++ export MEMCONF=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
++ MEMCONF=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
++ export CONTROLLERMEM=
++ CONTROLLERMEM=
++ case "${DISTROSTREAM}" in
++ CLUSTER_SYSTEM=pekko
++ export AKKACONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
++ AKKACONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
++ export MODULESCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
++ MODULESCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
++ export MODULESHARDSCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
++ MODULESHARDSCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
++ print_common_env
++ cat
common-functions environment:
MAVENCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
ACTUALFEATURES:
FEATURESCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
CUSTOMPROP: /tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
LOGCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
MEMCONF: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
CONTROLLERMEM:
AKKACONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
MODULESCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
MODULESHARDSCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
SUITES:
++ SSH='ssh -t -t'
++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service '
++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service '
Changing to /tmp
Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
+ echo 'Changing to /tmp'
+ cd /tmp
+ echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip'
+ wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
--2025-08-16 00:55:45-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6
Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 236730165 (226M) [application/zip]
Saving to: ‘karaf-0.22.1-20250815.175747-18.zip’
2025-08-16 00:55:46 (276 MB/s) - ‘karaf-0.22.1-20250815.175747-18.zip’ saved [236730165/236730165]
Extracting the new controller...
+ echo 'Extracting the new controller...'
+ unzip -q karaf-0.22.1-20250815.175747-18.zip
Adding external repositories...
+ echo 'Adding external repositories...'
+ sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
#
# If set to true, the following property will not allow any certificate to be used
# when accessing Maven repositories through SSL
#
#org.ops4j.pax.url.mvn.certificateCheck=
#
# Path to the local Maven settings file.
# The repositories defined in this file will be automatically added to the list
# of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property
# below is not set.
# The following locations are checked for the existence of the settings.xml file
# * 1. looks for the specified url
# * 2. if not found looks for ${user.home}/.m2/settings.xml
# * 3. if not found looks for ${maven.home}/conf/settings.xml
# * 4. if not found looks for ${M2_HOME}/conf/settings.xml
#
#org.ops4j.pax.url.mvn.settings=
#
# Path to the local Maven repository which is used to avoid downloading
# artifacts when they already exist locally.
# The value of this property will be extracted from the settings.xml file
# above, or defaulted to:
# System.getProperty( "user.home" ) + "/.m2/repository"
#
org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository}
#
# Default this to false. It's just weird to use undocumented repos
#
org.ops4j.pax.url.mvn.useFallbackRepositories=false
#
# Uncomment if you don't wanna use the proxy settings
# from the Maven conf/settings.xml file
#
# org.ops4j.pax.url.mvn.proxySupport=false
#
# Comma separated list of repositories scanned when resolving an artifact.
# Those repositories will be checked before iterating through the
# below list of repositories and even before the local repository
# A repository url can be appended with zero or more of the following flags:
# @snapshots : the repository contains snaphots
# @noreleases : the repository does not contain any released artifacts
#
# The following property value will add the system folder as a repo.
#
org.ops4j.pax.url.mvn.defaultRepositories=\
file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\
file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\
file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots
# Use the default local repo (e.g.~/.m2/repository) as a "remote" repo
#org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false
#
# Comma separated list of repositories scanned when resolving an artifact.
# The default list includes the following repositories:
# http://repo1.maven.org/maven2@id=central
# http://repository.springsource.com/maven/bundles/release@id=spring.ebr
# http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external
# http://zodiac.springsource.com/maven/bundles/release@id=gemini
# http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases
# https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases
# https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases
# To add repositories to the default ones, prepend '+' to the list of repositories
# to add.
# A repository url can be appended with zero or more of the following flags:
# @snapshots : the repository contains snapshots
# @noreleases : the repository does not contain any released artifacts
# @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended
#
org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases
### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.
Configuring the startup features...
+ [[ True == \T\r\u\e ]]
+ echo 'Configuring the startup features...'
+ sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
+ FEATURE_TEST_STRING=features-test
+ FEATURE_TEST_VERSION=0.22.1-SNAPSHOT
+ KARAF_VERSION=karaf4
+ [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]]
+ sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.1-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
+ [[ ! -z '' ]]
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
#
# Comma separated list of features repositories to register by default
#
featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.1-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/e9bd1b55-a01b-4b39-a866-96debd8f5c8f.xml
#
# Comma separated list of features to install at startup
#
featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest, 7c06812c-bfbd-44b6-9290-770b569ed18e
#
# Resource repositories (OBR) that the features resolver can use
# to resolve requirements/capabilities
#
# The format of the resourceRepositories is
# resourceRepositories=[xml:url|json:url],...
# for Instance:
#
#resourceRepositories=xml:http://host/path/to/index.xml
# or
#resourceRepositories=json:http://host/path/to/index.json
#
#
# Defines if the boot features are started in asynchronous mode (in a dedicated thread)
#
featuresBootAsynchronous=false
#
# Service requirements enforcement
#
# By default, the feature resolver checks the service requirements/capabilities of
# bundles for new features (xml schema >= 1.3.0) in order to automatically installs
# the required bundles.
# The following flag can have those values:
# - disable: service requirements are completely ignored
# - default: service requirements are ignored for old features
# - enforce: service requirements are always verified
#
#serviceRequirements=default
#
# Store cfg file for config element in feature
#
#configCfgStore=true
#
# Define if the feature service automatically refresh bundles
#
autoRefresh=true
#
# Configuration of features processing mechanism (overrides, blacklisting, modification of features)
# XML file defines instructions related to features processing
# versions.properties may declare properties to resolve placeholders in XML file
# both files are relative to ${karaf.etc}
#
#featureProcessing=org.apache.karaf.features.xml
#featureProcessingVersions=versions.properties
+ configure_karaf_log karaf4 ''
+ local -r karaf_version=karaf4
+ local -r controllerdebugmap=
+ local logapi=log4j
+ grep log4j2 /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
log4j2.rootLogger.level = INFO
#log4j2.rootLogger.type = asyncRoot
#log4j2.rootLogger.includeLocation = false
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL
log4j2.logger.spifly.name = org.apache.aries.spifly
log4j2.logger.spifly.level = WARN
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile
# Console appender not used by default (see log4j2.rootLogger.appenderRefs)
log4j2.appender.console.type = Console
log4j2.appender.console.name = Console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.type = RollingRandomAccessFile
log4j2.appender.rolling.name = RollingFile
log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log
log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i
#log4j2.appender.rolling.immediateFlush = false
log4j2.appender.rolling.append = true
log4j2.appender.rolling.layout.type = PatternLayout
log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.policies.type = Policies
log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.rolling.policies.size.size = 64MB
log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy
log4j2.appender.rolling.strategy.max = 7
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AuditRollingFile
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log
log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i
log4j2.appender.audit.append = true
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.layout.pattern = ${log4j2.pattern}
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB
log4j2.appender.audit.strategy.type = DefaultRolloverStrategy
log4j2.appender.audit.strategy.max = 7
log4j2.appender.osgi.type = PaxOsgi
log4j2.appender.osgi.name = PaxOsgi
log4j2.appender.osgi.filter = *
#log4j2.logger.aether.name = shaded.org.eclipse.aether
#log4j2.logger.aether.level = TRACE
#log4j2.logger.http-headers.name = shaded.org.apache.http.headers
#log4j2.logger.http-headers.level = DEBUG
#log4j2.logger.maven.name = org.ops4j.pax.url.mvn
#log4j2.logger.maven.level = TRACE
Configuring the karaf log... karaf_version: karaf4, logapi: log4j2
+ logapi=log4j2
+ echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2'
+ '[' log4j2 == log4j2 ']'
+ sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
+ orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver
+ orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver
+ echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN'
controllerdebugmap:
cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
+ echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN'
+ unset IFS
+ echo 'controllerdebugmap: '
+ '[' -n '' ']'
+ echo 'cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
# Common pattern layout for appenders
log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
# Root logger
log4j2.rootLogger.level = INFO
# uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library
#log4j2.rootLogger.type = asyncRoot
#log4j2.rootLogger.includeLocation = false
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Filters for logs marked by org.opendaylight.odlparent.Markers
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL
# Loggers configuration
# Spifly logger
log4j2.logger.spifly.name = org.apache.aries.spifly
log4j2.logger.spifly.level = WARN
# Security audit logger
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile
# Appenders configuration
# Console appender not used by default (see log4j2.rootLogger.appenderRefs)
log4j2.appender.console.type = Console
log4j2.appender.console.name = Console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = ${log4j2.pattern}
# Rolling file appender
log4j2.appender.rolling.type = RollingRandomAccessFile
log4j2.appender.rolling.name = RollingFile
log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log
log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i
# uncomment to not force a disk flush
#log4j2.appender.rolling.immediateFlush = false
log4j2.appender.rolling.append = true
log4j2.appender.rolling.layout.type = PatternLayout
log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.policies.type = Policies
log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.rolling.policies.size.size = 1GB
log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy
log4j2.appender.rolling.strategy.max = 7
# Audit file appender
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AuditRollingFile
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log
log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i
log4j2.appender.audit.append = true
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.layout.pattern = ${log4j2.pattern}
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB
log4j2.appender.audit.strategy.type = DefaultRolloverStrategy
log4j2.appender.audit.strategy.max = 7
# OSGi appender
log4j2.appender.osgi.type = PaxOsgi
log4j2.appender.osgi.name = PaxOsgi
log4j2.appender.osgi.filter = *
# help with identification of maven-related problems with pax-url-aether
#log4j2.logger.aether.name = shaded.org.eclipse.aether
#log4j2.logger.aether.level = TRACE
#log4j2.logger.http-headers.name = shaded.org.apache.http.headers
#log4j2.logger.http-headers.level = DEBUG
#log4j2.logger.maven.name = org.ops4j.pax.url.mvn
#log4j2.logger.maven.level = TRACE
log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN
log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN
Configure
java home: /usr/lib/jvm/java-21-openjdk-amd64
max memory: 2048m
memconf: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 2048m /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64
+ local -r controllermem=2048m
+ local -r memconf=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo Configure
+ echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64'
+ echo ' max memory: 2048m'
+ echo ' memconf: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv'
+ sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=2048m/g' /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo 'cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
#!/bin/sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf
# script: client, instance, shell, start, status, stop, karaf
#
# if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then
# Actions go here...
# fi
#
# general settings which should be applied for all scripts go here; please keep
# in mind that it is possible that scripts might be executed more than once, e.g.
# in example of the start script where the start script is executed first and the
# karaf script afterwards.
#
#
# The following section shows the possible configuration options for the default
# karaf scripts
#
export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation
# export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration
# export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options
# export EXTRA_JAVA_OPTS # Additional JVM options
# export KARAF_HOME # Karaf home folder
# export KARAF_DATA # Karaf data folder
# export KARAF_BASE # Karaf base folder
# export KARAF_ETC # Karaf etc folder
# export KARAF_LOG # Karaf log folder
# export KARAF_SYSTEM_OPTS # First citizen Karaf options
# export KARAF_OPTS # Additional available Karaf options
# export KARAF_DEBUG # Enable debug mode
# export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start
# export KARAF_NOROOT # Prevent execution as root if set to true
Set Java version
+ echo 'Set Java version'
+ sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
+ sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
JDK default version ...
+ echo 'JDK default version ...'
+ java -version
openjdk version "21.0.5" 2024-10-15
OpenJDK Runtime Environment (build 21.0.5+11-Ubuntu-1ubuntu122.04)
OpenJDK 64-Bit Server VM (build 21.0.5+11-Ubuntu-1ubuntu122.04, mixed mode, sharing)
Set JAVA_HOME
+ echo 'Set JAVA_HOME'
+ export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java
Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java
+ JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java
+ echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java'
Listing all open ports on controller system...
+ echo 'Listing all open ports on controller system...'
+ netstat -pnatu
/tmp/configuration-script.sh: line 40: netstat: command not found
Configuring cluster
+ '[' -f /tmp/custom_shard_config.txt ']'
+ echo 'Configuring cluster'
+ /tmp/karaf-0.22.1-SNAPSHOT/bin/configure_cluster.sh 2 10.30.171.178 10.30.170.138 10.30.170.234
################################################
## Configure Cluster ##
################################################
ERROR: Cluster configurations files not found. Please configure clustering feature.
Dump pekko.conf
+ echo 'Dump pekko.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf: No such file or directory
Dump modules.conf
+ echo 'Dump modules.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf: No such file or directory
Dump module-shards.conf
+ echo 'Dump module-shards.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf: No such file or directory
Configuring member-3 with IP address 10.30.170.234
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ source /tmp/common-functions.sh karaf-0.22.1-SNAPSHOT titanium
++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]]
common-functions.sh is being sourced
++ echo 'common-functions.sh is being sourced'
++ BUNDLEFOLDER=karaf-0.22.1-SNAPSHOT
++ DISTROSTREAM=titanium
++ export MAVENCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
++ MAVENCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
++ export FEATURESCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
++ FEATURESCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
++ export CUSTOMPROP=/tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
++ CUSTOMPROP=/tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
++ export LOGCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
++ LOGCONF=/tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
++ export MEMCONF=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
++ MEMCONF=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
++ export CONTROLLERMEM=
++ CONTROLLERMEM=
++ case "${DISTROSTREAM}" in
++ CLUSTER_SYSTEM=pekko
++ export AKKACONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
++ AKKACONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
++ export MODULESCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
++ MODULESCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
++ export MODULESHARDSCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
++ MODULESHARDSCONF=/tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
++ print_common_env
++ cat
common-functions environment:
MAVENCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
ACTUALFEATURES:
FEATURESCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
CUSTOMPROP: /tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
LOGCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
MEMCONF: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
CONTROLLERMEM:
AKKACONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
MODULESCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
MODULESHARDSCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
SUITES:
++ SSH='ssh -t -t'
++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service '
++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service '
Changing to /tmp
+ echo 'Changing to /tmp'
+ cd /tmp
Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
+ echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip'
+ wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
--2025-08-16 00:55:49-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip
Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6
Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 236730165 (226M) [application/zip]
Saving to: ‘karaf-0.22.1-20250815.175747-18.zip’
172032K ........ ........ ........ ........ ........ ........ 75% 372M 0s
175104K ........ ........ ........ ........ ........ ........ 77% 309M 0s
178176K ........ ........ ........ ........ ........ ........ 78% 190M 0s
181248K ........ ........ ........ ........ ........ ........ 79% 301M 0s
184320K ........ ........ ........ ........ ........ ........ 81% 354M 0s
187392K ........ ........ ........ ........ ........ ........ 82% 349M 0s
190464K ........ ........ ........ ........ ........ ........ 83% 346M 0s
193536K ........ ........ ........ ........ ........ ........ 85% 349M 0s
196608K ........ ........ ........ ........ ........ ........ 86% 336M 0s
199680K ........ ........ ........ ........ ........ ........ 87% 344M 0s
202752K ........ ........ ........ ........ ........ ........ 89% 343M 0s
205824K ........ ........ ........ ........ ........ ........ 90% 352M 0s
208896K ........ ........ ........ ........ ........ ........ 91% 339M 0s
211968K ........ ........ ........ ........ ........ ........ 93% 343M 0s
215040K ........ ........ ........ ........ ........ ........ 94% 336M 0s
218112K ........ ........ ........ ........ ........ ........ 95% 340M 0s
221184K ........ ........ ........ ........ ........ ........ 97% 339M 0s
224256K ........ ........ ........ ........ ........ ........ 98% 323M 0s
227328K ........ ........ ........ ........ ........ ........ 99% 339M 0s
230400K ........ .... 100% 324M=0.8s
2025-08-16 00:55:50 (269 MB/s) - ‘karaf-0.22.1-20250815.175747-18.zip’ saved [236730165/236730165]
Extracting the new controller...
+ echo 'Extracting the new controller...'
+ unzip -q karaf-0.22.1-20250815.175747-18.zip
Adding external repositories...
+ echo 'Adding external repositories...'
+ sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
#
# If set to true, the following property will not allow any certificate to be used
# when accessing Maven repositories through SSL
#
#org.ops4j.pax.url.mvn.certificateCheck=
#
# Path to the local Maven settings file.
# The repositories defined in this file will be automatically added to the list
# of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property
# below is not set.
# The following locations are checked for the existence of the settings.xml file
# * 1. looks for the specified url
# * 2. if not found looks for ${user.home}/.m2/settings.xml
# * 3. if not found looks for ${maven.home}/conf/settings.xml
# * 4. if not found looks for ${M2_HOME}/conf/settings.xml
#
#org.ops4j.pax.url.mvn.settings=
#
# Path to the local Maven repository which is used to avoid downloading
# artifacts when they already exist locally.
# The value of this property will be extracted from the settings.xml file
# above, or defaulted to:
# System.getProperty( "user.home" ) + "/.m2/repository"
#
org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository}
#
# Default this to false. It's just weird to use undocumented repos
#
org.ops4j.pax.url.mvn.useFallbackRepositories=false
#
# Uncomment if you don't wanna use the proxy settings
# from the Maven conf/settings.xml file
#
# org.ops4j.pax.url.mvn.proxySupport=false
#
# Comma separated list of repositories scanned when resolving an artifact.
# Those repositories will be checked before iterating through the
# below list of repositories and even before the local repository
# A repository url can be appended with zero or more of the following flags:
# @snapshots : the repository contains snaphots
# @noreleases : the repository does not contain any released artifacts
#
# The following property value will add the system folder as a repo.
#
org.ops4j.pax.url.mvn.defaultRepositories=\
file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\
file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\
file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots
# Use the default local repo (e.g.~/.m2/repository) as a "remote" repo
#org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false
#
# Comma separated list of repositories scanned when resolving an artifact.
# The default list includes the following repositories:
# http://repo1.maven.org/maven2@id=central
# http://repository.springsource.com/maven/bundles/release@id=spring.ebr
# http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external
# http://zodiac.springsource.com/maven/bundles/release@id=gemini
# http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases
# https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases
# https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases
# To add repositories to the default ones, prepend '+' to the list of repositories
# to add.
# A repository url can be appended with zero or more of the following flags:
# @snapshots : the repository contains snapshots
# @noreleases : the repository does not contain any released artifacts
# @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended
#
org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases
### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.
Configuring the startup features...
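The `sed` invocation traced above can be exercised in isolation. A minimal sketch on a throwaway file, assuming just one snapshot repo rather than the job's full list:

```shell
# Reproduce the repository rewrite on a scratch copy of the .cfg.
# Using '%' as the sed delimiter avoids escaping the '/' characters in the URLs.
cfg=$(mktemp)
echo 'org.ops4j.pax.url.mvn.repositories=' > "$cfg"
repos='https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots'
sed -i "s%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=${repos}%g" "$cfg"
cat "$cfg"   # the key now carries the repository list
rm -f "$cfg"
```

Because the stock file ships the key with an empty value, substituting on the key name effectively appends the value in place.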
+ [[ True == \T\r\u\e ]]
+ echo 'Configuring the startup features...'
+ sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
+ FEATURE_TEST_STRING=features-test
+ FEATURE_TEST_VERSION=0.22.1-SNAPSHOT
+ KARAF_VERSION=karaf4
+ [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]]
+ sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.1-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
+ [[ ! -z '' ]]
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
#
# Comma separated list of features repositories to register by default
#
featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.1-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/e9bd1b55-a01b-4b39-a866-96debd8f5c8f.xml
#
# Comma separated list of features to install at startup
#
featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest, 7c06812c-bfbd-44b6-9290-770b569ed18e
#
# Resource repositories (OBR) that the features resolver can use
# to resolve requirements/capabilities
#
# The format of the resourceRepositories is
# resourceRepositories=[xml:url|json:url],...
# for Instance:
#
#resourceRepositories=xml:http://host/path/to/index.xml
# or
#resourceRepositories=json:http://host/path/to/index.json
#
#
# Defines if the boot features are started in asynchronous mode (in a dedicated thread)
#
featuresBootAsynchronous=false
#
# Service requirements enforcement
#
# By default, the feature resolver checks the service requirements/capabilities of
# bundles for new features (xml schema >= 1.3.0) in order to automatically installs
# the required bundles.
# The following flag can have those values:
# - disable: service requirements are completely ignored
# - default: service requirements are ignored for old features
# - enforce: service requirements are always verified
#
#serviceRequirements=default
#
# Store cfg file for config element in feature
#
#configCfgStore=true
#
# Define if the feature service automatically refresh bundles
#
autoRefresh=true
#
# Configuration of features processing mechanism (overrides, blacklisting, modification of features)
# XML file defines instructions related to features processing
# versions.properties may declare properties to resolve placeholders in XML file
# both files are relative to ${karaf.etc}
#
#featureProcessing=org.apache.karaf.features.xml
#featureProcessingVersions=versions.properties
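The `featuresBoot` edit traced earlier uses an alternation so it matches both spellings (`featuresBoot=` and `featuresBoot =`) that Karaf releases have shipped. A sketch on a scratch file (path and feature list are illustrative):

```shell
# Prepend boot features to whichever featuresBoot spelling the file uses.
# '\|' alternation is a GNU sed extension to basic regular expressions.
cfg=$(mktemp)
echo 'featuresBoot = ' > "$cfg"
sed -i 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' "$cfg"
cat "$cfg"   # features are now prepended, trailing comma left for any existing list
rm -f "$cfg"
```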
+ configure_karaf_log karaf4 ''
+ local -r karaf_version=karaf4
+ local -r controllerdebugmap=
+ local logapi=log4j
+ grep log4j2 /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
log4j2.rootLogger.level = INFO
#log4j2.rootLogger.type = asyncRoot
#log4j2.rootLogger.includeLocation = false
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL
log4j2.logger.spifly.name = org.apache.aries.spifly
log4j2.logger.spifly.level = WARN
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile
# Console appender not used by default (see log4j2.rootLogger.appenderRefs)
log4j2.appender.console.type = Console
log4j2.appender.console.name = Console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.type = RollingRandomAccessFile
log4j2.appender.rolling.name = RollingFile
log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log
log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i
#log4j2.appender.rolling.immediateFlush = false
log4j2.appender.rolling.append = true
log4j2.appender.rolling.layout.type = PatternLayout
log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.policies.type = Policies
log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.rolling.policies.size.size = 64MB
log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy
log4j2.appender.rolling.strategy.max = 7
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AuditRollingFile
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log
log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i
log4j2.appender.audit.append = true
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.layout.pattern = ${log4j2.pattern}
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB
log4j2.appender.audit.strategy.type = DefaultRolloverStrategy
log4j2.appender.audit.strategy.max = 7
log4j2.appender.osgi.type = PaxOsgi
log4j2.appender.osgi.name = PaxOsgi
log4j2.appender.osgi.filter = *
#log4j2.logger.aether.name = shaded.org.eclipse.aether
#log4j2.logger.aether.level = TRACE
#log4j2.logger.http-headers.name = shaded.org.apache.http.headers
#log4j2.logger.http-headers.level = DEBUG
#log4j2.logger.maven.name = org.ops4j.pax.url.mvn
#log4j2.logger.maven.level = TRACE
Configuring the karaf log... karaf_version: karaf4, logapi: log4j2
+ logapi=log4j2
+ echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2'
+ '[' log4j2 == log4j2 ']'
+ sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
+ orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver
+ orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver
controllerdebugmap:
cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
+ echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN'
+ echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN'
+ unset IFS
+ echo 'controllerdebugmap: '
+ '[' -n '' ']'
+ echo 'cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
# Common pattern layout for appenders
log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
# Root logger
log4j2.rootLogger.level = INFO
# uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library
#log4j2.rootLogger.type = asyncRoot
#log4j2.rootLogger.includeLocation = false
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Filters for logs marked by org.opendaylight.odlparent.Markers
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY
log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL
# Loggers configuration
# Spifly logger
log4j2.logger.spifly.name = org.apache.aries.spifly
log4j2.logger.spifly.level = WARN
# Security audit logger
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile
# Appenders configuration
# Console appender not used by default (see log4j2.rootLogger.appenderRefs)
log4j2.appender.console.type = Console
log4j2.appender.console.name = Console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = ${log4j2.pattern}
# Rolling file appender
log4j2.appender.rolling.type = RollingRandomAccessFile
log4j2.appender.rolling.name = RollingFile
log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log
log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i
# uncomment to not force a disk flush
#log4j2.appender.rolling.immediateFlush = false
log4j2.appender.rolling.append = true
log4j2.appender.rolling.layout.type = PatternLayout
log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}
log4j2.appender.rolling.policies.type = Policies
log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.rolling.policies.size.size = 1GB
log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy
log4j2.appender.rolling.strategy.max = 7
# Audit file appender
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AuditRollingFile
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log
log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i
log4j2.appender.audit.append = true
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.layout.pattern = ${log4j2.pattern}
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB
log4j2.appender.audit.strategy.type = DefaultRolloverStrategy
log4j2.appender.audit.strategy.max = 7
# OSGi appender
log4j2.appender.osgi.type = PaxOsgi
log4j2.appender.osgi.name = PaxOsgi
log4j2.appender.osgi.filter = *
# help with identification of maven-related problems with pax-url-aether
#log4j2.logger.aether.name = shaded.org.eclipse.aether
#log4j2.logger.aether.level = TRACE
#log4j2.logger.http-headers.name = shaded.org.apache.http.headers
#log4j2.logger.http-headers.level = DEBUG
#log4j2.logger.maven.name = org.ops4j.pax.url.mvn
#log4j2.logger.maven.level = TRACE
log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN
log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN
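The `configure_karaf_log` step above grows the rolling-file trigger from 64MB to 1GB, presumably so a long CSIT run fits in one log file. The same edit on a scratch file:

```shell
# Bump the size-based rollover threshold; the key name is taken from the
# pax-logging config dumped above, the file path here is a stand-in.
cfg=$(mktemp)
echo 'log4j2.appender.rolling.policies.size.size = 64MB' > "$cfg"
sed -i 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' "$cfg"
grep 'size.size' "$cfg"   # shows the 1GB value
rm -f "$cfg"
```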
Configure
java home: /usr/lib/jvm/java-21-openjdk-amd64
max memory: 2048m
+ set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 2048m /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64
+ local -r controllermem=2048m
+ local -r memconf=/tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo Configure
+ echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64'
+ echo ' max memory: 2048m'
memconf: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo ' memconf: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv'
+ sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=2048m/g' /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
+ echo 'cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
#!/bin/sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf
# script: client, instance, shell, start, status, stop, karaf
#
# if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then
# Actions go here...
# fi
#
# general settings which should be applied for all scripts go here; please keep
# in mind that it is possible that scripts might be executed more than once, e.g.
# in example of the start script where the start script is executed first and the
# karaf script afterwards.
#
#
# The following section shows the possible configuration options for the default
# karaf scripts
#
export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation
# export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration
# export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options
# export EXTRA_JAVA_OPTS # Additional JVM options
# export KARAF_HOME # Karaf home folder
# export KARAF_DATA # Karaf data folder
# export KARAF_BASE # Karaf base folder
# export KARAF_ETC # Karaf etc folder
# export KARAF_LOG # Karaf log folder
# export KARAF_SYSTEM_OPTS # First citizen Karaf options
# export KARAF_OPTS # Additional available Karaf options
# export KARAF_DEBUG # Enable debug mode
# export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start
# export KARAF_NOROOT # Prevent execution as root if set to true
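The `set_java_vars` sed above uncomments `JAVA_HOME` in `bin/setenv`. Because the sed script is single-quoted, `${JAVA_HOME:-...}` is written into the file literally and only expands when setenv is later sourced by the Karaf scripts. A scratch-file sketch:

```shell
# Uncomment JAVA_HOME with a shell-level default; single quotes keep the
# ${JAVA_HOME:-...} parameter expansion out of sed's hands.
cfg=$(mktemp)
echo '# export JAVA_HOME' > "$cfg"
sed -i 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' "$cfg"
cat "$cfg"
rm -f "$cfg"
```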
Set Java version
+ echo 'Set Java version'
+ sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
+ sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
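The two `alternatives` calls above fail because sudo needs a password and the job has no TTY to prompt on. `sudo -n` fails fast instead of prompting, so such calls can be guarded; `run_if_passwordless_sudo` is a hypothetical helper, not part of the job scripts:

```shell
# Run a command under sudo only when passwordless sudo is actually available.
run_if_passwordless_sudo() {
  # 'sudo -n true' exits non-zero immediately if a password would be required
  if sudo -n true 2>/dev/null; then
    sudo "$@"
  else
    echo "skipping (no passwordless sudo): $*" >&2
  fi
}

run_if_passwordless_sudo true   # harmless demonstration call
```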
JDK default version ...
+ echo 'JDK default version ...'
+ java -version
openjdk version "21.0.5" 2024-10-15
OpenJDK Runtime Environment (build 21.0.5+11-Ubuntu-1ubuntu122.04)
OpenJDK 64-Bit Server VM (build 21.0.5+11-Ubuntu-1ubuntu122.04, mixed mode, sharing)
Set JAVA_HOME
+ echo 'Set JAVA_HOME'
+ export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java
+ JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java
Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java
Listing all open ports on controller system...
+ echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java'
+ echo 'Listing all open ports on controller system...'
+ netstat -pnatu
/tmp/configuration-script.sh: line 40: netstat: command not found
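`netstat` is absent on these minimal images; `ss` from iproute2 reports the same socket table with the same flag letters. A fallback wrapper (hypothetical, for illustration):

```shell
# List all TCP/UDP sockets numerically, preferring netstat when present.
list_ports() {
  if command -v netstat >/dev/null 2>&1; then
    netstat -pnatu
  else
    ss -pnatu   # -p needs privileges to show other users' processes, as with netstat
  fi
}
```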
Configuring cluster
+ '[' -f /tmp/custom_shard_config.txt ']'
+ echo 'Configuring cluster'
+ /tmp/karaf-0.22.1-SNAPSHOT/bin/configure_cluster.sh 3 10.30.171.178 10.30.170.138 10.30.170.234
################################################
## Configure Cluster ##
################################################
ERROR: Cluster configurations files not found. Please configure clustering feature.
Dump pekko.conf
+ echo 'Dump pekko.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf: No such file or directory
Dump modules.conf
+ echo 'Dump modules.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf: No such file or directory
Dump module-shards.conf
+ echo 'Dump module-shards.conf'
+ cat /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
cat: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf: No such file or directory
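The three failed `cat` calls above could be guarded rather than left to error out. A sketch, with the directory taken from the log and the helper name hypothetical:

```shell
# Dump the clustering seed files if configure_cluster.sh produced them,
# and warn once per missing file instead of surfacing raw cat errors.
dump_cluster_conf() {
  local dir=$1 f
  for f in pekko.conf modules.conf module-shards.conf; do
    if [ -f "$dir/$f" ]; then
      echo "Dump $f"
      cat "$dir/$f"
    else
      echo "WARN: $dir/$f missing - clustering is not configured" >&2
    fi
  done
}

dump_cluster_conf /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial
```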
Locating config plan to use...
Finished running config plans
Starting member-1 with IP address 10.30.171.178
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
Redirecting karaf console output to karaf_console.log
Starting controller...
start: Redirecting Karaf output to /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
Starting member-2 with IP address 10.30.170.138
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
Redirecting karaf console output to karaf_console.log
Starting controller...
start: Redirecting Karaf output to /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
Starting member-3 with IP address 10.30.170.234
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
Redirecting karaf console output to karaf_console.log
Starting controller...
start: Redirecting Karaf output to /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins732607331460450147.sh
common-functions.sh is being sourced
common-functions environment:
MAVENCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg
ACTUALFEATURES:
FEATURESCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.apache.karaf.features.cfg
CUSTOMPROP: /tmp/karaf-0.22.1-SNAPSHOT/etc/custom.properties
LOGCONF: /tmp/karaf-0.22.1-SNAPSHOT/etc/org.ops4j.pax.logging.cfg
MEMCONF: /tmp/karaf-0.22.1-SNAPSHOT/bin/setenv
CONTROLLERMEM: 2048m
AKKACONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/pekko.conf
MODULESCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/modules.conf
MODULESHARDSCONF: /tmp/karaf-0.22.1-SNAPSHOT/configuration/initial/module-shards.conf
SUITES:
+ echo '#################################################'
#################################################
+ echo '## Verify Cluster is UP ##'
## Verify Cluster is UP ##
+ echo '#################################################'
#################################################
+ create_post_startup_script
+ cat
+ copy_and_run_post_startup_script
+ seed_index=1
++ seq 1 3
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_1_IP
+ echo 'Execute the post startup script on controller 10.30.171.178'
Execute the post startup script on controller 10.30.171.178
+ scp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/post-startup-script.sh 10.30.171.178:/tmp/
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ ssh 10.30.171.178 'bash /tmp/post-startup-script.sh 1'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
/tmp/post-startup-script.sh: line 4: netstat: command not found
(previous message repeated 11 times while polling)
Waiting up to 3 minutes for controller to come up, checking every 5 seconds...
2025-08-16T00:56:09,161 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 209 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.]
Controller is UP
2025-08-16T00:56:09,161 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 209 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.]
Listing all open ports on controller system...
/tmp/post-startup-script.sh: line 51: netstat: command not found
looking for "BindException: Address already in use" in log file
looking for "server is unhealthy" in log file
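The `netstat: command not found` failures above recur on every node: CentOS 8 minimal images ship iproute2 (`ss`) but not net-tools, so each `netstat` call in post-startup-script.sh fails. A minimal sketch of a portable port probe, assuming the script only needs the TCP listener list (`list_listening_ports` is a hypothetical helper, not part of the actual script):

```shell
# Hypothetical replacement for the netstat calls in post-startup-script.sh.
# Prefers iproute2's `ss` (present on CentOS 8) and falls back to legacy
# net-tools `netstat` where it still exists.
list_listening_ports() {
    if command -v ss >/dev/null 2>&1; then
        ss -tln            # iproute2: numeric TCP listeners
    elif command -v netstat >/dev/null 2>&1; then
        netstat -tln       # legacy net-tools fallback
    else
        echo "neither ss nor netstat available" >&2
        return 1
    fi
}

if list_listening_ports >/dev/null; then
    echo "port probe ok"
fi
```

With this shape the "Listing all open ports" step at line 51 of the script would produce output instead of the error seen above.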
+ '[' 1 == 0 ']'
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_2_IP
+ echo 'Execute the post startup script on controller 10.30.170.138'
Execute the post startup script on controller 10.30.170.138
+ scp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/post-startup-script.sh 10.30.170.138:/tmp/
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ ssh 10.30.170.138 'bash /tmp/post-startup-script.sh 2'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
/tmp/post-startup-script.sh: line 4: netstat: command not found
(previous message repeated 11 more times)
Waiting up to 3 minutes for controller to come up, checking every 5 seconds...
2025-08-16T00:56:10,718 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 209 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.]
Controller is UP
2025-08-16T00:56:10,718 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 209 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.]
Listing all open ports on controller system...
/tmp/post-startup-script.sh: line 51: netstat: command not found
looking for "BindException: Address already in use" in log file
looking for "server is unhealthy" in log file
+ '[' 2 == 0 ']'
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_3_IP
+ echo 'Execute the post startup script on controller 10.30.170.234'
Execute the post startup script on controller 10.30.170.234
+ scp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/post-startup-script.sh 10.30.170.234:/tmp/
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ ssh 10.30.170.234 'bash /tmp/post-startup-script.sh 3'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
/tmp/post-startup-script.sh: line 4: netstat: command not found
(previous message repeated 11 more times)
Waiting up to 3 minutes for controller to come up, checking every 5 seconds...
2025-08-16T00:56:09,559 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 209 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.]
Controller is UP
2025-08-16T00:56:09,559 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 209 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.]
Listing all open ports on controller system...
/tmp/post-startup-script.sh: line 51: netstat: command not found
looking for "BindException: Address already in use" in log file
looking for "server is unhealthy" in log file
+ '[' 0 == 0 ']'
+ seed_index=1
+ dump_controller_threads
++ seq 1 3
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_1_IP
+ echo 'Let'\''s take the karaf thread dump'
Let's take the karaf thread dump
+ ssh 10.30.171.178 'sudo ps aux'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_before.log
++ grep -v grep
++ tr -s ' '
++ cut -f2 '-d '
+ pid=2013
+ echo 'karaf main: org.apache.karaf.main.Main, pid:2013'
karaf main: org.apache.karaf.main.Main, pid:2013
+ ssh 10.30.171.178 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2013'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_2_IP
+ echo 'Let'\''s take the karaf thread dump'
Let's take the karaf thread dump
+ ssh 10.30.170.138 'sudo ps aux'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_before.log
++ grep -v grep
++ cut -f2 '-d '
++ tr -s ' '
+ pid=2011
+ echo 'karaf main: org.apache.karaf.main.Main, pid:2011'
karaf main: org.apache.karaf.main.Main, pid:2011
+ ssh 10.30.170.138 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2011'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_3_IP
+ echo 'Let'\''s take the karaf thread dump'
Let's take the karaf thread dump
+ ssh 10.30.170.234 'sudo ps aux'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_before.log
++ grep -v grep
++ cut -f2 '-d '
++ tr -s ' '
+ pid=2004
+ echo 'karaf main: org.apache.karaf.main.Main, pid:2004'
karaf main: org.apache.karaf.main.Main, pid:2004
+ ssh 10.30.170.234 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2004'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
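The pid-extraction pipeline traced above (`grep … | grep -v grep | tr -s ' ' | cut -f2 '-d '`) is easy to misread, so here is a self-contained sketch against a fabricated `ps aux` snapshot (the file contents below are illustrative, not from this run):

```shell
# Fabricated ps-aux-style capture; only the PID column layout matters.
cat > /tmp/ps_before.log <<'EOF'
jenkins   2013 45.0 12.3 java -Xmx2048m ... org.apache.karaf.main.Main
jenkins   3001  0.0  0.0 grep org.apache.karaf.main.Main
EOF

# tr -s ' ' squeezes ps's column padding to single spaces so that
# cut -f2 -d' ' reliably selects field 2 (the PID); grep -v grep drops
# the grep process's own entry.
pid=$(grep org.apache.karaf.main.Main /tmp/ps_before.log \
      | grep -v grep | tr -s ' ' | cut -f2 -d' ')
echo "karaf pid: $pid"
```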
+ '[' 0 -gt 0 ']'
+ echo 'Generating controller variables...'
Generating controller variables...
++ seq 1 3
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_1_IP
+ odl_variables=' -v ODL_SYSTEM_1_IP:10.30.171.178'
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_2_IP
+ odl_variables=' -v ODL_SYSTEM_1_IP:10.30.171.178 -v ODL_SYSTEM_2_IP:10.30.170.138'
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_3_IP
+ odl_variables=' -v ODL_SYSTEM_1_IP:10.30.171.178 -v ODL_SYSTEM_2_IP:10.30.170.138 -v ODL_SYSTEM_3_IP:10.30.170.234'
+ echo 'Generating mininet variables...'
Generating mininet variables...
++ seq 1 1
+ for i in $(seq 1 "${NUM_TOOLS_SYSTEM}")
+ MININETIP=TOOLS_SYSTEM_1_IP
+ tools_variables=' -v TOOLS_SYSTEM_1_IP:10.30.170.78'
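The variable-generation loops above accumulate Robot Framework `-v` options from numbered `ODL_SYSTEM_<i>_IP` variables. How common-functions resolves the indirect name is an assumption here (the trace only shows `CONTROLLERIP=ODL_SYSTEM_1_IP` and the resulting strings); a sketch using `eval` for the lookup, with the IPs taken from this run:

```shell
# Sketch of the odl_variables loop. The eval-based indirection is an
# assumed implementation detail, not copied from common-functions.sh.
ODL_SYSTEM_1_IP=10.30.171.178
ODL_SYSTEM_2_IP=10.30.170.138
NUM_ODL_SYSTEM=2

odl_variables=""
for i in $(seq 1 "${NUM_ODL_SYSTEM}"); do
    CONTROLLERIP="ODL_SYSTEM_${i}_IP"
    eval ip="\$${CONTROLLERIP}"          # resolve the name to its value
    odl_variables="${odl_variables} -v ${CONTROLLERIP}:${ip}"
done
echo "odl_variables:${odl_variables}"
```

The accumulated string is later splatted straight into the `robot` invocation, which is why every controller IP appears twice there (once as `CONTROLLERn` and once as `ODL_SYSTEM_n_IP`).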
+ get_test_suites SUITES
+ local __suite_list=SUITES
+ echo 'Locating test plan to use...'
Locating test plan to use...
+ testplan_filepath=/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering-titanium.txt
+ '[' '!' -f /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering-titanium.txt ']'
+ testplan_filepath=/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering.txt
+ '[' disabled '!=' disabled ']'
+ echo 'Changing the testplan path...'
Changing the testplan path...
+ sed s:integration:/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering.txt
+ cat testplan.txt
# Place the suites in run order:
/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster
/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster
+ '[' -z '' ']'
++ grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt
++ tr '\012' ' '
+ suite_list='/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster '
+ eval 'SUITES='\''/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster '\'''
++ SUITES='/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster '
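The testplan handling traced above has two steps: `sed` rewrites the `integration` path prefix to the workspace, then comments and blank lines are stripped and the surviving suite paths joined onto one line. A self-contained sketch (the workspace path below is illustrative):

```shell
# Miniature testplan in the same format as ovsdb-upstream-clustering.txt.
cat > /tmp/testplan.src <<'EOF'
# Place the suites in run order:
integration/test/csit/suites/ovsdb/Southbound_Cluster
EOF

# Step 1: rewrite the path prefix (':' used as the sed delimiter since
# the replacement contains slashes).
sed 's:integration:/w/workspace/example:' /tmp/testplan.src > /tmp/testplan.txt

# Step 2: drop comment/blank lines, join remaining paths with spaces.
suite_list=$(grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' /tmp/testplan.txt | tr '\012' ' ')
echo "suite_list=${suite_list}"
```

Note that in the run above the same suite path appears twice in `SUITES` because the testplan file itself lists `Southbound_Cluster` twice, so Robot executes the suite two times.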
+ echo 'Starting Robot test suites /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster ...'
Starting Robot test suites /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster ...
+ robot -N ovsdb-upstream-clustering.txt --removekeywords wuks -e exclude -e skip_if_titanium -v BUNDLEFOLDER:karaf-0.22.1-SNAPSHOT -v BUNDLE_URL:https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.1-SNAPSHOT/karaf-0.22.1-20250815.175747-18.zip -v CONTROLLER:10.30.171.178 -v CONTROLLER1:10.30.170.138 -v CONTROLLER2:10.30.170.234 -v CONTROLLER_USER:jenkins -v JAVA_HOME:/usr/lib/jvm/java-21-openjdk-amd64 -v JDKVERSION:openjdk21 -v JENKINS_WORKSPACE:/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium -v MININET:10.30.170.78 -v MININET1: -v MININET2: -v MININET_USER:jenkins -v NEXUSURL_PREFIX:https://nexus.opendaylight.org -v NUM_ODL_SYSTEM:3 -v NUM_TOOLS_SYSTEM:1 -v ODL_STREAM:titanium -v ODL_SYSTEM_IP:10.30.171.178 -v ODL_SYSTEM_1_IP:10.30.171.178 -v ODL_SYSTEM_2_IP:10.30.170.138 -v ODL_SYSTEM_3_IP:10.30.170.234 -v ODL_SYSTEM_USER:jenkins -v TOOLS_SYSTEM_IP:10.30.170.78 -v TOOLS_SYSTEM_1_IP:10.30.170.78 -v TOOLS_SYSTEM_USER:jenkins -v USER_HOME:/home/jenkins -v IS_KARAF_APPL:True -v WORKSPACE:/tmp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster
==============================================================================
ovsdb-upstream-clustering.txt
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound Cluster :...
==============================================================================
Check Shards Status Before Fail :: Check Status for all shards in ... | FAIL |
Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.fetchAttributes(ReadHandl...
[ Message content over the limit has been removed. ]
...rvice.jetty.internal.PrioritizedHandlerCollection.handle(PrioritizedHandlerCollection.java:96)\\n\\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
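The `Check Shards Status` failure above is double-layered: Jolokia returns a 404 error body because the `member-2-shard-topology-operational` MBean is not registered, and the Robot keyword then feeds that body to Python's `json.loads`, which rejects it because the embedded stacktrace string contains raw (unescaped) control characters. A sketch reproducing the parse failure with a simulated body (not the real response), plus the `strict=False` escape hatch:

```shell
out=$(python3 - <<'EOF'
import json

# Simulated Jolokia-style fragment with a literal tab inside a string
# value; strict JSON requires control characters to be escaped.
body = '{"error": "InstanceNotFoundException :\ttopology-operational"}'

try:
    json.loads(body)
except json.JSONDecodeError as exc:
    print("strict parse failed:", exc.msg)

doc = json.loads(body, strict=False)  # tolerate embedded control chars
print("lenient parse ok:", doc["error"].split(":")[0])
EOF
)
echo "$out"
```

Even with lenient parsing the keyword would still fail, and more usefully: the 404 means the member-2 shard MBean genuinely does not exist on that node, and the `JSONDecodeError` merely masks that root cause.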
Start OVS Multiple Connections :: Connect OVS to all cluster insta... | PASS |
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS |
------------------------------------------------------------------------------
Create Tap Device Before Fail :: Create tap devices to add to the ... | PASS |
------------------------------------------------------------------------------
Add Tap Device Manually and Verify Before Fail :: Add tap devices ... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS |
------------------------------------------------------------------------------
Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F548ac92b-69d7-4d67-8018-8c0c339a56f8%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Kill Owner Instance :: Kill Owner Instance and verify it is dead | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Check Shards Status After Fail :: Create original cluster list and... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Create Tap Device After Fail :: Create tap devices to add to the b... | PASS |
------------------------------------------------------------------------------
Add Tap Device Manually and Verify After Fail :: Add tap devices t... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Fail :: Delete bridge ... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Start Old Owner Instance :: Start Owner Instance and verify it is ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Check Shards Status After Recover :: Create original cluster list ... | FAIL |
Keyword 'Check Ovsdb Shards Status' failed after retrying for 1 minute 30 seconds. The last error was: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.re...
[ Message content over the limit has been removed. ]
...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Recover :: Create bridge w... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS |
------------------------------------------------------------------------------
Create Tap Device After Recover :: Create tap devices to add to th... | PASS |
------------------------------------------------------------------------------
Add Tap Device Manually and Verify After Recover :: Add tap device... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS |
------------------------------------------------------------------------------
Verify Modified Port After Recover :: Verify modified port exists ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F548ac92b-69d7-4d67-8018-8c0c339a56f8%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Bridge In Old Owner and Verify After Recover :: Create Brid... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F548ac92b-69d7-4d67-8018-8c0c339a56f8%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS |
------------------------------------------------------------------------------
ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound ... | FAIL |
44 tests, 13 passed, 31 failed
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluster Extensi...
==============================================================================
Check Shards Status Before Fail :: Check Status for all shards in ... | FAIL |
Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.fetchAttributes(ReadHandl...
[ Message content over the limit has been removed. ]
...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
Start OVS Multiple Connections :: Connect OVS to all cluster insta... | PASS |
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS |
------------------------------------------------------------------------------
Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F01e0faf1-bd5e-45d3-9151-980f4f97f734%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Kill Candidate Instance :: Kill Owner Instance and verify it is dead | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Check Shards Status After Fail :: Create original cluster list and... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Fail :: Delete bridge ... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Start Old Candidate Instance :: Start Owner Instance and verify it... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Check Shards Status After Recover :: Create original cluster list ... | FAIL |
Keyword 'Check Ovsdb Shards Status' failed after retrying for 1 minute 30 seconds. The last error was: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.re...
[ Message content over the limit has been removed. ]
...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Recover :: Create bridge w... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS |
------------------------------------------------------------------------------
Verify Modified Port After Recover :: Verify modified port exists ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F01e0faf1-bd5e-45d3-9151-980f4f97f734%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Bridge In Old Candidate and Verify After Recover :: Create ... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F01e0faf1-bd5e-45d3-9151-980f4f97f734%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS |
------------------------------------------------------------------------------
ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluste... | FAIL |
38 tests, 8 passed, 30 failed
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster | FAIL |
82 tests, 21 passed, 61 failed
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound Cluster :...
==============================================================================
Check Shards Status Before Fail :: Check Status for all shards in ... | FAIL |
Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.fetchAttributes(ReadHandl...
[ Message content over the limit has been removed. ]
...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
Start OVS Multiple Connections :: Connect OVS to all cluster insta... | PASS |
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS |
------------------------------------------------------------------------------
Create Tap Device Before Fail :: Create tap devices to add to the ... | PASS |
------------------------------------------------------------------------------
Add Tap Device Manually and Verify Before Fail :: Add tap devices ... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS |
------------------------------------------------------------------------------
Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Fcde5392b-c3ca-4ff3-89fe-27b13e517b63%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Kill Owner Instance :: Kill Owner Instance and verify it is dead | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Check Shards Status After Fail :: Create original cluster list and... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Create Tap Device After Fail :: Create tap devices to add to the b... | PASS |
------------------------------------------------------------------------------
Add Tap Device Manually and Verify After Fail :: Add tap devices t... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Fail :: Delete bridge ... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Start Old Owner Instance :: Start Owner Instance and verify it is ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Check Shards Status After Recover :: Create original cluster list ... | FAIL |
Keyword 'Check Ovsdb Shards Status' failed after retrying for 1 minute 30 seconds. The last error was: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.re...
[ Message content over the limit has been removed. ]
...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Recover :: Create bridge w... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS |
------------------------------------------------------------------------------
Create Tap Device After Recover :: Create tap devices to add to th... | PASS |
------------------------------------------------------------------------------
Add Tap Device Manually and Verify After Recover :: Add tap device... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS |
------------------------------------------------------------------------------
Verify Modified Port After Recover :: Verify modified port exists ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Fcde5392b-c3ca-4ff3-89fe-27b13e517b63%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Bridge In Old Owner and Verify After Recover :: Create Brid... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Fcde5392b-c3ca-4ff3-89fe-27b13e517b63%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS |
------------------------------------------------------------------------------
ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound ... | FAIL |
44 tests, 13 passed, 31 failed
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluster Extensi...
==============================================================================
Check Shards Status Before Fail :: Check Status for all shards in ... | FAIL |
Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.fetchAttributes(ReadHandl...
[ Message content over the limit has been removed. ]
...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
Start OVS Multiple Connections :: Connect OVS to all cluster insta... | PASS |
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS |
------------------------------------------------------------------------------
Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Fc0e44bfe-0d1a-4868-ae20-270af5011403%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Kill Candidate Instance :: Kill Owner Instance and verify it is dead | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Check Shards Status After Fail :: Create original cluster list and... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL |
Variable '${original_owner}' not found.
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Fail :: Delete bridge ... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL |
Variable '${new_cluster_list}' not found.
------------------------------------------------------------------------------
Start Old Candidate Instance :: Start Owner Instance and verify it... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Check Shards Status After Recover :: Create original cluster list ... | FAIL |
Keyword 'Check Ovsdb Shards Status' failed after retrying for 1 minute 30 seconds. The last error was: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.re...
[ Message content over the limit has been removed. ]
...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 182 (char 595)
------------------------------------------------------------------------------
Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL |
Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Successor list [] is not the same as expected [2, 3]
Lengths are different: 2 != 0
------------------------------------------------------------------------------
Create Bridge Manually and Verify After Recover :: Create bridge w... | PASS |
------------------------------------------------------------------------------
Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS |
------------------------------------------------------------------------------
Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS |
------------------------------------------------------------------------------
Verify Modified Port After Recover :: Verify modified port exists ... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Fc0e44bfe-0d1a-4868-ae20-270af5011403%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${new_owner}' not found.
------------------------------------------------------------------------------
Create Bridge In Old Candidate and Verify After Recover :: Create ... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL |
Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.178:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Fc0e44bfe-0d1a-4868-ae20-270af5011403%2Fbridge%2Fbr01?content=nonconfig
------------------------------------------------------------------------------
Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL |
Variable '${original_candidate}' not found.
------------------------------------------------------------------------------
Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS |
------------------------------------------------------------------------------
ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluste... | FAIL |
38 tests, 8 passed, 30 failed
==============================================================================
ovsdb-upstream-clustering.txt.Southbound Cluster | FAIL |
82 tests, 21 passed, 61 failed
==============================================================================
ovsdb-upstream-clustering.txt | FAIL |
164 tests, 42 passed, 122 failed
==============================================================================
Output: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/output.xml
Log: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/log.html
Report: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/report.html
+ true
+ echo 'Examining the files in data/log and checking filesize'
Examining the files in data/log and checking filesize
+ ssh 10.30.171.178 'ls -altr /tmp/karaf-0.22.1-SNAPSHOT/data/log/'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
total 520
drwxrwxr-x 2 jenkins jenkins 4096 Aug 16 00:55 .
-rw-rw-r-- 1 jenkins jenkins 1720 Aug 16 00:55 karaf_console.log
drwxrwxr-x 9 jenkins jenkins 4096 Aug 16 00:56 ..
-rw-rw-r-- 1 jenkins jenkins 517824 Aug 16 01:11 karaf.log
+ ssh 10.30.171.178 'du -hs /tmp/karaf-0.22.1-SNAPSHOT/data/log/*'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
508K /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log
4.0K /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
+ ssh 10.30.170.138 'ls -altr /tmp/karaf-0.22.1-SNAPSHOT/data/log/'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
total 520
drwxrwxr-x 2 jenkins jenkins 4096 Aug 16 00:55 .
-rw-rw-r-- 1 jenkins jenkins 1720 Aug 16 00:55 karaf_console.log
drwxrwxr-x 9 jenkins jenkins 4096 Aug 16 00:56 ..
-rw-rw-r-- 1 jenkins jenkins 519088 Aug 16 01:11 karaf.log
+ ssh 10.30.170.138 'du -hs /tmp/karaf-0.22.1-SNAPSHOT/data/log/*'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
508K /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log
4.0K /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
+ ssh 10.30.170.234 'ls -altr /tmp/karaf-0.22.1-SNAPSHOT/data/log/'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
total 520
drwxrwxr-x 2 jenkins jenkins 4096 Aug 16 00:55 .
-rw-rw-r-- 1 jenkins jenkins 1720 Aug 16 00:55 karaf_console.log
drwxrwxr-x 9 jenkins jenkins 4096 Aug 16 00:56 ..
-rw-rw-r-- 1 jenkins jenkins 517793 Aug 16 01:11 karaf.log
+ ssh 10.30.170.234 'du -hs /tmp/karaf-0.22.1-SNAPSHOT/data/log/*'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
508K /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log
4.0K /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
+ set +e
++ seq 1 3
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_1_IP
+ echo 'Let'\''s take the karaf thread dump again'
Let's take the karaf thread dump again
+ ssh 10.30.171.178 'sudo ps aux'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
++ tr -s ' '
++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_after.log
++ cut -f2 '-d '
++ grep -v grep
+ pid=2013
+ echo 'karaf main: org.apache.karaf.main.Main, pid:2013'
karaf main: org.apache.karaf.main.Main, pid:2013
+ ssh 10.30.171.178 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2013'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ echo 'killing karaf process...'
killing karaf process...
+ ssh 10.30.171.178 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh'
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_2_IP
+ echo 'Let'\''s take the karaf thread dump again'
Let's take the karaf thread dump again
+ ssh 10.30.170.138 'sudo ps aux'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_after.log
++ grep -v grep
++ cut -f2 '-d '
++ tr -s ' '
+ pid=2011
+ echo 'karaf main: org.apache.karaf.main.Main, pid:2011'
karaf main: org.apache.karaf.main.Main, pid:2011
+ ssh 10.30.170.138 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2011'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ echo 'killing karaf process...'
killing karaf process...
+ ssh 10.30.170.138 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh'
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_3_IP
+ echo 'Let'\''s take the karaf thread dump again'
Let's take the karaf thread dump again
+ ssh 10.30.170.234 'sudo ps aux'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_after.log
++ grep -v grep
++ tr -s ' '
++ cut -f2 '-d '
+ pid=2004
+ echo 'karaf main: org.apache.karaf.main.Main, pid:2004'
karaf main: org.apache.karaf.main.Main, pid:2004
+ ssh 10.30.170.234 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2004'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ echo 'killing karaf process...'
killing karaf process...
+ ssh 10.30.170.234 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh'
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ sleep 5
++ seq 1 3
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_1_IP
+ echo 'Compressing karaf.log 1'
Compressing karaf.log 1
+ ssh 10.30.171.178 gzip --best /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ echo 'Fetching compressed karaf.log 1'
Fetching compressed karaf.log 1
+ scp 10.30.171.178:/tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log.gz odl1_karaf.log.gz
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ ssh 10.30.171.178 rm -f /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log.gz
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ scp 10.30.171.178:/tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log odl1_karaf_console.log
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ ssh 10.30.171.178 rm -f /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
+ echo 'Fetch GC logs'
Fetch GC logs
+ mkdir -p gclogs-1
+ scp '10.30.171.178:/tmp/karaf-0.22.1-SNAPSHOT/data/log/*.log' gclogs-1/
Warning: Permanently added '10.30.171.178' (ECDSA) to the list of known hosts.
scp: /tmp/karaf-0.22.1-SNAPSHOT/data/log/*.log: No such file or directory
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_2_IP
+ echo 'Compressing karaf.log 2'
Compressing karaf.log 2
+ ssh 10.30.170.138 gzip --best /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ echo 'Fetching compressed karaf.log 2'
Fetching compressed karaf.log 2
+ scp 10.30.170.138:/tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log.gz odl2_karaf.log.gz
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ ssh 10.30.170.138 rm -f /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log.gz
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ scp 10.30.170.138:/tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log odl2_karaf_console.log
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ ssh 10.30.170.138 rm -f /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
+ echo 'Fetch GC logs'
Fetch GC logs
+ mkdir -p gclogs-2
+ scp '10.30.170.138:/tmp/karaf-0.22.1-SNAPSHOT/data/log/*.log' gclogs-2/
Warning: Permanently added '10.30.170.138' (ECDSA) to the list of known hosts.
scp: /tmp/karaf-0.22.1-SNAPSHOT/data/log/*.log: No such file or directory
+ for i in $(seq 1 "${NUM_ODL_SYSTEM}")
+ CONTROLLERIP=ODL_SYSTEM_3_IP
+ echo 'Compressing karaf.log 3'
Compressing karaf.log 3
+ ssh 10.30.170.234 gzip --best /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ echo 'Fetching compressed karaf.log 3'
Fetching compressed karaf.log 3
+ scp 10.30.170.234:/tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log.gz odl3_karaf.log.gz
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ ssh 10.30.170.234 rm -f /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf.log.gz
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ scp 10.30.170.234:/tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log odl3_karaf_console.log
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ ssh 10.30.170.234 rm -f /tmp/karaf-0.22.1-SNAPSHOT/data/log/karaf_console.log
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
+ echo 'Fetch GC logs'
Fetch GC logs
+ mkdir -p gclogs-3
+ scp '10.30.170.234:/tmp/karaf-0.22.1-SNAPSHOT/data/log/*.log' gclogs-3/
Warning: Permanently added '10.30.170.234' (ECDSA) to the list of known hosts.
scp: /tmp/karaf-0.22.1-SNAPSHOT/data/log/*.log: No such file or directory
+ echo 'Examine copied files'
Examine copied files
+ ls -lt
total 76008
drwxrwxr-x. 2 jenkins jenkins 6 Aug 16 01:11 gclogs-3
-rw-rw-r--. 1 jenkins jenkins 1720 Aug 16 01:11 odl3_karaf_console.log
-rw-rw-r--. 1 jenkins jenkins 36581 Aug 16 01:11 odl3_karaf.log.gz
drwxrwxr-x. 2 jenkins jenkins 6 Aug 16 01:11 gclogs-2
-rw-rw-r--. 1 jenkins jenkins 1720 Aug 16 01:11 odl2_karaf_console.log
-rw-rw-r--. 1 jenkins jenkins 36616 Aug 16 01:11 odl2_karaf.log.gz
drwxrwxr-x. 2 jenkins jenkins 6 Aug 16 01:11 gclogs-1
-rw-rw-r--. 1 jenkins jenkins 1720 Aug 16 01:11 odl1_karaf_console.log
-rw-rw-r--. 1 jenkins jenkins 36716 Aug 16 01:11 odl1_karaf.log.gz
-rw-rw-r--. 1 jenkins jenkins 121292 Aug 16 01:11 karaf_3_2004_threads_after.log
-rw-rw-r--. 1 jenkins jenkins 13627 Aug 16 01:11 ps_after.log
-rw-rw-r--. 1 jenkins jenkins 123283 Aug 16 01:11 karaf_2_2011_threads_after.log
-rw-rw-r--. 1 jenkins jenkins 125360 Aug 16 01:11 karaf_1_2013_threads_after.log
-rw-rw-r--. 1 jenkins jenkins 260991 Aug 16 01:11 report.html
-rw-rw-r--. 1 jenkins jenkins 2198131 Aug 16 01:11 log.html
-rw-rw-r--. 1 jenkins jenkins 74499282 Aug 16 01:11 output.xml
-rw-rw-r--. 1 jenkins jenkins 245 Aug 16 00:59 testplan.txt
-rw-rw-r--. 1 jenkins jenkins 90299 Aug 16 00:59 karaf_3_2004_threads_before.log
-rw-rw-r--. 1 jenkins jenkins 14188 Aug 16 00:59 ps_before.log
-rw-rw-r--. 1 jenkins jenkins 91129 Aug 16 00:59 karaf_2_2011_threads_before.log
-rw-rw-r--. 1 jenkins jenkins 90305 Aug 16 00:59 karaf_1_2013_threads_before.log
-rw-rw-r--. 1 jenkins jenkins 3106 Aug 16 00:55 post-startup-script.sh
-rw-rw-r--. 1 jenkins jenkins 252 Aug 16 00:55 startup-script.sh
-rw-rw-r--. 1 jenkins jenkins 3450 Aug 16 00:55 configuration-script.sh
-rw-rw-r--. 1 jenkins jenkins 335 Aug 16 00:55 detect_variables.env
-rw-rw-r--. 1 jenkins jenkins 2619 Aug 16 00:55 pom.xml
-rw-rw-r--. 1 jenkins jenkins 92 Aug 16 00:55 set_variables.env
-rw-rw-r--. 1 jenkins jenkins 357 Aug 16 00:55 slave_addresses.txt
-rw-rw-r--. 1 jenkins jenkins 570 Aug 16 00:54 requirements.txt
-rw-rw-r--. 1 jenkins jenkins 26 Aug 16 00:54 env.properties
-rw-rw-r--. 1 jenkins jenkins 334 Aug 16 00:53 stack-parameters.yaml
drwxrwxr-x. 7 jenkins jenkins 4096 Aug 16 00:52 test
drwxrwxr-x. 2 jenkins jenkins 6 Aug 16 00:52 test@tmp
-rw-rw-r--. 1 jenkins jenkins 1404 Aug 15 17:57 maven-metadata.xml
+ true
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/sh /tmp/jenkins4326027143980439824.sh
Cleaning up Robot installation...
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 5277 killed;
[ssh-agent] Stopped.
Recording plot data
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
Build step 'Publish Robot Framework test results' changed build result to UNSTABLE
[PostBuildScript] - [INFO] Executing post build scripts.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins5497756255983295499.sh
Archiving csit artifacts
mv: cannot stat '*_1.png': No such file or directory
mv: cannot stat '/tmp/odl1_*': No such file or directory
mv: cannot stat '*_2.png': No such file or directory
mv: cannot stat '/tmp/odl2_*': No such file or directory
mv: cannot stat '*_3.png': No such file or directory
mv: cannot stat '/tmp/odl3_*': No such file or directory
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 5928k 0 5928k 0 0 5384k 0 --:--:-- 0:00:01 --:--:-- 5379k
100 8496k 0 8496k 0 0 5586k 0 --:--:-- 0:00:01 --:--:-- 5582k
Archive: robot-plugin.zip
inflating: ./archives/robot-plugin/log.html
inflating: ./archives/robot-plugin/output.xml
inflating: ./archives/robot-plugin/report.html
mv: cannot stat '*.log.gz': No such file or directory
mv: cannot stat '*.csv': No such file or directory
mv: cannot stat '*.png': No such file or directory
[PostBuildScript] - [INFO] Executing post build scripts.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins17732437275204582288.sh
[PostBuildScript] - [INFO] Executing post build scripts.
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
OS_CLOUD=vex
OS_STACK_NAME=releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358
[EnvInject] - Variables injected successfully.
provisioning config files...
copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins1135889168917484635.sh
---> openstack-stack-delete.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.13 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible.
lf-activate-venv(): INFO: Installing: lftools[openstack] kubernetes python-heatclient python-openstackclient
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
INFO: Retrieving stack cost for: releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358
DEBUG: Successfully retrieved stack cost: total: 0.38999999999999996
INFO: Deleting stack releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358
Successfully deleted stack releng-ovsdb-csit-3node-upstream-clustering-only-titanium-358
[PostBuildScript] - [INFO] Executing post build scripts.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins361936793413540315.sh
---> sysstat.sh
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins11497701023512356361.sh
---> package-listing.sh
++ tr '[:upper:]' '[:lower:]'
++ facter osfamily
+ OS_FAMILY=redhat
+ workspace=/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ rpm -qa
+ sort
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium ']'
+ mkdir -p /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/archives/
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins5137766876385975808.sh
---> capture-instance-metadata.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins15522822112460319660.sh
provisioning config files...
Could not find credentials [logs] for ovsdb-csit-3node-upstream-clustering-only-titanium #358
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium@tmp/config3614633827090081079tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index]
Run condition [Regular expression match] enabling perform for step [Provide Configuration files]
provisioning config files...
copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins863140434374425522.sh
---> create-netrc.sh
WARN: Log server credential not found.
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins10887407165161910006.sh
---> python-tools-install.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins16195328498968742994.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins3364234036419045149.sh
---> job-cost.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
DEBUG: total: 0.38999999999999996
INFO: Retrieving Stack Cost...
INFO: Retrieving Pricing Info for: v3-standard-2
INFO: Archiving Costs
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins9634725440699161759.sh
---> logs-deploy.sh
Setup pyenv:
system
3.8.13
3.9.13
3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-2J6V from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-2J6V/bin to PATH
WARNING: Nexus logging server not set
INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/ovsdb-csit-3node-upstream-clustering-only-titanium/358/
INFO: archiving logs to S3
---> uname -a:
Linux prd-centos8-robot-2c-8g-1708.novalocal 4.18.0-553.5.1.el8.x86_64 #1 SMP Tue May 21 05:46:01 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC-Rome Processor
Stepping: 0
CPU MHz: 2799.998
BogoMIPS: 5599.99
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0,1
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
2
---> df -h:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 3.8G 17M 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/vda1 40G 8.5G 32G 22% /
tmpfs 770M 0 770M 0% /run/user/1001
---> free -m:
total used free shared buff/cache available
Mem: 7697 650 4716 19 2330 6749
Swap: 1023 0 1023
---> ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000
link/ether fa:16:3e:3c:18:72 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
inet 10.30.171.114/23 brd 10.30.171.255 scope global dynamic noprefixroute eth0
valid_lft 85092sec preferred_lft 85092sec
inet6 fe80::f816:3eff:fe3c:1872/64 scope link
valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.18.0-553.5.1.el8.x86_64 (centos-stream-8-robot-7d7a37eb-bc14-4dd6-9530-dc22c5eae738.noval) 08/16/2025 _x86_64_ (2 CPU)
00:51:36 LINUX RESTART (2 CPU)
12:52:01 AM tps rtps wtps bread/s bwrtn/s
12:53:01 AM 142.02 61.00 81.02 7651.32 24803.20
12:54:01 AM 92.52 0.72 91.80 52.92 10897.38
12:55:01 AM 29.51 0.35 29.16 61.18 2631.32
12:56:01 AM 85.07 9.45 75.62 1685.59 8680.89
12:57:01 AM 7.95 0.00 7.95 0.00 604.23
12:58:01 AM 2.33 0.00 2.33 0.00 68.99
12:59:01 AM 0.15 0.00 0.15 0.00 1.45
01:00:01 AM 0.58 0.22 0.37 11.73 48.84
01:01:01 AM 2.13 0.02 2.12 0.13 333.81
01:02:01 AM 0.55 0.07 0.48 1.73 193.04
01:03:01 AM 0.40 0.00 0.40 0.00 149.79
01:04:01 AM 0.33 0.00 0.33 0.00 188.22
01:05:01 AM 0.35 0.00 0.35 0.00 312.38
01:06:01 AM 0.23 0.00 0.23 0.00 106.02
01:07:01 AM 0.42 0.03 0.38 0.27 225.36
01:08:02 AM 1.45 0.00 1.45 0.00 351.02
01:09:01 AM 0.25 0.00 0.25 0.00 86.53
01:10:01 AM 0.33 0.00 0.33 0.00 260.81
01:11:01 AM 0.45 0.00 0.45 0.00 272.29
01:12:01 AM 3.52 0.35 3.17 38.26 186.12
01:13:01 AM 33.06 0.33 32.72 37.19 4199.73
Average: 19.23 3.46 15.78 454.68 2602.08
12:52:01 AM kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
12:53:01 AM 5366208 7051616 2516224 31.92 2688 1868304 696476 7.80 188076 2011132 176248
12:54:01 AM 5173528 7011300 2708904 34.37 2688 2013360 695616 7.79 198488 2159824 42488
12:55:01 AM 5154328 7041952 2728104 34.61 2688 2061180 686480 7.69 229752 2142816 51480
12:56:01 AM 4926364 7026980 2956068 37.50 2688 2267996 674276 7.55 267600 2307856 10868
12:57:01 AM 4927196 7027736 2955236 37.49 2688 2267996 674276 7.55 267600 2307828 4
12:58:01 AM 4926872 7027416 2955560 37.50 2688 2268000 674276 7.55 267600 2307932 4
12:59:01 AM 4926984 7027528 2955448 37.49 2688 2268000 674276 7.55 267616 2308060 4
01:00:01 AM 4875936 6978944 3006496 38.14 2688 2270480 756556 8.47 267828 2358328 876
01:01:01 AM 4852072 6963496 3030360 38.44 2688 2278960 756556 8.47 267852 2381620 1304
01:02:01 AM 4831904 6950408 3050528 38.70 2688 2285968 772152 8.65 267984 2402200 2616
01:03:01 AM 4824820 6946024 3057612 38.79 2688 2288660 812408 9.10 267984 2409272 956
01:04:01 AM 4815020 6944692 3067412 38.91 2688 2297172 812408 9.10 267984 2418152 3904
01:05:01 AM 4805904 6942332 3076528 39.03 2688 2303892 812408 9.10 267984 2427644 1316
01:06:01 AM 4803480 6942592 3078952 39.06 2688 2306576 812408 9.10 267984 2430196 868
01:07:01 AM 4794912 6942184 3087520 39.17 2688 2314764 826124 9.25 268156 2438256 2408
01:08:02 AM 4787364 6942168 3095068 39.27 2688 2322280 826124 9.25 268156 2446116 288
01:09:01 AM 4782988 6940196 3099444 39.32 2688 2324668 843724 9.45 268156 2450384 192
01:10:01 AM 4774156 6939780 3108276 39.43 2688 2333108 853976 9.56 268156 2458904 848
01:11:01 AM 4766756 6939724 3115676 39.53 2688 2340420 853976 9.56 268156 2466476 80
01:12:01 AM 4873536 6952428 3008896 38.17 2688 2250116 695900 7.79 335636 2309496 98132
01:13:01 AM 4808104 6885404 3074328 39.00 2688 2250600 784276 8.78 536460 2182716 10976
Average: 4895163 6972614 2987269 37.90 2688 2246786 761651 8.53 275010 2339296 19327
12:52:01 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
12:53:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:53:01 AM eth0 334.52 198.20 1758.61 59.02 0.00 0.00 0.00 0.00
12:54:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:54:01 AM eth0 60.12 47.23 715.54 8.08 0.00 0.00 0.00 0.00
12:55:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:55:01 AM eth0 20.79 18.64 86.23 5.91 0.00 0.00 0.00 0.00
12:56:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:56:01 AM eth0 600.32 435.38 360.92 101.47 0.00 0.00 0.00 0.00
12:57:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:57:01 AM eth0 2.55 1.97 1.07 0.43 0.00 0.00 0.00 0.00
12:58:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:58:01 AM eth0 1.37 0.78 0.29 0.18 0.00 0.00 0.00 0.00
12:59:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:59:01 AM eth0 4.53 2.03 0.79 0.60 0.00 0.00 0.00 0.00
01:00:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:00:01 AM eth0 147.56 131.64 28.26 10.67 0.00 0.00 0.00 0.00
01:01:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:01:01 AM eth0 130.14 131.34 42.68 9.74 0.00 0.00 0.00 0.00
01:02:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:02:01 AM eth0 14.88 15.88 21.09 1.86 0.00 0.00 0.00 0.00
01:03:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:03:01 AM eth0 192.63 193.43 32.73 14.97 0.00 0.00 0.00 0.00
01:04:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:04:01 AM eth0 135.77 137.07 43.81 9.98 0.00 0.00 0.00 0.00
01:05:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:05:01 AM eth0 15.43 15.50 20.75 2.01 0.00 0.00 0.00 0.00
01:06:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:06:01 AM eth0 184.52 185.47 32.11 14.39 0.00 0.00 0.00 0.00
01:07:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:07:01 AM eth0 136.99 136.92 44.16 10.20 0.00 0.00 0.00 0.00
01:08:02 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:08:02 AM eth0 14.73 15.65 23.34 2.12 0.00 0.00 0.00 0.00
01:09:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:09:01 AM eth0 164.50 164.32 28.47 12.74 0.00 0.00 0.00 0.00
01:10:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:10:01 AM eth0 152.59 154.72 46.29 11.32 0.00 0.00 0.00 0.00
01:11:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:11:01 AM eth0 15.81 16.48 22.45 1.91 0.00 0.00 0.00 0.00
01:12:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:12:01 AM eth0 252.14 196.12 247.86 316.09 0.00 0.00 0.00 0.00
01:13:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:13:01 AM eth0 16.86 16.16 14.70 7.92 0.00 0.00 0.00 0.00
Average: lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: eth0 123.71 105.42 170.21 28.66 0.00 0.00 0.00 0.00
---> sar -P ALL:
Linux 4.18.0-553.5.1.el8.x86_64 (centos-stream-8-robot-7d7a37eb-bc14-4dd6-9530-dc22c5eae738.noval) 08/16/2025 _x86_64_ (2 CPU)
00:51:36 LINUX RESTART (2 CPU)
12:52:01 AM CPU %user %nice %system %iowait %steal %idle
12:53:01 AM all 46.40 0.03 8.28 3.60 0.11 41.58
12:53:01 AM 0 45.27 0.02 8.94 2.84 0.10 42.83
12:53:01 AM 1 47.54 0.03 7.62 4.37 0.12 40.32
12:54:01 AM all 25.70 0.00 3.72 1.07 0.08 69.44
12:54:01 AM 0 35.05 0.00 4.79 1.32 0.08 58.75
12:54:01 AM 1 16.36 0.00 2.65 0.82 0.07 80.11
12:55:01 AM all 20.96 0.00 3.28 0.28 0.08 75.42
12:55:01 AM 0 30.86 0.00 3.69 0.15 0.08 65.22
12:55:01 AM 1 11.05 0.00 2.87 0.40 0.07 85.61
12:56:01 AM all 25.42 0.00 5.52 0.87 0.08 68.11
12:56:01 AM 0 27.55 0.00 5.53 0.70 0.08 66.13
12:56:01 AM 1 23.28 0.00 5.51 1.04 0.08 70.09
12:57:01 AM all 0.23 0.00 0.13 0.05 0.03 99.57
12:57:01 AM 0 0.33 0.00 0.15 0.10 0.03 99.38
12:57:01 AM 1 0.12 0.00 0.10 0.00 0.02 99.77
12:58:01 AM all 0.29 0.00 0.07 0.01 0.03 99.61
12:58:01 AM 0 0.50 0.00 0.07 0.00 0.02 99.42
12:58:01 AM 1 0.08 0.00 0.07 0.02 0.03 99.80
12:59:01 AM all 0.35 0.00 0.08 0.00 0.02 99.55
12:59:01 AM 0 0.57 0.00 0.07 0.00 0.00 99.37
12:59:01 AM 1 0.13 0.00 0.10 0.00 0.03 99.73
01:00:01 AM all 7.94 0.00 0.82 0.01 0.06 91.17
01:00:01 AM 0 13.08 0.00 0.98 0.02 0.07 85.85
01:00:01 AM 1 2.82 0.00 0.67 0.00 0.05 96.46
01:01:01 AM all 13.72 0.00 1.09 0.02 0.07 85.11
01:01:01 AM 0 17.49 0.00 1.26 0.00 0.07 81.19
01:01:01 AM 1 9.96 0.00 0.92 0.03 0.07 89.02
01:02:01 AM all 10.35 0.00 0.48 0.02 0.05 89.10
01:02:01 AM 0 13.77 0.00 0.57 0.00 0.05 85.61
01:02:01 AM 1 6.93 0.00 0.38 0.03 0.05 92.60
01:03:01 AM all 10.98 0.00 0.93 0.00 0.07 88.03
01:03:01 AM 0 15.41 0.00 0.95 0.00 0.07 83.58
01:03:01 AM 1 6.56 0.00 0.91 0.00 0.07 92.47
01:03:01 AM CPU %user %nice %system %iowait %steal %idle
01:04:01 AM all 13.12 0.00 1.06 0.01 0.07 85.75
01:04:01 AM 0 12.03 0.00 1.02 0.02 0.07 86.86
01:04:01 AM 1 14.20 0.00 1.09 0.00 0.07 84.64
01:05:01 AM all 10.18 0.00 0.46 0.02 0.06 89.28
01:05:01 AM 0 8.08 0.00 0.37 0.03 0.05 91.47
01:05:01 AM 1 12.28 0.00 0.55 0.00 0.07 87.10
01:06:01 AM all 10.97 0.00 0.92 0.00 0.07 88.05
01:06:01 AM 0 13.91 0.00 0.86 0.00 0.07 85.16
01:06:01 AM 1 8.05 0.00 0.97 0.00 0.07 90.91
01:07:01 AM all 13.65 0.00 1.24 0.01 0.08 85.02
01:07:01 AM 0 22.72 0.00 1.53 0.02 0.08 75.65
01:07:01 AM 1 4.61 0.00 0.96 0.00 0.07 94.37
01:08:02 AM all 11.05 0.00 0.54 0.03 0.06 88.33
01:08:02 AM 0 19.44 0.00 0.82 0.00 0.07 79.68
01:08:02 AM 1 2.64 0.00 0.25 0.05 0.05 97.00
01:09:01 AM all 9.92 0.00 0.86 0.00 0.07 89.14
01:09:01 AM 0 11.63 0.00 0.91 0.00 0.07 87.40
01:09:01 AM 1 8.22 0.00 0.82 0.00 0.07 90.89
01:10:01 AM all 13.71 0.00 1.15 0.02 0.07 85.06
01:10:01 AM 0 19.35 0.00 1.30 0.00 0.07 79.29
01:10:01 AM 1 8.09 0.00 1.01 0.03 0.07 90.80
01:11:01 AM all 10.85 0.00 0.56 0.02 0.07 88.51
01:11:01 AM 0 15.95 0.00 0.74 0.02 0.07 83.23
01:11:01 AM 1 5.74 0.00 0.38 0.02 0.07 93.79
01:12:01 AM all 19.99 0.00 2.43 0.02 0.08 77.49
01:12:01 AM 0 15.49 0.00 2.20 0.02 0.08 82.21
01:12:01 AM 1 24.48 0.00 2.65 0.02 0.07 72.78
01:13:01 AM all 29.07 0.00 3.04 0.33 0.08 67.48
01:13:01 AM 0 42.58 0.00 3.56 0.27 0.08 53.51
01:13:01 AM 1 15.60 0.00 2.53 0.40 0.07 81.40
Average: all 14.53 0.00 1.75 0.30 0.06 83.36
Average: 0 18.16 0.00 1.92 0.26 0.06 79.59
Average: 1 10.90 0.00 1.57 0.34 0.06 87.12