Started by upstream project "integration-distribution-test-titanium" build number 565 originally caused by: Started by upstream project "autorelease-release-vanadium-mvn39-openjdk21" build number 122 originally caused by: Started by timer Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-centos8-robot-2c-8g-2091 (centos8-robot-2c-8g) in workspace /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-sFHqN5k5L4Xb/agent.5309 SSH_AGENT_PID=5311 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium@tmp/private_key_7931239333142570104.key (/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium@tmp/private_key_7931239333142570104.key) [ssh-agent] Using credentials jenkins (Release Engineering Jenkins Key) The recommended git tool is: NONE using credential opendaylight-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://devvexx.opendaylight.org/mirror/integration/test > git init /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test # timeout=10 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/integration/test > git --version # timeout=10 > git --version # 'git version 2.43.0' using GIT_SSH to set credentials Release Engineering Jenkins Key [INFO] Currently running in a labeled security context [INFO] Currently SELinux is 'enforcing' on the host > /usr/bin/chcon --type=ssh_home_t /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test@tmp/jenkins-gitclient-ssh4608563550362084931.key Verifying host key using known hosts file You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/integration/test +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/integration/test # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/integration/test # timeout=10 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/integration/test using GIT_SSH to set credentials Release Engineering Jenkins Key [INFO] Currently running in a labeled security context [INFO] Currently SELinux is 'enforcing' on the host > /usr/bin/chcon --type=ssh_home_t /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test@tmp/jenkins-gitclient-ssh11394887343339579839.key Verifying host key using known hosts file You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
> git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/integration/test master # timeout=10 > git rev-parse FETCH_HEAD^{commit} # timeout=10 Checking out Revision 6c60ddfc8acc87c45ab0767b2ba1d2c4e7d34388 (origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 6c60ddfc8acc87c45ab0767b2ba1d2c4e7d34388 # timeout=10 Commit message: "Adapt test for new pce-allocation field" > git rev-parse FETCH_HEAD^{commit} # timeout=10 > git rev-list --no-walk 62cb016f4f4171033927cf2ae7f4ac5095373e88 # timeout=10 No emails were triggered. provisioning config files... copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins3741125038564637931.sh ---> python-tools-install.sh Setup pyenv: system * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.13 (set by /opt/pyenv/version) * 3.11.7 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ptD2 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH Generating Requirements File ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. httplib2 0.31.0 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible. 
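The pyparsing/httplib2 conflict reported above did not stop the job, but if it had to be cleared, a minimal sketch (assuming /tmp/venv-ptD2 is still the venv that lf-activate-venv reported) would be:

# Surface any unsatisfied requirements in the venv
/tmp/venv-ptD2/bin/python3 -m pip check
# Move pyparsing into the range httplib2 0.31.0 accepts (>=3.0.4,<4)
/tmp/venv-ptD2/bin/python3 -m pip install --quiet 'pyparsing>=3.0.4,<4'
# pip check exits non-zero if conflicts remain
/tmp/venv-ptD2/bin/python3 -m pip check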
Python 3.11.7 pip 25.3 from /tmp/venv-ptD2/lib/python3.11/site-packages/pip (python 3.11) appdirs==1.4.4 argcomplete==3.6.3 aspy.yaml==1.3.0 attrs==25.4.0 autopage==0.5.2 beautifulsoup4==4.14.2 boto3==1.41.5 botocore==1.41.5 bs4==0.0.2 cachetools==6.2.2 certifi==2025.11.12 cffi==2.0.0 cfgv==3.5.0 chardet==5.2.0 charset-normalizer==3.4.4 click==8.3.1 cliff==4.12.0 cmd2==2.7.0 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.3.1 distlib==0.4.0 dnspython==2.8.0 docker==7.1.0 dogpile.cache==1.5.0 durationpy==0.10 email-validator==2.3.0 filelock==3.20.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.45 google-auth==2.43.0 httplib2==0.31.0 identify==2.6.15 idna==3.11 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.25.1 jsonschema-specifications==2025.9.1 keystoneauth1==5.12.0 kubernetes==34.1.0 lftools==0.37.16 lxml==6.0.2 markdown-it-py==4.0.0 MarkupSafe==3.0.3 mdurl==0.1.2 msgpack==1.1.2 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.3.1 openstacksdk==4.8.0 os-service-types==1.8.2 osc-lib==4.2.0 oslo.config==10.1.0 oslo.context==6.2.0 oslo.i18n==6.7.1 oslo.log==7.2.1 oslo.serialization==5.8.0 oslo.utils==9.2.0 packaging==25.0 pbr==7.0.3 platformdirs==4.5.0 prettytable==3.17.0 psutil==7.1.3 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.23 pygerrit2==2.0.15 PyGithub==2.8.1 Pygments==2.19.2 PyJWT==2.10.1 PyNaCl==1.6.1 pyparsing==2.4.7 pyperclip==1.11.0 pyrsistent==0.20.0 python-cinderclient==9.8.0 python-dateutil==2.9.0.post0 python-heatclient==4.3.0 python-jenkins==1.8.3 python-keystoneclient==5.7.0 python-magnumclient==4.9.0 python-openstackclient==8.2.0 python-swiftclient==4.9.0 PyYAML==6.0.3 referencing==0.37.0 requests==2.32.5 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rich==14.2.0 rich-argparse==1.7.2 rpds-py==0.29.0 rsa==4.9.1 ruamel.yaml==0.18.16 ruamel.yaml.clib==0.2.15 s3transfer==0.15.0 simplejson==3.20.2 six==1.17.0 smmap==5.0.2 soupsieve==2.8 stevedore==5.6.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.15.0 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.35.4 wcwidth==0.2.14 websocket-client==1.9.0 wrapt==2.0.1 xdg==6.0.0 xmltodict==1.0.2 yq==3.4.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content OS_STACK_TEMPLATE=csit-2-instance-type.yaml OS_CLOUD=vex OS_STACK_NAME=releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554 OS_STACK_TEMPLATE_DIR=openstack-hot [EnvInject] - Variables injected successfully. provisioning config files... 
copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins2621371324859594227.sh ---> Create parameters file for OpenStack HOT OpenStack Heat parameters generated ----------------------------------- parameters: vm_0_count: '3' vm_0_flavor: 'v3-standard-4' vm_0_image: 'ZZCI - Ubuntu 22.04 - builder - x86_64 - 20250917-133034.447' vm_1_count: '1' vm_1_flavor: 'v3-standard-2' vm_1_image: 'ZZCI - Ubuntu 22.04 - mininet-ovs-217 - x86_64 - 20250917-133034.654' job_name: '36866-554' silo: 'releng' [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins9203350104182796742.sh ---> Create HEAT stack + source /home/jenkins/lf-env.sh + lf-activate-venv --python python3 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq ++ mktemp -d /tmp/venv-XXXX + lf_venv=/tmp/venv-zAFH + local venv_file=/tmp/.os_lf_venv + local python=python3 + local options + local set_path=true + local install_args= ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --python python3 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq + options=' --python '\''python3'\'' -- '\''lftools[openstack]'\'' '\''kubernetes'\'' '\''niet'\'' '\''python-heatclient'\'' '\''python-openstackclient'\'' '\''python-magnumclient'\'' '\''urllib3~=1.26.15'\'' '\''yq'\''' + eval set -- ' --python '\''python3'\'' -- '\''lftools[openstack]'\'' '\''kubernetes'\'' '\''niet'\'' '\''python-heatclient'\'' '\''python-openstackclient'\'' '\''python-magnumclient'\'' '\''urllib3~=1.26.15'\'' '\''yq'\''' ++ set -- --python python3 -- 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq + true + case $1 in + python=python3 + shift 2 + true + case $1 in + shift + break + case $python in + local pkg_list= + [[ -d /opt/pyenv ]] + echo 'Setup pyenv:' Setup pyenv: + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin + pyenv versions system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) + command -v pyenv ++ pyenv init - --no-rehash + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; fi; done; echo "${paths[*]}"'\'')" export PATH="/opt/pyenv/shims:${PATH}" export PYENV_SHELL=bash source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' pyenv() { local command command="${1:-}" if [ "$#" -gt 0 ]; then shift fi case "$command" in rehash|shell) eval "$(pyenv "sh-$command" "$@")" ;; *) command pyenv "$command" "$@" ;; esac }' +++ bash --norc -ec 'IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; fi; done; echo "${paths[*]}"' ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin ++ export 
PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin ++ export PYENV_SHELL=bash ++ PYENV_SHELL=bash ++ source /opt/pyenv/libexec/../completions/pyenv.bash +++ complete -F _pyenv pyenv ++ lf-pyver python3 ++ local py_version_xy=python3 ++ local py_version_xyz= ++ awk '{ print $1 }' ++ pyenv versions ++ local command ++ command=versions ++ '[' 1 -gt 0 ']' ++ shift ++ case "$command" in ++ command pyenv versions ++ pyenv versions ++ sed 's/^[ *]* //' ++ grep -E '^[0-9.]*[0-9]$' ++ [[ ! -s /tmp/.pyenv_versions ]] +++ grep '^3' /tmp/.pyenv_versions +++ tail -n 1 +++ sort -V ++ py_version_xyz=3.11.7 ++ [[ -z 3.11.7 ]] ++ echo 3.11.7 ++ return 0 + pyenv local 3.11.7 + local command + command=local + '[' 2 -gt 0 ']' + shift + case "$command" in + command pyenv local 3.11.7 + pyenv local 3.11.7 + for arg in "$@" + case $arg in + pkg_list+='lftools[openstack] ' + for arg in "$@" + case $arg in + pkg_list+='kubernetes ' + for arg in "$@" + case $arg in + pkg_list+='niet ' + for arg in "$@" + case $arg in + pkg_list+='python-heatclient ' + for arg in "$@" + case $arg in + pkg_list+='python-openstackclient ' + for arg in "$@" + case $arg in + pkg_list+='python-magnumclient ' + for arg in "$@" + case $arg in + pkg_list+='urllib3~=1.26.15 ' + for arg in "$@" + case $arg in + pkg_list+='yq ' + [[ -f /tmp/.os_lf_venv ]] ++ cat /tmp/.os_lf_venv + lf_venv=/tmp/venv-ptD2 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from' file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)' lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) + local 'pip_opts=--upgrade --quiet' + pip_opts='--upgrade --quiet --trusted-host pypi.org' + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org' + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org' + [[ -n '' ]] + [[ -n '' ]] + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...' lf-activate-venv(): INFO: Attempting to install with network-safe options... 
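The Heat parameters printed earlier are written to the stack-parameters.yaml that the "lftools ... stack create" call further below consumes; a hand-written equivalent is sketched here, with the file layout and key order assumed rather than taken from the actual generator, and the values copied from the log above:

# Hypothetical reconstruction of stack-parameters.yaml from the values logged above
cat > stack-parameters.yaml <<'EOF'
parameters:
  job_name: '36866-554'
  silo: 'releng'
  vm_0_count: '3'
  vm_0_flavor: 'v3-standard-4'
  vm_0_image: 'ZZCI - Ubuntu 22.04 - builder - x86_64 - 20250917-133034.447'
  vm_1_count: '1'
  vm_1_flavor: 'v3-standard-2'
  vm_1_image: 'ZZCI - Ubuntu 22.04 - mininet-ovs-217 - x86_64 - 20250917-133034.654'
EOF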
+ /tmp/venv-ptD2/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
+ echo 'lf-activate-venv(): INFO: Base packages installed successfully'
lf-activate-venv(): INFO: Base packages installed successfully
+ [[ -z lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq ]]
+ echo 'lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq '
lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq
+ /tmp/venv-ptD2/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq
+ type python3
+ true
+ echo 'lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH'
lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH
+ PATH=/tmp/venv-ptD2/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin
+ return 0
+ openstack --os-cloud vex limits show --absolute
+--------------------------+---------+
| Name                     | Value   |
+--------------------------+---------+
| maxTotalInstances        | -1      |
| maxTotalCores            | 450     |
| maxTotalRAMSize          | 1000000 |
| maxServerMeta            | 128     |
| maxImageMeta             | 128     |
| maxPersonality           | 5       |
| maxPersonalitySize       | 10240   |
| maxTotalKeypairs         | 100     |
| maxServerGroups          | 10      |
| maxServerGroupMembers    | 10      |
| maxTotalFloatingIps      | -1      |
| maxSecurityGroups        | -1      |
| maxSecurityGroupRules    | -1      |
| totalRAMUsed             | 794624  |
| totalCoresUsed           | 194     |
| totalInstancesUsed       | 74      |
| totalFloatingIpsUsed     | 0       |
| totalSecurityGroupsUsed  | 0       |
| totalServerGroupsUsed    | 0       |
| maxTotalVolumes          | -1      |
| maxTotalSnapshots        | 10      |
| maxTotalVolumeGigabytes  | 4096    |
| maxTotalBackups          | 10      |
| maxTotalBackupGigabytes  | 1000    |
| totalVolumesUsed         | 3       |
| totalGigabytesUsed       | 60      |
| totalSnapshotsUsed       | 0       |
| totalBackupsUsed         | 0       |
| totalBackupGigabytesUsed | 0       |
+--------------------------+---------+
+ pushd /opt/ciman/openstack-hot
/opt/ciman/openstack-hot /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium
+ lftools openstack --os-cloud vex stack create releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554 csit-2-instance-type.yaml /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/stack-parameters.yaml
Creating stack releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554
Waiting to initialize infrastructure...
Stack initialization successful.
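The limits dump above is what shows whether the vex tenant has headroom for the new stack; a small sketch that computes the remaining cores and RAM directly (assuming this client version emits numeric values with -f json) could be:

# Compute free cores and RAM from the absolute limits, reusing the jq already installed in the venv
openstack --os-cloud vex limits show --absolute -f json |
  jq -r 'map({(.Name): .Value}) | add
         | "cores free: \(.maxTotalCores - .totalCoresUsed), RAM MB free: \(.maxTotalRAMSize - .totalRAMUsed)"'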
------------------------------------ Stack Details ------------------------------------ {'added': None, 'capabilities': [], 'created_at': '2025-11-29T00:47:42Z', 'deleted': None, 'deleted_at': None, 'description': 'No description', 'environment': None, 'environment_files': None, 'files': None, 'files_container': None, 'id': '245d1b20-bb3f-4b85-9675-4f515707454c', 'is_rollback_disabled': True, 'links': [{'href': 'https://orchestration.public.mtl1.vexxhost.net/v1/12c36e260d8e4bb2913965203b1b491f/stacks/releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554/245d1b20-bb3f-4b85-9675-4f515707454c', 'rel': 'self'}], 'location': Munch({'cloud': 'vex', 'region_name': 'ca-ymq-1', 'zone': None, 'project': Munch({'id': '12c36e260d8e4bb2913965203b1b491f', 'name': '61975f2c-7c17-4d69-82fa-c3ae420ad6fd', 'domain_id': None, 'domain_name': 'Default'})}), 'name': 'releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554', 'notification_topics': [], 'outputs': [{'description': 'IP addresses of the 2nd vm types', 'output_key': 'vm_1_ips', 'output_value': ['10.30.171.62']}, {'description': 'IP addresses of the 1st vm types', 'output_key': 'vm_0_ips', 'output_value': ['10.30.171.180', '10.30.170.93', '10.30.171.233']}], 'owner_id': ****, 'parameters': {'OS::project_id': '12c36e260d8e4bb2913965203b1b491f', 'OS::stack_id': '245d1b20-bb3f-4b85-9675-4f515707454c', 'OS::stack_name': 'releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554', 'job_name': '36866-554', 'silo': 'releng', 'vm_0_count': '3', 'vm_0_flavor': 'v3-standard-4', 'vm_0_image': 'ZZCI - Ubuntu 22.04 - builder - x86_64 - ' '20250917-133034.447', 'vm_1_count': '1', 'vm_1_flavor': 'v3-standard-2', 'vm_1_image': 'ZZCI - Ubuntu 22.04 - mininet-ovs-217 - x86_64 ' '- 20250917-133034.654'}, 'parent_id': None, 'replaced': None, 'status': 'CREATE_COMPLETE', 'status_reason': 'Stack CREATE completed successfully', 'tags': [], 'template': None, 'template_description': 'No description', 'template_url': None, 'timeout_mins': 15, 'unchanged': None, 'updated': None, 'updated_at': None, 'user_project_id': '96e513c7880d40e8a87e4ead198c7915'} ------------------------------------ + popd /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins16524904268539326747.sh ---> Copy SSH public keys to CSIT lab Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes python-heatclient python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH SSH not responding on 10.30.170.93. Retrying in 10 seconds... Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. releng-36866-554-0-builder-2 Successfully copied public keys to slave 10.30.171.233 releng-36866-554-0-builder-0 Successfully copied public keys to slave 10.30.171.180 Process 6524 ready. Warning: Permanently added '10.30.171.62' (ECDSA) to the list of known hosts. 
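The "SSH not responding ... Retrying in 10 seconds" message above comes from the key-copy step polling each freshly booted instance. A stripped-down version of that wait loop (an illustrative sketch, not the actual releng script; the jenkins user is an assumption) looks like:

# Poll a node until sshd answers, then push the public key (illustrative only)
wait_for_ssh() {
    local ip="$1"
    for _ in $(seq 1 30); do
        if ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 "jenkins@${ip}" true; then
            echo "SSH ready on ${ip}."
            return 0
        fi
        echo "SSH not responding on ${ip}. Retrying in 10 seconds..."
        sleep 10
    done
    return 1
}
wait_for_ssh 10.30.170.93 && ssh-copy-id "jenkins@10.30.170.93"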
releng-36866-554-1-mininet-ovs-217-0 Successfully copied public keys to slave 10.30.171.62 Ping to 10.30.170.93 successful. Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. releng-36866-554-0-builder-1 Successfully copied public keys to slave 10.30.170.93 Process 6525 ready. Process 6526 ready. Process 6527 ready. SSH ready on all stack servers. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins5912982083520460239.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-xrWY lf-activate-venv(): INFO: Save venv in file: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.robot_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: setuptools wheel lf-activate-venv(): INFO: Adding /tmp/venv-xrWY/bin to PATH + echo 'Installing Python Requirements' Installing Python Requirements + cat + python -m pip install -r requirements.txt Looking in indexes: https://nexus3.opendaylight.org/repository/PyPi/simple Collecting docker-py (from -r requirements.txt (line 1)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/docker-py/1.10.6/docker_py-1.10.6-py2.py3-none-any.whl (50 kB) Collecting ipaddr (from -r requirements.txt (line 2)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/ipaddr/2.2.0/ipaddr-2.2.0.tar.gz (26 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting netaddr (from -r requirements.txt (line 3)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/netaddr/1.3.0/netaddr-1.3.0-py3-none-any.whl (2.3 MB) Collecting netifaces (from -r requirements.txt (line 4)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/netifaces/0.11.0/netifaces-0.11.0.tar.gz (30 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting pyhocon (from -r requirements.txt (line 5)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyhocon/0.3.61/pyhocon-0.3.61-py3-none-any.whl (25 kB) Collecting requests (from -r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/requests/2.32.5/requests-2.32.5-py3-none-any.whl (64 kB) Collecting robotframework (from -r requirements.txt (line 7)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework/7.3.2/robotframework-7.3.2-py3-none-any.whl (795 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 795.1/795.1 kB 34.3 MB/s 0:00:00 Collecting robotframework-httplibrary (from -r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-httplibrary/0.4.2/robotframework-httplibrary-0.4.2.tar.gz 
(9.1 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting robotframework-requests==0.9.7 (from -r requirements.txt (line 9)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-requests/0.9.7/robotframework_requests-0.9.7-py3-none-any.whl (21 kB) Collecting robotframework-selenium2library (from -r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-selenium2library/3.0.0/robotframework_selenium2library-3.0.0-py2.py3-none-any.whl (6.2 kB) Collecting robotframework-sshlibrary==3.8.0 (from -r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-sshlibrary/3.8.0/robotframework-sshlibrary-3.8.0.tar.gz (51 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting scapy (from -r requirements.txt (line 12)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/scapy/2.6.1/scapy-2.6.1-py3-none-any.whl (2.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 41.3 MB/s 0:00:00 Collecting jsonpath-rw (from -r requirements.txt (line 15)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpath-rw/1.4.0/jsonpath-rw-1.4.0.tar.gz (13 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting elasticsearch (from -r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch/9.2.0/elasticsearch-9.2.0-py3-none-any.whl (960 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 960.5/960.5 kB 35.1 MB/s 0:00:00 Collecting elasticsearch-dsl (from -r requirements.txt (line 19)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.18.0/elasticsearch_dsl-8.18.0-py3-none-any.whl (10 kB) Collecting pyangbind (from -r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyangbind/0.8.6/pyangbind-0.8.6-py3-none-any.whl (52 kB) Collecting isodate (from -r requirements.txt (line 25)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/isodate/0.7.2/isodate-0.7.2-py3-none-any.whl (22 kB) Collecting jmespath (from -r requirements.txt (line 28)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jmespath/1.0.1/jmespath-1.0.1-py3-none-any.whl (20 kB) Collecting jsonpatch (from -r requirements.txt (line 31)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpatch/1.33/jsonpatch-1.33-py2.py3-none-any.whl (12 kB) Collecting paramiko>=1.15.3 (from robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/paramiko/4.0.0/paramiko-4.0.0-py3-none-any.whl (223 kB) Collecting 
scp>=0.13.0 (from robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/scp/0.15.0/scp-0.15.0-py2.py3-none-any.whl (8.8 kB) Collecting docker-pycreds>=0.2.1 (from docker-py->-r requirements.txt (line 1)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/docker-pycreds/0.4.0/docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB) Collecting six>=1.4.0 (from docker-py->-r requirements.txt (line 1)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/six/1.17.0/six-1.17.0-py2.py3-none-any.whl (11 kB) Collecting websocket-client>=0.32.0 (from docker-py->-r requirements.txt (line 1)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/websocket-client/1.9.0/websocket_client-1.9.0-py3-none-any.whl (82 kB) Collecting pyparsing<4,>=2 (from pyhocon->-r requirements.txt (line 5)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pyparsing/3.2.5/pyparsing-3.2.5-py3-none-any.whl (113 kB) Collecting charset_normalizer<4,>=2 (from requests->-r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/charset-normalizer/3.4.4/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (151 kB) Collecting idna<4,>=2.5 (from requests->-r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/idna/3.11/idna-3.11-py3-none-any.whl (71 kB) Collecting urllib3<3,>=1.21.1 (from requests->-r requirements.txt (line 6)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/urllib3/2.5.0/urllib3-2.5.0-py3-none-any.whl (129 kB) Collecting certifi>=2017.4.17 (from requests->-r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/certifi/2025.11.12/certifi-2025.11.12-py3-none-any.whl (159 kB) Collecting webtest>=2.0 (from robotframework-httplibrary->-r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/webtest/3.0.7/webtest-3.0.7-py3-none-any.whl (32 kB) Collecting jsonpointer (from robotframework-httplibrary->-r requirements.txt (line 8)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpointer/3.0.0/jsonpointer-3.0.0-py2.py3-none-any.whl (7.6 kB) Collecting robotframework-seleniumlibrary>=3.0.0 (from robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-seleniumlibrary/6.8.0/robotframework_seleniumlibrary-6.8.0-py3-none-any.whl (104 kB) Collecting ply (from jsonpath-rw->-r requirements.txt (line 15)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/ply/3.11/ply-3.11-py2.py3-none-any.whl (49 kB) Collecting decorator (from jsonpath-rw->-r requirements.txt (line 15)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/decorator/5.2.1/decorator-5.2.1-py3-none-any.whl (9.2 kB) Collecting anyio (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/anyio/4.12.0/anyio-4.12.0-py3-none-any.whl (113 kB) Collecting elastic-transport<10,>=9.2.0 (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elastic-transport/9.2.0/elastic_transport-9.2.0-py3-none-any.whl (65 kB) Collecting python-dateutil (from elasticsearch->-r requirements.txt (line 18)) Using cached 
https://nexus3.opendaylight.org/repository/PyPi/packages/python-dateutil/2.9.0.post0/python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB) Collecting sniffio (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/sniffio/1.3.1/sniffio-1.3.1-py3-none-any.whl (10 kB) Collecting typing-extensions (from elasticsearch->-r requirements.txt (line 18)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/typing-extensions/4.15.0/typing_extensions-4.15.0-py3-none-any.whl (44 kB) Collecting elasticsearch (from -r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch/8.19.2/elasticsearch-8.19.2-py3-none-any.whl (949 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 949.7/949.7 kB 37.6 MB/s 0:00:00 INFO: pip is looking at multiple versions of elasticsearch-dsl to determine which version is compatible with other requirements. This could take a while. Collecting elasticsearch-dsl (from -r requirements.txt (line 19)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.17.1/elasticsearch_dsl-8.17.1-py3-none-any.whl (158 kB) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.17.0/elasticsearch_dsl-8.17.0-py3-none-any.whl (158 kB) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.16.0/elasticsearch_dsl-8.16.0-py3-none-any.whl (158 kB) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.15.4/elasticsearch_dsl-8.15.4-py3-none-any.whl (104 kB) Collecting elastic-transport<9,>=8.15.1 (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elastic-transport/8.17.1/elastic_transport-8.17.1-py3-none-any.whl (64 kB) Collecting pyang (from pyangbind->-r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyang/2.7.1/pyang-2.7.1-py2.py3-none-any.whl (598 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 598.5/598.5 kB 13.9 MB/s 0:00:00 Collecting lxml (from pyangbind->-r requirements.txt (line 22)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/lxml/6.0.2/lxml-6.0.2-cp311-cp311-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl (5.2 MB) Collecting regex (from pyangbind->-r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/regex/2025.11.3/regex-2025.11.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (800 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 800.4/800.4 kB 24.3 MB/s 0:00:00 Collecting enum34 (from pyangbind->-r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/enum34/1.1.10/enum34-1.1.10-py3-none-any.whl (11 kB) Collecting bcrypt>=3.2 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/bcrypt/5.0.0/bcrypt-5.0.0-cp39-abi3-manylinux_2_28_x86_64.whl (278 kB) Collecting cryptography>=3.3 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/cryptography/46.0.3/cryptography-46.0.3-cp311-abi3-manylinux_2_28_x86_64.whl (4.5 MB) Collecting invoke>=2.0 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading 
https://nexus3.opendaylight.org/repository/PyPi/packages/invoke/2.2.1/invoke-2.2.1-py3-none-any.whl (160 kB) Collecting pynacl>=1.5 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pynacl/1.6.1/pynacl-1.6.1-cp38-abi3-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl (1.4 MB) Collecting cffi>=2.0.0 (from cryptography>=3.3->paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/cffi/2.0.0/cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (215 kB) Collecting pycparser (from cffi>=2.0.0->cryptography>=3.3->paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pycparser/2.23/pycparser-2.23-py3-none-any.whl (118 kB) Collecting selenium>=4.3.0 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/selenium/4.38.0/selenium-4.38.0-py3-none-any.whl (9.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 60.8 MB/s 0:00:00 Collecting robotframework-pythonlibcore>=4.4.1 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-pythonlibcore/4.4.1/robotframework_pythonlibcore-4.4.1-py2.py3-none-any.whl (12 kB) Collecting click>=8.0 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/click/8.3.1/click-8.3.1-py3-none-any.whl (108 kB) Collecting trio<1.0,>=0.31.0 (from selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/trio/0.32.0/trio-0.32.0-py3-none-any.whl (512 kB) Collecting trio-websocket<1.0,>=0.12.2 (from selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/trio-websocket/0.12.2/trio_websocket-0.12.2-py3-none-any.whl (21 kB) Collecting attrs>=23.2.0 (from trio<1.0,>=0.31.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/attrs/25.4.0/attrs-25.4.0-py3-none-any.whl (67 kB) Collecting sortedcontainers (from trio<1.0,>=0.31.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/sortedcontainers/2.4.0/sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB) Collecting outcome (from trio<1.0,>=0.31.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/outcome/1.3.0.post0/outcome-1.3.0.post0-py2.py3-none-any.whl (10 kB) Collecting wsproto>=0.14 (from trio-websocket<1.0,>=0.12.2->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading 
https://nexus3.opendaylight.org/repository/PyPi/packages/wsproto/1.3.2/wsproto-1.3.2-py3-none-any.whl (24 kB) Collecting pysocks!=1.5.7,<2.0,>=1.5.6 (from urllib3[socks]<3.0,>=2.5.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pysocks/1.7.1/PySocks-1.7.1-py3-none-any.whl (16 kB) Collecting WebOb>=1.2 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/webob/1.8.9/WebOb-1.8.9-py2.py3-none-any.whl (115 kB) Collecting waitress>=3.0.2 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/waitress/3.0.2/waitress-3.0.2-py3-none-any.whl (56 kB) Collecting beautifulsoup4 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/beautifulsoup4/4.14.2/beautifulsoup4-4.14.2-py3-none-any.whl (106 kB) Collecting h11<1,>=0.16.0 (from wsproto>=0.14->trio-websocket<1.0,>=0.12.2->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/h11/0.16.0/h11-0.16.0-py3-none-any.whl (37 kB) Collecting soupsieve>1.2 (from beautifulsoup4->webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/soupsieve/2.8/soupsieve-2.8-py3-none-any.whl (36 kB) Building wheels for collected packages: robotframework-sshlibrary, ipaddr, netifaces, robotframework-httplibrary, jsonpath-rw Building wheel for robotframework-sshlibrary (pyproject.toml): started Building wheel for robotframework-sshlibrary (pyproject.toml): finished with status 'done' Created wheel for robotframework-sshlibrary: filename=robotframework_sshlibrary-3.8.0-py3-none-any.whl size=55205 sha256=bde9cabbacc7dec742fe0be8b379e0ade4654bbb9c60c7b6517e2786b33cf52c Stored in directory: /home/jenkins/.cache/pip/wheels/f7/c9/b3/a977b7bcc410d45ae27d240df3d00a12585509180e373ecccc Building wheel for ipaddr (pyproject.toml): started Building wheel for ipaddr (pyproject.toml): finished with status 'done' Created wheel for ipaddr: filename=ipaddr-2.2.0-py3-none-any.whl size=18353 sha256=d1bc9870fa23202b0b98d1e1ad68c206b33b84b490678500276d605c58f492a0 Stored in directory: /home/jenkins/.cache/pip/wheels/dc/6c/04/da2d847fa8d45c59af3e1d83e2acc29cb8adcbaf04c0898dbf Building wheel for netifaces (pyproject.toml): started Building wheel for netifaces (pyproject.toml): finished with status 'done' Created wheel for netifaces: filename=netifaces-0.11.0-cp311-cp311-linux_x86_64.whl size=41067 sha256=237d956e9a8a9503f3867503cc4c6e8b53088ab05c748a33c0103c99eaa0674d Stored in directory: /home/jenkins/.cache/pip/wheels/f8/18/88/e61d54b995bea304bdb1d040a92b72228a1bf72ca2a3eba7c9 Building wheel for robotframework-httplibrary (pyproject.toml): started Building wheel for robotframework-httplibrary (pyproject.toml): finished with status 'done' Created wheel for robotframework-httplibrary: filename=robotframework_httplibrary-0.4.2-py3-none-any.whl size=10014 sha256=76184d964ebb4c108c926181081becc92da15e0891285861d288ccbff869cafd Stored in directory: /home/jenkins/.cache/pip/wheels/aa/bc/0d/9a20dd51effef392aae2733cb4c7b66c6fa29fca33d88b57ed Building wheel for jsonpath-rw 
(pyproject.toml): started Building wheel for jsonpath-rw (pyproject.toml): finished with status 'done' Created wheel for jsonpath-rw: filename=jsonpath_rw-1.4.0-py3-none-any.whl size=15176 sha256=694e4465a05f64c68a151a196ab840aec9a2f53c3aa49274caa3bdcd31f49479 Stored in directory: /home/jenkins/.cache/pip/wheels/f1/54/63/9a8da38cefae13755097b36cc852decc25d8ef69c37d58d4eb Successfully built robotframework-sshlibrary ipaddr netifaces robotframework-httplibrary jsonpath-rw Installing collected packages: sortedcontainers, ply, netifaces, ipaddr, enum34, websocket-client, WebOb, waitress, urllib3, typing-extensions, soupsieve, sniffio, six, scapy, robotframework-pythonlibcore, robotframework, regex, pysocks, pyparsing, pycparser, netaddr, lxml, jsonpointer, jmespath, isodate, invoke, idna, h11, decorator, click, charset_normalizer, certifi, bcrypt, attrs, wsproto, requests, python-dateutil, pyhocon, pyang, outcome, jsonpath-rw, jsonpatch, elastic-transport, docker-pycreds, cffi, beautifulsoup4, webtest, trio, robotframework-requests, pynacl, pyangbind, elasticsearch, docker-py, cryptography, trio-websocket, robotframework-httplibrary, paramiko, elasticsearch-dsl, selenium, scp, robotframework-sshlibrary, robotframework-seleniumlibrary, robotframework-selenium2library Successfully installed WebOb-1.8.9 attrs-25.4.0 bcrypt-5.0.0 beautifulsoup4-4.14.2 certifi-2025.11.12 cffi-2.0.0 charset_normalizer-3.4.4 click-8.3.1 cryptography-46.0.3 decorator-5.2.1 docker-py-1.10.6 docker-pycreds-0.4.0 elastic-transport-8.17.1 elasticsearch-8.19.2 elasticsearch-dsl-8.15.4 enum34-1.1.10 h11-0.16.0 idna-3.11 invoke-2.2.1 ipaddr-2.2.0 isodate-0.7.2 jmespath-1.0.1 jsonpatch-1.33 jsonpath-rw-1.4.0 jsonpointer-3.0.0 lxml-6.0.2 netaddr-1.3.0 netifaces-0.11.0 outcome-1.3.0.post0 paramiko-4.0.0 ply-3.11 pyang-2.7.1 pyangbind-0.8.6 pycparser-2.23 pyhocon-0.3.61 pynacl-1.6.1 pyparsing-3.2.5 pysocks-1.7.1 python-dateutil-2.9.0.post0 regex-2025.11.3 requests-2.32.5 robotframework-7.3.2 robotframework-httplibrary-0.4.2 robotframework-pythonlibcore-4.4.1 robotframework-requests-0.9.7 robotframework-selenium2library-3.0.0 robotframework-seleniumlibrary-6.8.0 robotframework-sshlibrary-3.8.0 scapy-2.6.1 scp-0.15.0 selenium-4.38.0 six-1.17.0 sniffio-1.3.1 sortedcontainers-2.4.0 soupsieve-2.8 trio-0.32.0 trio-websocket-0.12.2 typing-extensions-4.15.0 urllib3-2.5.0 waitress-3.0.2 websocket-client-1.9.0 webtest-3.0.7 wsproto-1.3.2 + pip freeze attrs==25.4.0 bcrypt==5.0.0 beautifulsoup4==4.14.2 certifi==2025.11.12 cffi==2.0.0 charset-normalizer==3.4.4 click==8.3.1 cryptography==46.0.3 decorator==5.2.1 distlib==0.4.0 docker-py==1.10.6 docker-pycreds==0.4.0 elastic-transport==8.17.1 elasticsearch==8.19.2 elasticsearch-dsl==8.15.4 enum34==1.1.10 filelock==3.20.0 h11==0.16.0 idna==3.11 invoke==2.2.1 ipaddr==2.2.0 isodate==0.7.2 jmespath==1.0.1 jsonpatch==1.33 jsonpath-rw==1.4.0 jsonpointer==3.0.0 lxml==6.0.2 netaddr==1.3.0 netifaces==0.11.0 outcome==1.3.0.post0 paramiko==4.0.0 platformdirs==4.5.0 ply==3.11 pyang==2.7.1 pyangbind==0.8.6 pycparser==2.23 pyhocon==0.3.61 PyNaCl==1.6.1 pyparsing==3.2.5 PySocks==1.7.1 python-dateutil==2.9.0.post0 regex==2025.11.3 requests==2.32.5 robotframework==7.3.2 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==4.4.1 robotframework-requests==0.9.7 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==6.8.0 robotframework-sshlibrary==3.8.0 scapy==2.6.1 scp==0.15.0 selenium==4.38.0 six==1.17.0 sniffio==1.3.1 sortedcontainers==2.4.0 soupsieve==2.8 trio==0.32.0 
trio-websocket==0.12.2 typing_extensions==4.15.0 urllib3==2.5.0 virtualenv==20.35.4 waitress==3.0.2 WebOb==1.8.9 websocket-client==1.9.0 WebTest==3.0.7 wsproto==1.3.2 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'env.properties' [EnvInject] - Variables injected successfully. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins3926543419744609192.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: python-heatclient python-openstackclient yq ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. kubernetes 34.1.0 requires urllib3<2.4.0,>=1.24.2, but you have urllib3 2.5.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH + ODL_SYSTEM=() + TOOLS_SYSTEM=() + OPENSTACK_SYSTEM=() + OPENSTACK_CONTROLLERS=() + mapfile -t ADDR ++ openstack stack show -f json -c outputs releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554 ++ jq -r '.outputs[] | select(.output_key | match("^vm_[0-9]+_ips$")) | .output_value | .[]' + for i in "${ADDR[@]}" ++ ssh 10.30.171.62 hostname -s Warning: Permanently added '10.30.171.62' (ECDSA) to the list of known hosts. + REMHOST=releng-36866-554-1-mininet-ovs-217-0 + case ${REMHOST} in + TOOLS_SYSTEM=("${TOOLS_SYSTEM[@]}" "${i}") + for i in "${ADDR[@]}" ++ ssh 10.30.171.180 hostname -s Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + REMHOST=releng-36866-554-0-builder-0 + case ${REMHOST} in + ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}") + for i in "${ADDR[@]}" ++ ssh 10.30.170.93 hostname -s Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + REMHOST=releng-36866-554-0-builder-1 + case ${REMHOST} in + ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}") + for i in "${ADDR[@]}" ++ ssh 10.30.171.233 hostname -s Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. 
+ REMHOST=releng-36866-554-0-builder-2 + case ${REMHOST} in + ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}") + echo NUM_ODL_SYSTEM=3 + echo NUM_TOOLS_SYSTEM=1 + '[' '' == yes ']' + NUM_OPENSTACK_SYSTEM=0 + echo NUM_OPENSTACK_SYSTEM=0 + '[' 0 -eq 2 ']' + echo ODL_SYSTEM_IP=10.30.171.180 ++ seq 0 2 + for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 ))) + echo ODL_SYSTEM_1_IP=10.30.171.180 + for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 ))) + echo ODL_SYSTEM_2_IP=10.30.170.93 + for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 ))) + echo ODL_SYSTEM_3_IP=10.30.171.233 + echo TOOLS_SYSTEM_IP=10.30.171.62 ++ seq 0 0 + for i in $(seq 0 $(( ${#TOOLS_SYSTEM[@]} - 1 ))) + echo TOOLS_SYSTEM_1_IP=10.30.171.62 + openstack_index=0 + NUM_OPENSTACK_CONTROL_NODES=1 + echo NUM_OPENSTACK_CONTROL_NODES=1 ++ seq 0 0 + for i in $(seq 0 $((NUM_OPENSTACK_CONTROL_NODES - 1))) + echo OPENSTACK_CONTROL_NODE_1_IP= + NUM_OPENSTACK_COMPUTE_NODES=-1 + echo NUM_OPENSTACK_COMPUTE_NODES=-1 + '[' -1 -ge 2 ']' ++ seq 0 -2 + NUM_OPENSTACK_HAPROXY_NODES=0 + echo NUM_OPENSTACK_HAPROXY_NODES=0 ++ seq 0 -1 + echo 'Contents of slave_addresses.txt:' Contents of slave_addresses.txt: + cat slave_addresses.txt NUM_ODL_SYSTEM=3 NUM_TOOLS_SYSTEM=1 NUM_OPENSTACK_SYSTEM=0 ODL_SYSTEM_IP=10.30.171.180 ODL_SYSTEM_1_IP=10.30.171.180 ODL_SYSTEM_2_IP=10.30.170.93 ODL_SYSTEM_3_IP=10.30.171.233 TOOLS_SYSTEM_IP=10.30.171.62 TOOLS_SYSTEM_1_IP=10.30.171.62 NUM_OPENSTACK_CONTROL_NODES=1 OPENSTACK_CONTROL_NODE_1_IP= NUM_OPENSTACK_COMPUTE_NODES=-1 NUM_OPENSTACK_HAPROXY_NODES=0 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'slave_addresses.txt' [EnvInject] - Variables injected successfully. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/sh /tmp/jenkins14675506042687360986.sh Preparing for JRE Version 21 Karaf artifact is karaf Karaf project is integration Java home is /usr/lib/jvm/java-21-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'set_variables.env' [EnvInject] - Variables injected successfully. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins6440862016034832486.sh Distribution bundle URL is https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip Distribution bundle is karaf-0.23.0.zip Distribution bundle version is 0.23.0 Distribution folder is karaf-0.23.0 Nexus prefix is https://nexus.opendaylight.org [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'detect_variables.env' [EnvInject] - Variables injected successfully. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins1603374521258410681.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. 
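Condensed, the slave_addresses.txt derivation traced earlier in this step boils down to: read the stack's vm_*_ips outputs, ask each node for its short hostname, and classify it by name. A compact sketch of the same flow (simplified, not the exact job script) is:

# Classify stack IPs into ODL vs TOOLS nodes and emit slave_addresses.txt
STACK=releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554
mapfile -t ADDR < <(openstack stack show -f json -c outputs "${STACK}" |
  jq -r '.outputs[] | select(.output_key | match("^vm_[0-9]+_ips$")) | .output_value | .[]')
ODL_SYSTEM=(); TOOLS_SYSTEM=()
for ip in "${ADDR[@]}"; do
    # builder hosts run the controller, everything else is a tools node (simplified match)
    case "$(ssh "${ip}" hostname -s)" in
        *builder*) ODL_SYSTEM+=("${ip}") ;;
        *)         TOOLS_SYSTEM+=("${ip}") ;;
    esac
done
{
    echo "NUM_ODL_SYSTEM=${#ODL_SYSTEM[@]}"
    echo "NUM_TOOLS_SYSTEM=${#TOOLS_SYSTEM[@]}"
    for i in "${!ODL_SYSTEM[@]}"; do echo "ODL_SYSTEM_$((i + 1))_IP=${ODL_SYSTEM[$i]}"; done
    for i in "${!TOOLS_SYSTEM[@]}"; do echo "TOOLS_SYSTEM_$((i + 1))_IP=${TOOLS_SYSTEM[$i]}"; done
} > slave_addresses.txt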
lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: python-heatclient python-openstackclient ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH Copying common-functions.sh to /tmp Copying common-functions.sh to 10.30.171.62:/tmp Warning: Permanently added '10.30.171.62' (ECDSA) to the list of known hosts. Copying common-functions.sh to 10.30.171.180:/tmp Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. Copying common-functions.sh to 10.30.170.93:/tmp Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. Copying common-functions.sh to 10.30.171.233:/tmp Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins17918852151375935237.sh common-functions.sh is being sourced common-functions environment: MAVENCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.23.0/etc/custom.properties LOGCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.23.0/bin/setenv CONTROLLERMEM: 2048m AKKACONF: /tmp/karaf-0.23.0/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.23.0/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.23.0/configuration/initial/module-shards.conf SUITES: ################################################# ## Configure Cluster and Start ## ################################################# ACTUALFEATURES: odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest SPACE_SEPARATED_FEATURES: odl-infrautils-ready odl-jolokia odl-ovsdb-southbound-impl-rest Locating script plan to use... Finished running script plans Configuring member-1 with IP address 10.30.171.180 Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. 
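Earlier in this step, common-functions.sh was pushed to every node in the stack; a minimal equivalent of that copy loop (node list hard-coded here for illustration) would be:

# Distribute common-functions.sh to /tmp on every stack node
for ip in 10.30.171.62 10.30.171.180 10.30.170.93 10.30.171.233; do
    echo "Copying common-functions.sh to ${ip}:/tmp"
    scp -o StrictHostKeyChecking=no /tmp/common-functions.sh "${ip}:/tmp/common-functions.sh"
done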
+ source /tmp/common-functions.sh karaf-0.23.0 titanium ++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]] common-functions.sh is being sourced ++ echo 'common-functions.sh is being sourced' ++ BUNDLEFOLDER=karaf-0.23.0 ++ DISTROSTREAM=titanium ++ export MAVENCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ++ MAVENCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ++ export FEATURESCONF=/tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ++ FEATURESCONF=/tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ++ export CUSTOMPROP=/tmp/karaf-0.23.0/etc/custom.properties ++ CUSTOMPROP=/tmp/karaf-0.23.0/etc/custom.properties ++ export LOGCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ++ LOGCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ++ export MEMCONF=/tmp/karaf-0.23.0/bin/setenv ++ MEMCONF=/tmp/karaf-0.23.0/bin/setenv ++ export CONTROLLERMEM= ++ CONTROLLERMEM= ++ case "${DISTROSTREAM}" in ++ CLUSTER_SYSTEM=pekko ++ export AKKACONF=/tmp/karaf-0.23.0/configuration/initial/pekko.conf ++ AKKACONF=/tmp/karaf-0.23.0/configuration/initial/pekko.conf ++ export MODULESCONF=/tmp/karaf-0.23.0/configuration/initial/modules.conf ++ MODULESCONF=/tmp/karaf-0.23.0/configuration/initial/modules.conf ++ export MODULESHARDSCONF=/tmp/karaf-0.23.0/configuration/initial/module-shards.conf ++ MODULESHARDSCONF=/tmp/karaf-0.23.0/configuration/initial/module-shards.conf ++ print_common_env ++ cat common-functions environment: MAVENCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.23.0/etc/custom.properties LOGCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.23.0/bin/setenv CONTROLLERMEM: AKKACONF: /tmp/karaf-0.23.0/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.23.0/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.23.0/configuration/initial/module-shards.conf SUITES: ++ SSH='ssh -t -t' ++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service ' ++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service ' Changing to /tmp Downloading the distribution from https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip + echo 'Changing to /tmp' + cd /tmp + echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip' + wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip --2025-11-29 00:49:50-- https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6 Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 227740074 (217M) [application/zip] Saving to: ‘karaf-0.23.0.zip’ 0K ........ ........ ........ ........ ........ ........ 1% 20.2M 11s 3072K ........ ........ ........ ........ ........ ........ 2% 26.8M 9s 6144K ........ ........ ........ ........ ........ ........ 4% 24.9M 9s 9216K ........ ........ ........ 
........ 71% 265M 0s 159744K ........ ........ ........ ........ ........ ........ 73% 277M 0s 162816K ........ ........ ........ ........ ........ ........ 74% 285M 0s 165888K ........ ........ ........ ........ ........ ........ 75% 269M 0s 168960K ........ ........ ........ ........ ........ ........ 77% 279M 0s 172032K ........ ........ ........ ........ ........ ........ 78% 315M 0s 175104K ........ ........ ........ ........ ........ ........ 80% 318M 0s 178176K ........ ........ ........ ........ ........ ........ 81% 306M 0s 181248K ........ ........ ........ ........ ........ ........ 82% 301M 0s 184320K ........ ........ ........ ........ ........ ........ 84% 280M 0s 187392K ........ ........ ........ ........ ........ ........ 85% 305M 0s 190464K ........ ........ ........ ........ ........ ........ 87% 322M 0s 193536K ........ ........ ........ ........ ........ ........ 88% 329M 0s 196608K ........ ........ ........ ........ ........ ........ 89% 298M 0s 199680K ........ ........ ........ ........ ........ ........ 91% 319M 0s 202752K ........ ........ ........ ........ ........ ........ 92% 319M 0s 205824K ........ ........ ........ ........ ........ ........ 93% 357M 0s 208896K ........ ........ ........ ........ ........ ........ 95% 410M 0s 211968K ........ ........ ........ .Extracting the new controller... ....... ........ ........ 96% 420M 0s 215040K ........ ........ ........ ........ ........ ........ 98% 415M 0s 218112K ........ ........ ........ ........ ........ ........ 99% 118M 0s 221184K ........ ........ ... 100% 335M=1.4s 2025-11-29 00:49:51 (152 MB/s) - ‘karaf-0.23.0.zip’ saved [227740074/227740074] + echo 'Extracting the new controller...' + unzip -q karaf-0.23.0.zip Adding external repositories... + echo 'Adding external repositories...' + sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg + cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # If set to true, the following property will not allow any certificate to be used # when accessing Maven repositories through SSL # #org.ops4j.pax.url.mvn.certificateCheck= # # Path to the local Maven settings file. # The repositories defined in this file will be automatically added to the list # of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property # below is not set. # The following locations are checked for the existence of the settings.xml file # * 1. looks for the specified url # * 2. if not found looks for ${user.home}/.m2/settings.xml # * 3. if not found looks for ${maven.home}/conf/settings.xml # * 4. if not found looks for ${M2_HOME}/conf/settings.xml # #org.ops4j.pax.url.mvn.settings= # # Path to the local Maven repository which is used to avoid downloading # artifacts when they already exist locally. # The value of this property will be extracted from the settings.xml file # above, or defaulted to: # System.getProperty( "user.home" ) + "/.m2/repository" # org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository} # # Default this to false. It's just weird to use undocumented repos # org.ops4j.pax.url.mvn.useFallbackRepositories=false # # Uncomment if you don't wanna use the proxy settings # from the Maven conf/settings.xml file # # org.ops4j.pax.url.mvn.proxySupport=false # # Comma separated list of repositories scanned when resolving an artifact. # Those repositories will be checked before iterating through the # below list of repositories and even before the local repository # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snaphots # @noreleases : the repository does not contain any released artifacts # # The following property value will add the system folder as a repo. # org.ops4j.pax.url.mvn.defaultRepositories=\ file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\ file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\ file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots # Use the default local repo (e.g.~/.m2/repository) as a "remote" repo #org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false # # Comma separated list of repositories scanned when resolving an artifact. # The default list includes the following repositories: # http://repo1.maven.org/maven2@id=central # http://repository.springsource.com/maven/bundles/release@id=spring.ebr # http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external # http://zodiac.springsource.com/maven/bundles/release@id=gemini # http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases # https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases # https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases # To add repositories to the default ones, prepend '+' to the list of repositories # to add. 
# A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snapshots # @noreleases : the repository does not contain any released artifacts # @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended # org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases ### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.Configuring the startup features... + [[ True == \T\r\u\e ]] + echo 'Configuring the startup features...' + sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg + FEATURE_TEST_STRING=features-test + FEATURE_TEST_VERSION=0.23.0 + KARAF_VERSION=karaf4 + [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]] + sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.23.0/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg + [[ ! -z '' ]] + cat /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# ################################################################################ # # Comma separated list of features repositories to register by default # featuresRepositories = mvn:org.opendaylight.integration/features-test/0.23.0/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/222c0a9b-2692-4a84-811a-d59fc80dde67.xml # # Comma separated list of features to install at startup # featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest, 81b0da90-bdd3-4e19-bf9f-47cb7f839689 # # Resource repositories (OBR) that the features resolver can use # to resolve requirements/capabilities # # The format of the resourceRepositories is # resourceRepositories=[xml:url|json:url],... # for Instance: # #resourceRepositories=xml:http://host/path/to/index.xml # or #resourceRepositories=json:http://host/path/to/index.json # # # Defines if the boot features are started in asynchronous mode (in a dedicated thread) # featuresBootAsynchronous=false # # Service requirements enforcement # # By default, the feature resolver checks the service requirements/capabilities of # bundles for new features (xml schema >= 1.3.0) in order to automatically installs # the required bundles. # The following flag can have those values: # - disable: service requirements are completely ignored # - default: service requirements are ignored for old features # - enforce: service requirements are always verified # #serviceRequirements=default # # Store cfg file for config element in feature # #configCfgStore=true # # Define if the feature service automatically refresh bundles # autoRefresh=true # # Configuration of features processing mechanism (overrides, blacklisting, modification of features) # XML file defines instructions related to features processing # versions.properties may declare properties to resolve placeholders in XML file # both files are relative to ${karaf.etc} # #featureProcessing=org.apache.karaf.features.xml #featureProcessingVersions=versions.properties + configure_karaf_log karaf4 '' + local -r karaf_version=karaf4 + local -r controllerdebugmap= + local logapi=log4j + grep log4j2 /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n log4j2.rootLogger.level = INFO #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO 
log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 64MB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE Configuring the karaf log... karaf_version: karaf4, logapi: log4j2 + logapi=log4j2 + echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2' + '[' log4j2 == log4j2 ']' + sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg + orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver + orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN' + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN' controllerdebugmap: cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg + unset IFS + echo 'controllerdebugmap: ' + '[' -n '' ']' + echo 'cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg' + cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. 
You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # Common pattern layout for appenders log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n # Root logger log4j2.rootLogger.level = INFO # uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} # Filters for logs marked by org.opendaylight.odlparent.Markers log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL # Loggers configuration # Spifly logger log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN # Security audit logger log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Appenders configuration # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} # Rolling file appender log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i # uncomment to not force a disk flush #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 1GB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 # Audit file appender log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true 
log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 # OSGi appender log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * # help with identification of maven-related problems with pax-url-aether #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN + set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 2048m /tmp/karaf-0.23.0/bin/setenv + local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64 + local -r controllermem=2048m Configure + local -r memconf=/tmp/karaf-0.23.0/bin/setenv + echo Configure java home: /usr/lib/jvm/java-21-openjdk-amd64 max memory: 2048m + echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64' + echo ' max memory: 2048m' memconf: /tmp/karaf-0.23.0/bin/setenv + echo ' memconf: /tmp/karaf-0.23.0/bin/setenv' + sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.23.0/bin/setenv + sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=2048m/g' /tmp/karaf-0.23.0/bin/setenv cat /tmp/karaf-0.23.0/bin/setenv + echo 'cat /tmp/karaf-0.23.0/bin/setenv' + cat /tmp/karaf-0.23.0/bin/setenv #!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf # script: client, instance, shell, start, status, stop, karaf # # if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then # Actions go here... # fi # # general settings which should be applied for all scripts go here; please keep # in mind that it is possible that scripts might be executed more than once, e.g. # in example of the start script where the start script is executed first and the # karaf script afterwards. 
# # # The following section shows the possible configuration options for the default # karaf scripts # export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation # export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration # export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options # export EXTRA_JAVA_OPTS # Additional JVM options # export KARAF_HOME # Karaf home folder # export KARAF_DATA # Karaf data folder # export KARAF_BASE # Karaf base folder # export KARAF_ETC # Karaf etc folder # export KARAF_LOG # Karaf log folder # export KARAF_SYSTEM_OPTS # First citizen Karaf options # export KARAF_OPTS # Additional available Karaf options # export KARAF_DEBUG # Enable debug mode # export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start # export KARAF_NOROOT # Prevent execution as root if set to true Set Java version + echo 'Set Java version' + sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1 sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required + sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required JDK default version ... + echo 'JDK default version ...' + java -version openjdk version "21.0.8" 2025-07-15 OpenJDK Runtime Environment (build 21.0.8+9-Ubuntu-0ubuntu122.04.1) OpenJDK 64-Bit Server VM (build 21.0.8+9-Ubuntu-0ubuntu122.04.1, mixed mode, sharing) Set JAVA_HOME + echo 'Set JAVA_HOME' + export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 ++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java Listing all open ports on controller system... + JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java + echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java' + echo 'Listing all open ports on controller system...' + netstat -pnatu /tmp/configuration-script.sh: line 40: netstat: command not found Configuring cluster + '[' -f /tmp/custom_shard_config.txt ']' + echo 'Configuring cluster' + /tmp/karaf-0.23.0/bin/configure_cluster.sh 1 10.30.171.180 10.30.170.93 10.30.171.233 ################################################ ## Configure Cluster ## ################################################ NOTE: Cluster configuration files not found. Copying from /tmp/karaf-0.23.0/system/org/opendaylight/controller/sal-clustering-config/12.0.3 Configuring unique name in pekko.conf Configuring hostname in pekko.conf Configuring data and rpc seed nodes in pekko.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" } ] Configuring replication type in module-shards.conf ################################################ ## NOTE: Manually restart controller to ## ## apply configuration. 
## ################################################ Dump pekko.conf + echo 'Dump pekko.conf' + cat /tmp/karaf-0.23.0/configuration/initial/pekko.conf odl-cluster-data { pekko { remote { artery { enabled = on transport = tcp canonical.hostname = "10.30.171.180" canonical.port = 2550 } } cluster { # Using artery. seed-nodes = ["pekko://opendaylight-cluster-data@10.30.171.180:2550", "pekko://opendaylight-cluster-data@10.30.170.93:2550", "pekko://opendaylight-cluster-data@10.30.171.233:2550"] roles = ["member-1"] # when under load we might trip a false positive on the failure detector # failure-detector { # heartbeat-interval = 4 s # acceptable-heartbeat-pause = 16s # } } persistence { # By default the snapshots/journal directories live in KARAF_HOME. You can choose to put it somewhere else by # modifying the following two properties. The directory location specified may be a relative or absolute path. # The relative path is always relative to KARAF_HOME. # snapshot-store.local.dir = "target/snapshots" } disable-default-actor-system-quarantined-event-handling = "false" } } Dump modules.conf + echo 'Dump modules.conf' + cat /tmp/karaf-0.23.0/configuration/initial/modules.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" } ] Dump module-shards.conf + echo 'Dump module-shards.conf' + cat /tmp/karaf-0.23.0/configuration/initial/module-shards.conf module-shards = [ { name = "default" shards = [ { name = "default" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "inventory" shards = [ { name="inventory" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "topology" shards = [ { name="topology" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "toaster" shards = [ { name="toaster" replicas = ["member-1", "member-2", "member-3"] } ] } ] Configuring member-2 with IP address 10.30.170.93 Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. 
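Member-2 is now configured the same way. For reference, the per-member values just dumped for member-1 (canonical.hostname, roles and the three-node seed list in pekko.conf) follow directly from the member index and the controller IPs passed to configure_cluster.sh. A minimal sketch of that derivation, assuming nothing beyond what the dumps above show (illustrative only; the real logic lives in configure_cluster.sh shipped with sal-clustering-config):

#!/bin/bash
# Illustrative sketch only: derive the member-1 values shown in the pekko.conf dump above.
INDEX=1                                           # member number passed to configure_cluster.sh
IPS=(10.30.171.180 10.30.170.93 10.30.171.233)    # controller IPs, in member order
HOST=${IPS[$((INDEX - 1))]}                       # becomes canonical.hostname
ROLE="member-${INDEX}"                            # becomes roles = ["member-1"]
SEEDS=""
for ip in "${IPS[@]}"; do                         # every controller is a seed node on port 2550
  SEEDS+="\"pekko://opendaylight-cluster-data@${ip}:2550\", "
done
SEEDS="[${SEEDS%, }]"
echo "canonical.hostname = \"${HOST}\""
echo "roles = [\"${ROLE}\"]"
echo "seed-nodes = ${SEEDS}"
# Running the same steps with INDEX=2 or INDEX=3 yields the member-2 and member-3 variants.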
+ source /tmp/common-functions.sh karaf-0.23.0 titanium ++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]] common-functions.sh is being sourced ++ echo 'common-functions.sh is being sourced' ++ BUNDLEFOLDER=karaf-0.23.0 ++ DISTROSTREAM=titanium ++ export MAVENCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ++ MAVENCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ++ export FEATURESCONF=/tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ++ FEATURESCONF=/tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ++ export CUSTOMPROP=/tmp/karaf-0.23.0/etc/custom.properties ++ CUSTOMPROP=/tmp/karaf-0.23.0/etc/custom.properties ++ export LOGCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ++ LOGCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ++ export MEMCONF=/tmp/karaf-0.23.0/bin/setenv ++ MEMCONF=/tmp/karaf-0.23.0/bin/setenv ++ export CONTROLLERMEM= ++ CONTROLLERMEM= ++ case "${DISTROSTREAM}" in ++ CLUSTER_SYSTEM=pekko ++ export AKKACONF=/tmp/karaf-0.23.0/configuration/initial/pekko.conf ++ AKKACONF=/tmp/karaf-0.23.0/configuration/initial/pekko.conf ++ export MODULESCONF=/tmp/karaf-0.23.0/configuration/initial/modules.conf ++ MODULESCONF=/tmp/karaf-0.23.0/configuration/initial/modules.conf ++ export MODULESHARDSCONF=/tmp/karaf-0.23.0/configuration/initial/module-shards.conf ++ MODULESHARDSCONF=/tmp/karaf-0.23.0/configuration/initial/module-shards.conf ++ print_common_env ++ cat common-functions environment: MAVENCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.23.0/etc/custom.properties LOGCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.23.0/bin/setenv CONTROLLERMEM: AKKACONF: /tmp/karaf-0.23.0/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.23.0/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.23.0/configuration/initial/module-shards.conf SUITES: ++ SSH='ssh -t -t' ++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service ' ++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service ' Changing to /tmp Downloading the distribution from https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip + echo 'Changing to /tmp' + cd /tmp + echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip' + wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip --2025-11-29 00:49:54-- https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6 Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 227740074 (217M) [application/zip] Saving to: ‘karaf-0.23.0.zip’ 0K ........ ........ ........ ........ ........ ........ 1% 62.3M 3s 3072K ........ ........ ........ ........ ........ ........ 2% 104M 3s 6144K ........ ........ ........ ........ ........ ........ 4% 113M 2s 9216K ........ ........ ........ ........ 
71% 312M 0s 159744K ........ ........ ........ ........ ........ ........ 73% 318M 0s 162816K ........ ........ ........ ........ ........ ........ 74% 331M 0s 165888K ........ ........ ........ ........ ........ ........ 75% 337M 0s 168960K ........ ........ ........ ........ ........ ........ 77% 337M 0s 172032K ........ ........ ........ ........ ........ ........ 78% 321M 0s 175104K ........ ........ ........ ........ ........ ........ 80% 334M 0s 178176K ........ ........ ........ ........ ........ ........ 81% 320M 0s 181248K ........ ........ ........ ........ ........ ........ 82% 337M 0s 184320K ........ ........ ........ ........ ........ ........ 84% 332M 0s 187392K ........ ........ ........ ........ ........ ........ 85% 343M 0s 190464K ........ ........ ........ ........ ........ ........ 87% 343M 0s 193536K ........ ........ ........ ........ ........ ........ 88% 309M 0s 196608K ........ ........ ........ ........ ........ ........ 89% 342M 0s 199680K ........ ........ ........ ........ ........ ........ 91% 348M 0s 202752K ........ ........ ........ ........ ........ ........ 92% 338M 0s 205824K ........ ........ ........ ........ ........ ........ 93% 316M 0s 208896K ........ ........ ........ ........ ........ ........ 95% 308M 0s 211968K ........ ........ ........ ........ ........ ........ 96% 319M 0s 215040K ........ ........ ........ ........ ........ ........ 98% 276M 0s 218112K ........ ........ ........ ........ ........ ........ 99% 207M 0s 221184K ........ ........ ... 100% 209M=0.8s 2025-11-29 00:49:55 (274 MB/s) - ‘karaf-0.23.0.zip’ saved [227740074/227740074] Extracting the new controller... + echo 'Extracting the new controller...' + unzip -q karaf-0.23.0.zip Adding external repositories... + echo 'Adding external repositories...' + sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg + cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # If set to true, the following property will not allow any certificate to be used # when accessing Maven repositories through SSL # #org.ops4j.pax.url.mvn.certificateCheck= # # Path to the local Maven settings file. # The repositories defined in this file will be automatically added to the list # of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property # below is not set. # The following locations are checked for the existence of the settings.xml file # * 1. looks for the specified url # * 2. if not found looks for ${user.home}/.m2/settings.xml # * 3. if not found looks for ${maven.home}/conf/settings.xml # * 4. if not found looks for ${M2_HOME}/conf/settings.xml # #org.ops4j.pax.url.mvn.settings= # # Path to the local Maven repository which is used to avoid downloading # artifacts when they already exist locally. # The value of this property will be extracted from the settings.xml file # above, or defaulted to: # System.getProperty( "user.home" ) + "/.m2/repository" # org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository} # # Default this to false. It's just weird to use undocumented repos # org.ops4j.pax.url.mvn.useFallbackRepositories=false # # Uncomment if you don't wanna use the proxy settings # from the Maven conf/settings.xml file # # org.ops4j.pax.url.mvn.proxySupport=false # # Comma separated list of repositories scanned when resolving an artifact. # Those repositories will be checked before iterating through the # below list of repositories and even before the local repository # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snaphots # @noreleases : the repository does not contain any released artifacts # # The following property value will add the system folder as a repo. # org.ops4j.pax.url.mvn.defaultRepositories=\ file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\ file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\ file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots # Use the default local repo (e.g.~/.m2/repository) as a "remote" repo #org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false # # Comma separated list of repositories scanned when resolving an artifact. # The default list includes the following repositories: # http://repo1.maven.org/maven2@id=central # http://repository.springsource.com/maven/bundles/release@id=spring.ebr # http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external # http://zodiac.springsource.com/maven/bundles/release@id=gemini # http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases # https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases # https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases # To add repositories to the default ones, prepend '+' to the list of repositories # to add. 
# A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snapshots # @noreleases : the repository does not contain any released artifacts # @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended # org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases ### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.Configuring the startup features... + [[ True == \T\r\u\e ]] + echo 'Configuring the startup features...' + sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg + FEATURE_TEST_STRING=features-test + FEATURE_TEST_VERSION=0.23.0 + KARAF_VERSION=karaf4 + [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]] + sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.23.0/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg + [[ ! -z '' ]] + cat /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# ################################################################################ # # Comma separated list of features repositories to register by default # featuresRepositories = mvn:org.opendaylight.integration/features-test/0.23.0/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/222c0a9b-2692-4a84-811a-d59fc80dde67.xml # # Comma separated list of features to install at startup # featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest, 81b0da90-bdd3-4e19-bf9f-47cb7f839689 # # Resource repositories (OBR) that the features resolver can use # to resolve requirements/capabilities # # The format of the resourceRepositories is # resourceRepositories=[xml:url|json:url],... # for Instance: # #resourceRepositories=xml:http://host/path/to/index.xml # or #resourceRepositories=json:http://host/path/to/index.json # # # Defines if the boot features are started in asynchronous mode (in a dedicated thread) # featuresBootAsynchronous=false # # Service requirements enforcement # # By default, the feature resolver checks the service requirements/capabilities of # bundles for new features (xml schema >= 1.3.0) in order to automatically installs # the required bundles. # The following flag can have those values: # - disable: service requirements are completely ignored # - default: service requirements are ignored for old features # - enforce: service requirements are always verified # #serviceRequirements=default # # Store cfg file for config element in feature # #configCfgStore=true # # Define if the feature service automatically refresh bundles # autoRefresh=true # # Configuration of features processing mechanism (overrides, blacklisting, modification of features) # XML file defines instructions related to features processing # versions.properties may declare properties to resolve placeholders in XML file # both files are relative to ${karaf.etc} # #featureProcessing=org.apache.karaf.features.xml #featureProcessingVersions=versions.properties + configure_karaf_log karaf4 '' + local -r karaf_version=karaf4 + local -r controllerdebugmap= + local logapi=log4j + grep log4j2 /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n log4j2.rootLogger.level = INFO #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO 
log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 64MB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE + logapi=log4j2 + echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2' Configuring the karaf log... karaf_version: karaf4, logapi: log4j2 + '[' log4j2 == log4j2 ']' + sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg controllerdebugmap: cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg + orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver + orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN' + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN' + unset IFS + echo 'controllerdebugmap: ' + '[' -n '' ']' + echo 'cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg' + cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. 
You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # Common pattern layout for appenders log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n # Root logger log4j2.rootLogger.level = INFO # uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} # Filters for logs marked by org.opendaylight.odlparent.Markers log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL # Loggers configuration # Spifly logger log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN # Security audit logger log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Appenders configuration # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} # Rolling file appender log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i # uncomment to not force a disk flush #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 1GB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 # Audit file appender log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true 
log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 # OSGi appender log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * # help with identification of maven-related problems with pax-url-aether #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN Configure java home: /usr/lib/jvm/java-21-openjdk-amd64 max memory: 2048m memconf: /tmp/karaf-0.23.0/bin/setenv + set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 2048m /tmp/karaf-0.23.0/bin/setenv + local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64 + local -r controllermem=2048m + local -r memconf=/tmp/karaf-0.23.0/bin/setenv + echo Configure + echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64' + echo ' max memory: 2048m' + echo ' memconf: /tmp/karaf-0.23.0/bin/setenv' + sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.23.0/bin/setenv + sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=2048m/g' /tmp/karaf-0.23.0/bin/setenv cat /tmp/karaf-0.23.0/bin/setenv + echo 'cat /tmp/karaf-0.23.0/bin/setenv' + cat /tmp/karaf-0.23.0/bin/setenv #!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf # script: client, instance, shell, start, status, stop, karaf # # if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then # Actions go here... # fi # # general settings which should be applied for all scripts go here; please keep # in mind that it is possible that scripts might be executed more than once, e.g. # in example of the start script where the start script is executed first and the # karaf script afterwards. 
# # # The following section shows the possible configuration options for the default # karaf scripts # export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation # export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration # export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options # export EXTRA_JAVA_OPTS # Additional JVM options # export KARAF_HOME # Karaf home folder # export KARAF_DATA # Karaf data folder # export KARAF_BASE # Karaf base folder # export KARAF_ETC # Karaf etc folder # export KARAF_LOG # Karaf log folder # export KARAF_SYSTEM_OPTS # First citizen Karaf options # export KARAF_OPTS # Additional available Karaf options # export KARAF_DEBUG # Enable debug mode # export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start # export KARAF_NOROOT # Prevent execution as root if set to true Set Java version + echo 'Set Java version' + sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1 sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required + sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required JDK default version ... + echo 'JDK default version ...' + java -version openjdk version "21.0.8" 2025-07-15 OpenJDK Runtime Environment (build 21.0.8+9-Ubuntu-0ubuntu122.04.1) OpenJDK 64-Bit Server VM (build 21.0.8+9-Ubuntu-0ubuntu122.04.1, mixed mode, sharing) Set JAVA_HOME + echo 'Set JAVA_HOME' + export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 ++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java + JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java Listing all open ports on controller system... + echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java' + echo 'Listing all open ports on controller system...' + netstat -pnatu /tmp/configuration-script.sh: line 40: netstat: command not found Configuring cluster + '[' -f /tmp/custom_shard_config.txt ']' + echo 'Configuring cluster' + /tmp/karaf-0.23.0/bin/configure_cluster.sh 2 10.30.171.180 10.30.170.93 10.30.171.233 ################################################ ## Configure Cluster ## ################################################ NOTE: Cluster configuration files not found. Copying from /tmp/karaf-0.23.0/system/org/opendaylight/controller/sal-clustering-config/12.0.3 Configuring unique name in pekko.conf Configuring hostname in pekko.conf Configuring data and rpc seed nodes in pekko.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" } ] Configuring replication type in module-shards.conf ################################################ ## NOTE: Manually restart controller to ## ## apply configuration. 
## ################################################ Dump pekko.conf + echo 'Dump pekko.conf' + cat /tmp/karaf-0.23.0/configuration/initial/pekko.conf odl-cluster-data { pekko { remote { artery { enabled = on transport = tcp canonical.hostname = "10.30.170.93" canonical.port = 2550 } } cluster { # Using artery. seed-nodes = ["pekko://opendaylight-cluster-data@10.30.171.180:2550", "pekko://opendaylight-cluster-data@10.30.170.93:2550", "pekko://opendaylight-cluster-data@10.30.171.233:2550"] roles = ["member-2"] # when under load we might trip a false positive on the failure detector # failure-detector { # heartbeat-interval = 4 s # acceptable-heartbeat-pause = 16s # } } persistence { # By default the snapshots/journal directories live in KARAF_HOME. You can choose to put it somewhere else by # modifying the following two properties. The directory location specified may be a relative or absolute path. # The relative path is always relative to KARAF_HOME. # snapshot-store.local.dir = "target/snapshots" } disable-default-actor-system-quarantined-event-handling = "false" } } Dump modules.conf + echo 'Dump modules.conf' + cat /tmp/karaf-0.23.0/configuration/initial/modules.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" } ] Dump module-shards.conf + echo 'Dump module-shards.conf' + cat /tmp/karaf-0.23.0/configuration/initial/module-shards.conf module-shards = [ { name = "default" shards = [ { name = "default" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "inventory" shards = [ { name="inventory" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "topology" shards = [ { name="topology" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "toaster" shards = [ { name="toaster" replicas = ["member-1", "member-2", "member-3"] } ] } ] Configuring member-3 with IP address 10.30.171.233 Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. 
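For reference, configure_cluster.sh as invoked above takes the member index followed by the full list of controller IPs; it writes roles = ["member-<index>"] and the shared seed-node list into pekko.conf and updates the replica lists in module-shards.conf. A minimal sketch of driving all three members the same way, assuming the karaf-0.23.0 layout and the passwordless ssh this job relies on:

ODL_IPS=(10.30.171.180 10.30.170.93 10.30.171.233)
for idx in 1 2 3; do
  ip=${ODL_IPS[$((idx - 1))]}
  # the member index picks the role; every member gets the same seed-node list
  ssh "${ip}" "/tmp/karaf-0.23.0/bin/configure_cluster.sh ${idx} ${ODL_IPS[*]}"
  # confirm what was written for this member
  ssh "${ip}" "grep -E 'roles|seed-nodes' /tmp/karaf-0.23.0/configuration/initial/pekko.conf"
done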
+ source /tmp/common-functions.sh karaf-0.23.0 titanium ++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]] common-functions.sh is being sourced ++ echo 'common-functions.sh is being sourced' ++ BUNDLEFOLDER=karaf-0.23.0 ++ DISTROSTREAM=titanium ++ export MAVENCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ++ MAVENCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ++ export FEATURESCONF=/tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ++ FEATURESCONF=/tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ++ export CUSTOMPROP=/tmp/karaf-0.23.0/etc/custom.properties ++ CUSTOMPROP=/tmp/karaf-0.23.0/etc/custom.properties ++ export LOGCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ++ LOGCONF=/tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ++ export MEMCONF=/tmp/karaf-0.23.0/bin/setenv ++ MEMCONF=/tmp/karaf-0.23.0/bin/setenv ++ export CONTROLLERMEM= ++ CONTROLLERMEM= ++ case "${DISTROSTREAM}" in ++ CLUSTER_SYSTEM=pekko ++ export AKKACONF=/tmp/karaf-0.23.0/configuration/initial/pekko.conf ++ AKKACONF=/tmp/karaf-0.23.0/configuration/initial/pekko.conf ++ export MODULESCONF=/tmp/karaf-0.23.0/configuration/initial/modules.conf ++ MODULESCONF=/tmp/karaf-0.23.0/configuration/initial/modules.conf ++ export MODULESHARDSCONF=/tmp/karaf-0.23.0/configuration/initial/module-shards.conf ++ MODULESHARDSCONF=/tmp/karaf-0.23.0/configuration/initial/module-shards.conf ++ print_common_env ++ cat common-functions environment: MAVENCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.23.0/etc/custom.properties LOGCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.23.0/bin/setenv CONTROLLERMEM: AKKACONF: /tmp/karaf-0.23.0/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.23.0/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.23.0/configuration/initial/module-shards.conf SUITES: ++ SSH='ssh -t -t' ++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service ' ++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service ' Changing to /tmp + echo 'Changing to /tmp' + cd /tmp Downloading the distribution from https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip + echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip' + wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip --2025-11-29 00:49:58-- https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6 Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 227740074 (217M) [application/zip] Saving to: ‘karaf-0.23.0.zip’ 0K ........ ........ ........ ........ ........ ........ 1% 59.4M 4s 3072K ........ ........ ........ ........ ........ ........ 2% 89.3M 3s 6144K ........ ........ ........ ........ ........ ........ 4% 103M 3s 9216K ........ ........ ........ 
[wget progress output omitted: 5% to 70%, roughly 130-340 MB/s]
71% 318M 0s 159744K ........ ........ ........ ........ ........ ........ 73% 339M 0s 162816K ........ ........ ........ ........ ........ ........ 74% 332M 0s 165888K ........ ........ ........ ........ ........ ........ 75% 332M 0s 168960K ........ ........ ........ ........ ........ ........ 77% 269M 0s 172032K ........ ........ ........ ........ ........ ........ 78% 320M 0s 175104K ........ ........ ........ ........ ........ ........ 80% 344M 0s 178176K ........ ........ ........ ........ ........ ........ 81% 285M 0s 181248K ........ ........ ........ ........ ........ ........ 82% 306M 0s 184320K ........ ........ ........ ........ ........ ........ 84% 272M 0s 187392K ........ ........ ........ ........ ........ ........ 85% 311M 0s 190464K ........ ........ ........ ........ ........ ........ 87% 293M 0s 193536K ........ ........ ........ ........ ........ ........ 88% 274M 0s 196608K ........ ........ ........ ........ ........ ........ 89% 281M 0s 199680K ........ ........ ........ ........ ........ ........ 91% 300M 0s 202752K ........ ........ ........ ........ ........ ........ 92% 289M 0s 205824K ........ ........ ........ ........ ........ ........ 93% 262M 0s 208896K ........ ........ ........ ........ ........ ........ 95% 312M 0s 211968K ........ ........ ........ ........ ........ ........ 96% 298M 0s 215040K ........ ........ ........ ........ ........ ........ 98% 268M 0s 218112K ........ ........ ........ ........ ........ ........ 99% 274M 0s 221184K ........ ........ ... 100% 271M=0.9s 2025-11-29 00:49:59 (230 MB/s) - ‘karaf-0.23.0.zip’ saved [227740074/227740074] Extracting the new controller... + echo 'Extracting the new controller...' + unzip -q karaf-0.23.0.zip Adding external repositories... + echo 'Adding external repositories...' + sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg + cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # If set to true, the following property will not allow any certificate to be used # when accessing Maven repositories through SSL # #org.ops4j.pax.url.mvn.certificateCheck= # # Path to the local Maven settings file. # The repositories defined in this file will be automatically added to the list # of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property # below is not set. # The following locations are checked for the existence of the settings.xml file # * 1. looks for the specified url # * 2. if not found looks for ${user.home}/.m2/settings.xml # * 3. if not found looks for ${maven.home}/conf/settings.xml # * 4. if not found looks for ${M2_HOME}/conf/settings.xml # #org.ops4j.pax.url.mvn.settings= # # Path to the local Maven repository which is used to avoid downloading # artifacts when they already exist locally. # The value of this property will be extracted from the settings.xml file # above, or defaulted to: # System.getProperty( "user.home" ) + "/.m2/repository" # org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository} # # Default this to false. It's just weird to use undocumented repos # org.ops4j.pax.url.mvn.useFallbackRepositories=false # # Uncomment if you don't wanna use the proxy settings # from the Maven conf/settings.xml file # # org.ops4j.pax.url.mvn.proxySupport=false # # Comma separated list of repositories scanned when resolving an artifact. # Those repositories will be checked before iterating through the # below list of repositories and even before the local repository # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snaphots # @noreleases : the repository does not contain any released artifacts # # The following property value will add the system folder as a repo. # org.ops4j.pax.url.mvn.defaultRepositories=\ file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\ file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\ file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots # Use the default local repo (e.g.~/.m2/repository) as a "remote" repo #org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false # # Comma separated list of repositories scanned when resolving an artifact. # The default list includes the following repositories: # http://repo1.maven.org/maven2@id=central # http://repository.springsource.com/maven/bundles/release@id=spring.ebr # http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external # http://zodiac.springsource.com/maven/bundles/release@id=gemini # http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases # https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases # https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases # To add repositories to the default ones, prepend '+' to the list of repositories # to add. 
# A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snapshots # @noreleases : the repository does not contain any released artifacts # @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended # org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases ### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.+ [[ True == \T\r\u\e ]] + echo 'Configuring the startup features...' Configuring the startup features... + sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest,/g' /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg + FEATURE_TEST_STRING=features-test + FEATURE_TEST_VERSION=0.23.0 + KARAF_VERSION=karaf4 + [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]] + sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.23.0/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg + [[ ! -z '' ]] + cat /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# ################################################################################ # # Comma separated list of features repositories to register by default # featuresRepositories = mvn:org.opendaylight.integration/features-test/0.23.0/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/222c0a9b-2692-4a84-811a-d59fc80dde67.xml # # Comma separated list of features to install at startup # featuresBoot = odl-infrautils-ready,odl-jolokia,odl-ovsdb-southbound-impl-rest, 81b0da90-bdd3-4e19-bf9f-47cb7f839689 # # Resource repositories (OBR) that the features resolver can use # to resolve requirements/capabilities # # The format of the resourceRepositories is # resourceRepositories=[xml:url|json:url],... # for Instance: # #resourceRepositories=xml:http://host/path/to/index.xml # or #resourceRepositories=json:http://host/path/to/index.json # # # Defines if the boot features are started in asynchronous mode (in a dedicated thread) # featuresBootAsynchronous=false # # Service requirements enforcement # # By default, the feature resolver checks the service requirements/capabilities of # bundles for new features (xml schema >= 1.3.0) in order to automatically installs # the required bundles. # The following flag can have those values: # - disable: service requirements are completely ignored # - default: service requirements are ignored for old features # - enforce: service requirements are always verified # #serviceRequirements=default # # Store cfg file for config element in feature # #configCfgStore=true # # Define if the feature service automatically refresh bundles # autoRefresh=true # # Configuration of features processing mechanism (overrides, blacklisting, modification of features) # XML file defines instructions related to features processing # versions.properties may declare properties to resolve placeholders in XML file # both files are relative to ${karaf.etc} # #featureProcessing=org.apache.karaf.features.xml #featureProcessingVersions=versions.properties + configure_karaf_log karaf4 '' + local -r karaf_version=karaf4 + local -r controllerdebugmap= + local logapi=log4j + grep log4j2 /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n log4j2.rootLogger.level = INFO #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO 
log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 64MB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE Configuring the karaf log... karaf_version: karaf4, logapi: log4j2 + logapi=log4j2 + echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2' + '[' log4j2 == log4j2 ']' + sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg + orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver + orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN' + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN' controllerdebugmap: + unset IFS + echo 'controllerdebugmap: ' + '[' -n '' ']' cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg + echo 'cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg' + cat /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. 
You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # Common pattern layout for appenders log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n # Root logger log4j2.rootLogger.level = INFO # uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} # Filters for logs marked by org.opendaylight.odlparent.Markers log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL # Loggers configuration # Spifly logger log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN # Security audit logger log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Appenders configuration # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} # Rolling file appender log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i # uncomment to not force a disk flush #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 1GB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 # Audit file appender log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true 
log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 # OSGi appender log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * # help with identification of maven-related problems with pax-url-aether #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN + set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 2048m /tmp/karaf-0.23.0/bin/setenv + local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64 + local -r controllermem=2048m + local -r memconf=/tmp/karaf-0.23.0/bin/setenv Configure java home: /usr/lib/jvm/java-21-openjdk-amd64 max memory: 2048m memconf: /tmp/karaf-0.23.0/bin/setenv + echo Configure + echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64' + echo ' max memory: 2048m' + echo ' memconf: /tmp/karaf-0.23.0/bin/setenv' + sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.23.0/bin/setenv + sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=2048m/g' /tmp/karaf-0.23.0/bin/setenv cat /tmp/karaf-0.23.0/bin/setenv + echo 'cat /tmp/karaf-0.23.0/bin/setenv' + cat /tmp/karaf-0.23.0/bin/setenv #!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf # script: client, instance, shell, start, status, stop, karaf # # if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then # Actions go here... # fi # # general settings which should be applied for all scripts go here; please keep # in mind that it is possible that scripts might be executed more than once, e.g. # in example of the start script where the start script is executed first and the # karaf script afterwards. 
# # # The following section shows the possible configuration options for the default # karaf scripts # export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation # export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration # export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options # export EXTRA_JAVA_OPTS # Additional JVM options # export KARAF_HOME # Karaf home folder # export KARAF_DATA # Karaf data folder # export KARAF_BASE # Karaf base folder # export KARAF_ETC # Karaf etc folder # export KARAF_LOG # Karaf log folder # export KARAF_SYSTEM_OPTS # First citizen Karaf options # export KARAF_OPTS # Additional available Karaf options # export KARAF_DEBUG # Enable debug mode # export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start # export KARAF_NOROOT # Prevent execution as root if set to true Set Java version + echo 'Set Java version' + sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1 sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required + sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required JDK default version ... + echo 'JDK default version ...' + java -version openjdk version "21.0.8" 2025-07-15 OpenJDK Runtime Environment (build 21.0.8+9-Ubuntu-0ubuntu122.04.1) OpenJDK 64-Bit Server VM (build 21.0.8+9-Ubuntu-0ubuntu122.04.1, mixed mode, sharing) Set JAVA_HOME + echo 'Set JAVA_HOME' + export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 ++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java + JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java + echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java' Listing all open ports on controller system... + echo 'Listing all open ports on controller system...' + netstat -pnatu /tmp/configuration-script.sh: line 40: netstat: command not found Configuring cluster + '[' -f /tmp/custom_shard_config.txt ']' + echo 'Configuring cluster' + /tmp/karaf-0.23.0/bin/configure_cluster.sh 3 10.30.171.180 10.30.170.93 10.30.171.233 ################################################ ## Configure Cluster ## ################################################ NOTE: Cluster configuration files not found. Copying from /tmp/karaf-0.23.0/system/org/opendaylight/controller/sal-clustering-config/12.0.3 Configuring unique name in pekko.conf Configuring hostname in pekko.conf Configuring data and rpc seed nodes in pekko.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" } ] Configuring replication type in module-shards.conf ################################################ ## NOTE: Manually restart controller to ## ## apply configuration. 
## ################################################ Dump pekko.conf + echo 'Dump pekko.conf' + cat /tmp/karaf-0.23.0/configuration/initial/pekko.conf odl-cluster-data { pekko { remote { artery { enabled = on transport = tcp canonical.hostname = "10.30.171.233" canonical.port = 2550 } } cluster { # Using artery. seed-nodes = ["pekko://opendaylight-cluster-data@10.30.171.180:2550", "pekko://opendaylight-cluster-data@10.30.170.93:2550", "pekko://opendaylight-cluster-data@10.30.171.233:2550"] roles = ["member-3"] # when under load we might trip a false positive on the failure detector # failure-detector { # heartbeat-interval = 4 s # acceptable-heartbeat-pause = 16s # } } persistence { # By default the snapshots/journal directories live in KARAF_HOME. You can choose to put it somewhere else by # modifying the following two properties. The directory location specified may be a relative or absolute path. # The relative path is always relative to KARAF_HOME. # snapshot-store.local.dir = "target/snapshots" } disable-default-actor-system-quarantined-event-handling = "false" } } Dump modules.conf + echo 'Dump modules.conf' + cat /tmp/karaf-0.23.0/configuration/initial/modules.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" } ] Dump module-shards.conf + echo 'Dump module-shards.conf' + cat /tmp/karaf-0.23.0/configuration/initial/module-shards.conf module-shards = [ { name = "default" shards = [ { name = "default" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "inventory" shards = [ { name="inventory" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "topology" shards = [ { name="topology" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "toaster" shards = [ { name="toaster" replicas = ["member-1", "member-2", "member-3"] } ] } ] Locating config plan to use... Finished running config plans Starting member-1 with IP address 10.30.171.180 Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. Redirecting karaf console output to karaf_console.log Starting controller... start: Redirecting Karaf output to /tmp/karaf-0.23.0/data/log/karaf_console.log Starting member-2 with IP address 10.30.170.93 Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. Redirecting karaf console output to karaf_console.log Starting controller... start: Redirecting Karaf output to /tmp/karaf-0.23.0/data/log/karaf_console.log Starting member-3 with IP address 10.30.171.233 Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. Redirecting karaf console output to karaf_console.log Starting controller... 
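The start sequence is identical on every member: bin/start launches Karaf and the console output is redirected to karaf_console.log under data/log. A rough equivalent using only the paths shown in this job (the job's own start wrapper is not reproduced here, so treat this as a sketch):

for ip in 10.30.171.180 10.30.170.93 10.30.171.233; do
  echo "Starting controller on ${ip}..."
  ssh "${ip}" "/tmp/karaf-0.23.0/bin/start"
  # console output lands in data/log/karaf_console.log; show the tail once it exists
  ssh "${ip}" "tail -n 5 /tmp/karaf-0.23.0/data/log/karaf_console.log" || true
done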
start: Redirecting Karaf output to /tmp/karaf-0.23.0/data/log/karaf_console.log [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins15010087106942986595.sh common-functions.sh is being sourced common-functions environment: MAVENCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.23.0/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.23.0/etc/custom.properties LOGCONF: /tmp/karaf-0.23.0/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.23.0/bin/setenv CONTROLLERMEM: 2048m AKKACONF: /tmp/karaf-0.23.0/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.23.0/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.23.0/configuration/initial/module-shards.conf SUITES: + echo '#################################################' ################################################# + echo '## Verify Cluster is UP ##' ## Verify Cluster is UP ## + echo '#################################################' ################################################# + create_post_startup_script + cat + copy_and_run_post_startup_script + seed_index=1 ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Execute the post startup script on controller 10.30.171.180' Execute the post startup script on controller 10.30.171.180 + scp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/post-startup-script.sh 10.30.171.180:/tmp/ Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + ssh 10.30.171.180 'bash /tmp/post-startup-script.sh 1' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found Waiting up to 3 minutes for controller to come up, checking every 5 seconds... 2025-11-29T00:50:35,370 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 249 - org.opendaylight.infrautils.ready-api - 7.1.9 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Controller is UP 2025-11-29T00:50:35,370 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 249 - org.opendaylight.infrautils.ready-api - 7.1.9 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Listing all open ports on controller system... 
/tmp/post-startup-script.sh: line 51: netstat: command not found looking for "BindException: Address already in use" in log file looking for "server is unhealthy" in log file + '[' 1 == 0 ']' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Execute the post startup script on controller 10.30.170.93' Execute the post startup script on controller 10.30.170.93 + scp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/post-startup-script.sh 10.30.170.93:/tmp/ Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + ssh 10.30.170.93 'bash /tmp/post-startup-script.sh 2' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found Waiting up to 3 minutes for controller to come up, checking every 5 seconds... 2025-11-29T00:50:35,580 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 249 - org.opendaylight.infrautils.ready-api - 7.1.9 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Controller is UP 2025-11-29T00:50:35,580 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 249 - org.opendaylight.infrautils.ready-api - 7.1.9 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Listing all open ports on controller system... looking for "BindException: Address already in use" in log file /tmp/post-startup-script.sh: line 51: netstat: command not found looking for "server is unhealthy" in log file + '[' 2 == 0 ']' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Execute the post startup script on controller 10.30.171.233' Execute the post startup script on controller 10.30.171.233 + scp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/post-startup-script.sh 10.30.171.233:/tmp/ Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + ssh 10.30.171.233 'bash /tmp/post-startup-script.sh 3' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. 
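The post-startup script polls each controller for up to 3 minutes and then tries to list open ports with netstat, which is not installed on these images. A sketch of an equivalent check; the grep pattern and the use of ss in place of netstat are assumptions here, not the script's actual code:

KARAF_LOG=/tmp/karaf-0.23.0/data/log/karaf.log
for _ in $(seq 1 36); do                  # up to 3 minutes, checking every 5 seconds
  if grep -q 'System ready' "${KARAF_LOG}" 2>/dev/null; then
    echo "Controller is UP"
    break
  fi
  sleep 5
done
ss -pnatu                                  # stand-in for the missing 'netstat -pnatu'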
/tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found Waiting up to 3 minutes for controller to come up, checking every 5 seconds... 2025-11-29T00:50:35,636 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 249 - org.opendaylight.infrautils.ready-api - 7.1.9 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Controller is UP 2025-11-29T00:50:35,636 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 249 - org.opendaylight.infrautils.ready-api - 7.1.9 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Listing all open ports on controller system... /tmp/post-startup-script.sh: line 51: netstat: command not found looking for "BindException: Address already in use" in log file looking for "server is unhealthy" in log file + '[' 0 == 0 ']' + seed_index=1 + dump_controller_threads ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Let'\''s take the karaf thread dump' Let's take the karaf thread dump + ssh 10.30.171.180 'sudo ps aux' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_before.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=2031 + echo 'karaf main: org.apache.karaf.main.Main, pid:2031' karaf main: org.apache.karaf.main.Main, pid:2031 + ssh 10.30.171.180 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2031' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Let'\''s take the karaf thread dump' Let's take the karaf thread dump + ssh 10.30.170.93 'sudo ps aux' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_before.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=2021 + echo 'karaf main: org.apache.karaf.main.Main, pid:2021' karaf main: org.apache.karaf.main.Main, pid:2021 + ssh 10.30.170.93 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2021' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Let'\''s take the karaf thread dump' Let's take the karaf thread dump + ssh 10.30.171.233 'sudo ps aux' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. 
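The thread-dump step finds the Karaf PID by grepping the saved ps output for org.apache.karaf.main.Main and then runs jstack -l against it with the job's JDK. Condensed into one loop, taking ps output directly from each member instead of from ps_before.log (a sketch, not the job's script):

for ip in 10.30.171.180 10.30.170.93 10.30.171.233; do
  pid=$(ssh "${ip}" "ps aux" | grep org.apache.karaf.main.Main | grep -v grep | tr -s ' ' | cut -f2 -d' ')
  echo "karaf main: org.apache.karaf.main.Main, pid:${pid}"
  ssh "${ip}" "/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l ${pid}"
done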
++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_before.log ++ grep -v grep ++ cut -f2 '-d ' ++ tr -s ' ' + pid=2028 + echo 'karaf main: org.apache.karaf.main.Main, pid:2028' karaf main: org.apache.karaf.main.Main, pid:2028 + ssh 10.30.171.233 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2028' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + '[' 0 -gt 0 ']' + echo 'Generating controller variables...' Generating controller variables... ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + odl_variables=' -v ODL_SYSTEM_1_IP:10.30.171.180' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + odl_variables=' -v ODL_SYSTEM_1_IP:10.30.171.180 -v ODL_SYSTEM_2_IP:10.30.170.93' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + odl_variables=' -v ODL_SYSTEM_1_IP:10.30.171.180 -v ODL_SYSTEM_2_IP:10.30.170.93 -v ODL_SYSTEM_3_IP:10.30.171.233' + echo 'Generating mininet variables...' Generating mininet variables... ++ seq 1 1 + for i in $(seq 1 "${NUM_TOOLS_SYSTEM}") + MININETIP=TOOLS_SYSTEM_1_IP + tools_variables=' -v TOOLS_SYSTEM_1_IP:10.30.171.62' + get_test_suites SUITES + local __suite_list=SUITES + echo 'Locating test plan to use...' Locating test plan to use... + testplan_filepath=/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering-titanium.txt + '[' '!' -f /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering-titanium.txt ']' + testplan_filepath=/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering.txt + '[' disabled '!=' disabled ']' + echo 'Changing the testplan path...' Changing the testplan path... + sed s:integration:/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/testplans/ovsdb-upstream-clustering.txt + cat testplan.txt # Place the suites in run order: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster + '[' -z '' ']' ++ grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt ++ tr '\012' ' ' + suite_list='/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster ' + eval 'SUITES='\''/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster '\''' ++ SUITES='/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster ' + echo 'Starting Robot test suites /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster ...' 
Starting Robot test suites /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster ... + robot -N ovsdb-upstream-clustering.txt --removekeywords wuks -e exclude -e skip_if_titanium -v BUNDLEFOLDER:karaf-0.23.0 -v BUNDLE_URL:https://nexus.opendaylight.org/content/repositories//autorelease-9399/org/opendaylight/integration/karaf/0.23.0/karaf-0.23.0.zip -v CONTROLLER:10.30.171.180 -v CONTROLLER1:10.30.170.93 -v CONTROLLER2:10.30.171.233 -v CONTROLLER_USER:jenkins -v JAVA_HOME:/usr/lib/jvm/java-21-openjdk-amd64 -v JDKVERSION:openjdk21 -v JENKINS_WORKSPACE:/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium -v MININET:10.30.171.62 -v MININET1: -v MININET2: -v MININET_USER:jenkins -v NEXUSURL_PREFIX:https://nexus.opendaylight.org -v NUM_ODL_SYSTEM:3 -v NUM_TOOLS_SYSTEM:1 -v ODL_STREAM:titanium -v ODL_SYSTEM_IP:10.30.171.180 -v ODL_SYSTEM_1_IP:10.30.171.180 -v ODL_SYSTEM_2_IP:10.30.170.93 -v ODL_SYSTEM_3_IP:10.30.171.233 -v ODL_SYSTEM_USER:jenkins -v TOOLS_SYSTEM_IP:10.30.171.62 -v TOOLS_SYSTEM_1_IP:10.30.171.62 -v TOOLS_SYSTEM_USER:jenkins -v USER_HOME:/home/jenkins -v IS_KARAF_APPL:True -v WORKSPACE:/tmp /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/test/csit/suites/ovsdb/Southbound_Cluster ============================================================================== ovsdb-upstream-clustering.txt ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound Cluster :... ============================================================================== Check Shards Status Before Fail :: Check Status for all shards in ... | PASS | ------------------------------------------------------------------------------ Start OVS Multiple Connections :: Connect OVS to all cluster insta... | PASS | ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/9a5df812-eb49-4477-a37e-2d464c02791d ------------------------------------------------------------------------------ Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS | ------------------------------------------------------------------------------ Create Tap Device Before Fail :: Create tap devices to add to the ... | PASS | ------------------------------------------------------------------------------ Add Tap Device Manually and Verify Before Fail :: Add tap devices ... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS | ------------------------------------------------------------------------------ Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... 
| FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F9a5df812-eb49-4477-a37e-2d464c02791d%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Kill Owner Instance :: Kill Owner Instance and verify it is dead | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Check Shards Status After Fail :: Create original cluster list and... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Create Tap Device After Fail :: Create tap devices to add to the b... | PASS | ------------------------------------------------------------------------------ Add Tap Device Manually and Verify After Fail :: Add tap devices t... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Fail :: Delete bridge ... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL | Variable '${new_owner}' not found. 
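The long run of "Variable '${original_owner}' not found" (and ${original_candidate}, ${new_owner}, ${new_cluster_list}) failures above and below is a cascade rather than dozens of independent problems: those variables are evidently only populated, most likely via Robot's Set Suite Variable, by the "Check Entity Owner Status And Find Owner and Candidate" cases, so once owner detection fails with "Could not parse owner and candidates for device ovsdb://uuid/...", every later case that references them fails immediately on the missing variable. When triaging in log.html, the single owner-parsing failure in each suite is the root cause to chase, not the follow-on FAILs.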
------------------------------------------------------------------------------ Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Start Old Owner Instance :: Start Owner Instance and verify it is ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Check Shards Status After Recover :: Create original cluster list ... | PASS | ------------------------------------------------------------------------------ Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/9a5df812-eb49-4477-a37e-2d464c02791d ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Recover :: Create bridge w... | PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS | ------------------------------------------------------------------------------ Create Tap Device After Recover :: Create tap devices to add to th... | PASS | ------------------------------------------------------------------------------ Add Tap Device Manually and Verify After Recover :: Add tap device... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS | ------------------------------------------------------------------------------ Verify Modified Port After Recover :: Verify modified port exists ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F9a5df812-eb49-4477-a37e-2d464c02791d%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Bridge In Old Owner and Verify After Recover :: Create Brid... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. 
The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F9a5df812-eb49-4477-a37e-2d464c02791d%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS | ------------------------------------------------------------------------------ ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound ... | FAIL | 44 tests, 15 passed, 29 failed ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluster Extensi... ============================================================================== Check Shards Status Before Fail :: Check Status for all shards in ... | PASS | ------------------------------------------------------------------------------ Start OVS Multiple Connections :: Connect OVS to all cluster insta... | PASS | ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/f940d8d5-ed67-4915-b637-8b1020bc7461 ------------------------------------------------------------------------------ Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS | ------------------------------------------------------------------------------ Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. 
The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Ff940d8d5-ed67-4915-b637-8b1020bc7461%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Kill Candidate Instance :: Kill Owner Instance and verify it is dead | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Check Shards Status After Fail :: Create original cluster list and... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Fail :: Delete bridge ... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Start Old Candidate Instance :: Start Owner Instance and verify it... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Check Shards Status After Recover :: Create original cluster list ... | PASS | ------------------------------------------------------------------------------ Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/f940d8d5-ed67-4915-b637-8b1020bc7461 ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Recover :: Create bridge w... 
| PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS | ------------------------------------------------------------------------------ Verify Modified Port After Recover :: Verify modified port exists ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Ff940d8d5-ed67-4915-b637-8b1020bc7461%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Bridge In Old Candidate and Verify After Recover :: Create ... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2Ff940d8d5-ed67-4915-b637-8b1020bc7461%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS | ------------------------------------------------------------------------------ ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluste... 
| FAIL | 38 tests, 10 passed, 28 failed ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster | FAIL | 82 tests, 25 passed, 57 failed ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound Cluster :... ============================================================================== Check Shards Status Before Fail :: Check Status for all shards in ... | PASS | ------------------------------------------------------------------------------ Start OVS Multiple Connections :: Connect OVS to all cluster insta... | PASS | ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/1b351ddf-d3b1-4a7a-9759-2bed81d37347 ------------------------------------------------------------------------------ Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS | ------------------------------------------------------------------------------ Create Tap Device Before Fail :: Create tap devices to add to the ... | PASS | ------------------------------------------------------------------------------ Add Tap Device Manually and Verify Before Fail :: Add tap devices ... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS | ------------------------------------------------------------------------------ Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F1b351ddf-d3b1-4a7a-9759-2bed81d37347%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL | Variable '${original_owner}' not found. 
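The HTTP 409 responses on the ?content=nonconfig reads deserve a note: RFC 8040 maps the data-missing error-tag to 409 Conflict, so these conflicts most likely just mean the br01 bridge node is absent from the operational datastore while the keyword polls for it, which is consistent with the earlier create/modify steps having failed. A quick manual reproduction, with the URL copied from the failure above; the admin:admin credentials are the OpenDaylight default and an assumption here:

    # Sketch: re-run the failing operational-datastore read by hand; the JSON
    # error body should show the error-tag behind the 409.
    curl -s -u admin:admin -H 'Accept: application/json' \
      'http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F1b351ddf-d3b1-4a7a-9759-2bed81d37347%2Fbridge%2Fbr01?content=nonconfig'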
------------------------------------------------------------------------------ Kill Owner Instance :: Kill Owner Instance and verify it is dead | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Check Shards Status After Fail :: Create original cluster list and... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Create Tap Device After Fail :: Create tap devices to add to the b... | PASS | ------------------------------------------------------------------------------ Add Tap Device Manually and Verify After Fail :: Add tap devices t... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Fail :: Delete bridge ... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Start Old Owner Instance :: Start Owner Instance and verify it is ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Check Shards Status After Recover :: Create original cluster list ... | PASS | ------------------------------------------------------------------------------ Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/1b351ddf-d3b1-4a7a-9759-2bed81d37347 ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Recover :: Create bridge w... | PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS | ------------------------------------------------------------------------------ Create Tap Device After Recover :: Create tap devices to add to th... 
| PASS | ------------------------------------------------------------------------------ Add Tap Device Manually and Verify After Recover :: Add tap device... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS | ------------------------------------------------------------------------------ Verify Modified Port After Recover :: Verify modified port exists ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F1b351ddf-d3b1-4a7a-9759-2bed81d37347%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Bridge In Old Owner and Verify After Recover :: Create Brid... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F1b351ddf-d3b1-4a7a-9759-2bed81d37347%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS | ------------------------------------------------------------------------------ ovsdb-upstream-clustering.txt.Southbound Cluster.Ovsdb Southbound ... | FAIL | 44 tests, 15 passed, 29 failed ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluster Extensi... ============================================================================== Check Shards Status Before Fail :: Check Status for all shards in ... | PASS | ------------------------------------------------------------------------------ Start OVS Multiple Connections :: Connect OVS to all cluster insta... 
| PASS | ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate Before Fail... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/07bc747a-4dfd-434b-9744-227ec09d0ba3 ------------------------------------------------------------------------------ Create Bridge Manually and Verify Before Fail :: Create bridge wit... | PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify Before Fail :: Add port with OVS comm... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify Before Fail :: Delete bridge... | PASS | ------------------------------------------------------------------------------ Create Bridge In Owner and Verify Before Fail :: Create Bridge in ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify Before Fail :: Create Port in Owne... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner Before Fail :: Modify t... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified Before Fail :: Verify port is modified in ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F07bc747a-4dfd-434b-9744-227ec09d0ba3%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Owner Before Fail :: Delete port in Owner and verif... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In Owner And Verify Before Fail :: Delete bridge in ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Kill Candidate Instance :: Kill Owner Instance and verify it is dead | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Check Shards Status After Fail :: Create original cluster list and... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Check Entity Owner Status And Find Owner and Candidate After Fail ... | FAIL | Variable '${original_owner}' not found. ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Fail :: Create bridge with... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Add Port Manually and Verify After Fail :: Add port with OVS comma... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Fail :: Delete bridge ... 
| FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Create Bridge In Owner and Verify After Fail :: Create Bridge in O... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Port In Owner and Verify After Fail :: Create Port in Owner... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Owner After Fail :: Modify th... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Verify Port Is Modified After Fail :: Verify port is modified in a... | FAIL | Variable '${new_cluster_list}' not found. ------------------------------------------------------------------------------ Start Old Candidate Instance :: Start Owner Instance and verify it... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Check Shards Status After Recover :: Create original cluster list ... | PASS | ------------------------------------------------------------------------------ Check Entity Owner Status After Recover :: Check Entity Owner Stat... | FAIL | Keyword 'ClusterManagement.Verify_Owner_And_Successors_For_Device' failed after retrying for 20 seconds. The last error was: Could not parse owner and candidates for device ovsdb://uuid/07bc747a-4dfd-434b-9744-227ec09d0ba3 ------------------------------------------------------------------------------ Create Bridge Manually and Verify After Recover :: Create bridge w... | PASS | ------------------------------------------------------------------------------ Add Port Manually and Verify After Recover :: Add port with OVS co... | PASS | ------------------------------------------------------------------------------ Delete the Bridge Manually and Verify After Recover :: Delete brid... | PASS | ------------------------------------------------------------------------------ Verify Modified Port After Recover :: Verify modified port exists ... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F07bc747a-4dfd-434b-9744-227ec09d0ba3%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In New Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Delete Bridge In New Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${new_owner}' not found. ------------------------------------------------------------------------------ Create Bridge In Old Candidate and Verify After Recover :: Create ... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Create Port In Old Owner and Verify After Recover :: Create Port i... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Modify the destination IP of Port In Old Owner After Recover :: Mo... | FAIL | Variable '${original_candidate}' not found. 
------------------------------------------------------------------------------ Verify Port Is Modified After Recover :: Verify port is modified i... | FAIL | Keyword 'ClusterManagement.Check_Item_Occurrence_Member_List_Or_All' failed after retrying for 5 seconds. The last error was: HTTPError: 409 Client Error: Conflict for url: http://10.30.171.180:8181/rests/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2Fuuid%2F07bc747a-4dfd-434b-9744-227ec09d0ba3%2Fbridge%2Fbr01?content=nonconfig ------------------------------------------------------------------------------ Delete Port In Old Owner After Recover :: Delete port in Owner and... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Delete Bridge In Old Owner And Verify After Recover :: Delete brid... | FAIL | Variable '${original_candidate}' not found. ------------------------------------------------------------------------------ Cleans Up Test Environment For Next Suite :: Cleans up test enviro... | PASS | ------------------------------------------------------------------------------ ovsdb-upstream-clustering.txt.Southbound Cluster.Southbound Cluste... | FAIL | 38 tests, 10 passed, 28 failed ============================================================================== ovsdb-upstream-clustering.txt.Southbound Cluster | FAIL | 82 tests, 25 passed, 57 failed ============================================================================== ovsdb-upstream-clustering.txt | FAIL | 164 tests, 50 passed, 114 failed ============================================================================== Output: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/output.xml Log: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/log.html Report: /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/report.html + true + echo 'Examining the files in data/log and checking filesize' Examining the files in data/log and checking filesize + ssh 10.30.171.180 'ls -altr /tmp/karaf-0.23.0/data/log/' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. total 1344 drwxrwxr-x 2 jenkins jenkins 4096 Nov 29 00:50 . -rw-rw-r-- 1 jenkins jenkins 1720 Nov 29 00:50 karaf_console.log drwxrwxr-x 9 jenkins jenkins 4096 Nov 29 00:50 .. -rw-rw-r-- 1 jenkins jenkins 1360136 Nov 29 01:00 karaf.log + ssh 10.30.171.180 'du -hs /tmp/karaf-0.23.0/data/log/*' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. 1.4M /tmp/karaf-0.23.0/data/log/karaf.log 4.0K /tmp/karaf-0.23.0/data/log/karaf_console.log + ssh 10.30.170.93 'ls -altr /tmp/karaf-0.23.0/data/log/' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. total 544 drwxrwxr-x 2 jenkins jenkins 4096 Nov 29 00:50 . -rw-rw-r-- 1 jenkins jenkins 1720 Nov 29 00:50 karaf_console.log drwxrwxr-x 9 jenkins jenkins 4096 Nov 29 00:50 .. -rw-rw-r-- 1 jenkins jenkins 543716 Nov 29 01:00 karaf.log + ssh 10.30.170.93 'du -hs /tmp/karaf-0.23.0/data/log/*' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. 532K /tmp/karaf-0.23.0/data/log/karaf.log 4.0K /tmp/karaf-0.23.0/data/log/karaf_console.log + ssh 10.30.171.233 'ls -altr /tmp/karaf-0.23.0/data/log/' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. total 512 drwxrwxr-x 2 jenkins jenkins 4096 Nov 29 00:50 . -rw-rw-r-- 1 jenkins jenkins 1720 Nov 29 00:50 karaf_console.log drwxrwxr-x 9 jenkins jenkins 4096 Nov 29 00:50 .. 
-rw-rw-r-- 1 jenkins jenkins 510887 Nov 29 01:00 karaf.log + ssh 10.30.171.233 'du -hs /tmp/karaf-0.23.0/data/log/*' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. 500K /tmp/karaf-0.23.0/data/log/karaf.log 4.0K /tmp/karaf-0.23.0/data/log/karaf_console.log + set +e ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Let'\''s take the karaf thread dump again' Let's take the karaf thread dump again + ssh 10.30.171.180 'sudo ps aux' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_after.log ++ grep -v grep ++ cut -f2 '-d ' ++ tr -s ' ' + pid=2031 + echo 'karaf main: org.apache.karaf.main.Main, pid:2031' karaf main: org.apache.karaf.main.Main, pid:2031 + ssh 10.30.171.180 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2031' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + echo 'killing karaf process...' killing karaf process... + ssh 10.30.171.180 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh' Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Let'\''s take the karaf thread dump again' Let's take the karaf thread dump again + ssh 10.30.170.93 'sudo ps aux' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_after.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=2021 + echo 'karaf main: org.apache.karaf.main.Main, pid:2021' karaf main: org.apache.karaf.main.Main, pid:2021 + ssh 10.30.170.93 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2021' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + echo 'killing karaf process...' killing karaf process... + ssh 10.30.170.93 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh' Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Let'\''s take the karaf thread dump again' Let's take the karaf thread dump again + ssh 10.30.171.233 'sudo ps aux' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/ps_after.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=2028 + echo 'karaf main: org.apache.karaf.main.Main, pid:2028' karaf main: org.apache.karaf.main.Main, pid:2028 + ssh 10.30.171.233 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2028' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + echo 'killing karaf process...' killing karaf process... + ssh 10.30.171.233 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh' Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + sleep 5 ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Compressing karaf.log 1' Compressing karaf.log 1 + ssh 10.30.171.180 gzip --best /tmp/karaf-0.23.0/data/log/karaf.log Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. 
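The teardown above takes a second round of thread dumps and then kills each Karaf with a "ps axf | grep karaf | grep -v grep | awk ... | sh" pipeline over SSH before the logs are compressed and fetched below. That pipeline works, but it matches any process whose command line mentions karaf and pipes generated kill commands into a shell; assuming procps-ng (and therefore pkill) is present on the controller images, a simpler equivalent would be:

    # Hedged alternative to the traced kill pipeline; pkill -f matches against
    # the full command line, so this targets the same Karaf JVM.
    ssh 10.30.171.180 'pkill -9 -f org.apache.karaf.main.Main'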
+ echo 'Fetching compressed karaf.log 1' Fetching compressed karaf.log 1 + scp 10.30.171.180:/tmp/karaf-0.23.0/data/log/karaf.log.gz odl1_karaf.log.gz Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + ssh 10.30.171.180 rm -f /tmp/karaf-0.23.0/data/log/karaf.log.gz Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + scp 10.30.171.180:/tmp/karaf-0.23.0/data/log/karaf_console.log odl1_karaf_console.log Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + ssh 10.30.171.180 rm -f /tmp/karaf-0.23.0/data/log/karaf_console.log Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. + echo 'Fetch GC logs' Fetch GC logs + mkdir -p gclogs-1 + scp '10.30.171.180:/tmp/karaf-0.23.0/data/log/*.log' gclogs-1/ Warning: Permanently added '10.30.171.180' (ECDSA) to the list of known hosts. scp: /tmp/karaf-0.23.0/data/log/*.log: No such file or directory + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Compressing karaf.log 2' Compressing karaf.log 2 + ssh 10.30.170.93 gzip --best /tmp/karaf-0.23.0/data/log/karaf.log Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + echo 'Fetching compressed karaf.log 2' Fetching compressed karaf.log 2 + scp 10.30.170.93:/tmp/karaf-0.23.0/data/log/karaf.log.gz odl2_karaf.log.gz Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + ssh 10.30.170.93 rm -f /tmp/karaf-0.23.0/data/log/karaf.log.gz Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + scp 10.30.170.93:/tmp/karaf-0.23.0/data/log/karaf_console.log odl2_karaf_console.log Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + ssh 10.30.170.93 rm -f /tmp/karaf-0.23.0/data/log/karaf_console.log Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. + echo 'Fetch GC logs' Fetch GC logs + mkdir -p gclogs-2 + scp '10.30.170.93:/tmp/karaf-0.23.0/data/log/*.log' gclogs-2/ Warning: Permanently added '10.30.170.93' (ECDSA) to the list of known hosts. scp: /tmp/karaf-0.23.0/data/log/*.log: No such file or directory + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Compressing karaf.log 3' Compressing karaf.log 3 + ssh 10.30.171.233 gzip --best /tmp/karaf-0.23.0/data/log/karaf.log Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + echo 'Fetching compressed karaf.log 3' Fetching compressed karaf.log 3 + scp 10.30.171.233:/tmp/karaf-0.23.0/data/log/karaf.log.gz odl3_karaf.log.gz Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + ssh 10.30.171.233 rm -f /tmp/karaf-0.23.0/data/log/karaf.log.gz Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + scp 10.30.171.233:/tmp/karaf-0.23.0/data/log/karaf_console.log odl3_karaf_console.log Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + ssh 10.30.171.233 rm -f /tmp/karaf-0.23.0/data/log/karaf_console.log Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. + echo 'Fetch GC logs' Fetch GC logs + mkdir -p gclogs-3 + scp '10.30.171.233:/tmp/karaf-0.23.0/data/log/*.log' gclogs-3/ Warning: Permanently added '10.30.171.233' (ECDSA) to the list of known hosts. scp: /tmp/karaf-0.23.0/data/log/*.log: No such file or directory + echo 'Examine copied files' Examine copied files + ls -lt total 26448 drwxrwxr-x. 2 jenkins jenkins 6 Nov 29 01:00 gclogs-3 -rw-rw-r--. 
1 jenkins jenkins 1720 Nov 29 01:00 odl3_karaf_console.log -rw-rw-r--. 1 jenkins jenkins 37641 Nov 29 01:00 odl3_karaf.log.gz drwxrwxr-x. 2 jenkins jenkins 6 Nov 29 01:00 gclogs-2 -rw-rw-r--. 1 jenkins jenkins 1720 Nov 29 01:00 odl2_karaf_console.log -rw-rw-r--. 1 jenkins jenkins 39302 Nov 29 01:00 odl2_karaf.log.gz drwxrwxr-x. 2 jenkins jenkins 6 Nov 29 01:00 gclogs-1 -rw-rw-r--. 1 jenkins jenkins 1720 Nov 29 01:00 odl1_karaf_console.log -rw-rw-r--. 1 jenkins jenkins 59231 Nov 29 01:00 odl1_karaf.log.gz -rw-rw-r--. 1 jenkins jenkins 127907 Nov 29 01:00 karaf_3_2028_threads_after.log -rw-rw-r--. 1 jenkins jenkins 13239 Nov 29 01:00 ps_after.log -rw-rw-r--. 1 jenkins jenkins 127904 Nov 29 01:00 karaf_2_2021_threads_after.log -rw-rw-r--. 1 jenkins jenkins 132472 Nov 29 01:00 karaf_1_2031_threads_after.log -rw-rw-r--. 1 jenkins jenkins 258223 Nov 29 01:00 report.html -rw-rw-r--. 1 jenkins jenkins 2967716 Nov 29 01:00 log.html -rw-rw-r--. 1 jenkins jenkins 22925991 Nov 29 01:00 output.xml -rw-rw-r--. 1 jenkins jenkins 245 Nov 29 00:53 testplan.txt -rw-rw-r--. 1 jenkins jenkins 93211 Nov 29 00:53 karaf_3_2028_threads_before.log -rw-rw-r--. 1 jenkins jenkins 14073 Nov 29 00:53 ps_before.log -rw-rw-r--. 1 jenkins jenkins 94833 Nov 29 00:53 karaf_2_2021_threads_before.log -rw-rw-r--. 1 jenkins jenkins 94853 Nov 29 00:53 karaf_1_2031_threads_before.log -rw-rw-r--. 1 jenkins jenkins 3043 Nov 29 00:50 post-startup-script.sh -rw-rw-r--. 1 jenkins jenkins 225 Nov 29 00:49 startup-script.sh -rw-rw-r--. 1 jenkins jenkins 3240 Nov 29 00:49 configuration-script.sh -rw-rw-r--. 1 jenkins jenkins 266 Nov 29 00:49 detect_variables.env -rw-rw-r--. 1 jenkins jenkins 92 Nov 29 00:49 set_variables.env -rw-rw-r--. 1 jenkins jenkins 356 Nov 29 00:49 slave_addresses.txt -rw-rw-r--. 1 jenkins jenkins 570 Nov 29 00:48 requirements.txt -rw-rw-r--. 1 jenkins jenkins 26 Nov 29 00:48 env.properties -rw-rw-r--. 1 jenkins jenkins 334 Nov 29 00:47 stack-parameters.yaml drwxrwxr-x. 7 jenkins jenkins 4096 Nov 29 00:46 test drwxrwxr-x. 2 jenkins jenkins 6 Nov 29 00:46 test@tmp + true [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/sh /tmp/jenkins12358337055012395374.sh Cleaning up Robot installation... $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 5311 killed; [ssh-agent] Stopped. Recording plot data Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. Build step 'Publish Robot Framework test results' changed build result to UNSTABLE [PostBuildScript] - [INFO] Executing post build scripts. 
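The "mv: cannot stat" messages in the archiving step below are harmless in this run (no screenshots or /tmp/odl*_ captures were produced), but they can be avoided by only moving globs that actually match. A small sketch under that assumption; the patterns are the ones shown in the messages, and the ./archives destination is assumed from the rest of this job:

    # Sketch: with nullglob, an unmatched pattern expands to nothing instead of
    # being passed to mv literally.
    shopt -s nullglob
    for pattern in '*_1.png' '/tmp/odl1_*' '*.log.gz' '*.csv' '*.png'; do
        files=( $pattern )
        (( ${#files[@]} )) && mv "${files[@]}" ./archives/
    done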
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins18319860348760056950.sh Archiving csit artifacts mv: cannot stat '*_1.png': No such file or directory mv: cannot stat '/tmp/odl1_*': No such file or directory mv: cannot stat '*_2.png': No such file or directory mv: cannot stat '/tmp/odl2_*': No such file or directory mv: cannot stat '*_3.png': No such file or directory mv: cannot stat '/tmp/odl3_*': No such file or directory % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 3127k 0 3127k 0 0 5738k 0 --:--:-- --:--:-- --:--:-- 5738k Archive: robot-plugin.zip inflating: ./archives/robot-plugin/log.html inflating: ./archives/robot-plugin/output.xml inflating: ./archives/robot-plugin/report.html mv: cannot stat '*.log.gz': No such file or directory mv: cannot stat '*.csv': No such file or directory mv: cannot stat '*.png': No such file or directory [PostBuildScript] - [INFO] Executing post build scripts. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins8151286722098758313.sh [PostBuildScript] - [INFO] Executing post build scripts. [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content OS_CLOUD=vex OS_STACK_NAME=releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554 [EnvInject] - Variables injected successfully. provisioning config files... copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins12630021822342858141.sh ---> openstack-stack-delete.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes python-heatclient python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH INFO: Stack cost retrieval disabled, setting cost to 0 INFO: Deleting stack releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554 Successfully deleted stack releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554 [PostBuildScript] - [INFO] Executing post build scripts. 
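Stack teardown just above is handled by openstack-stack-delete.sh through lftools and python-openstackclient. For reference, the manual equivalent against the vex cloud would be roughly the following; this is a sketch of the underlying client call, not what the script literally runs (the stack name comes from the injected OS_STACK_NAME, and --wait blocks until Heat reports the delete complete):

    # Sketch of the manual equivalent of the traced stack deletion.
    export OS_CLOUD=vex
    openstack stack delete --yes --wait \
        releng-ovsdb-csit-3node-upstream-clustering-only-titanium-554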
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins4148497986042948048.sh ---> sysstat.sh [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins1153872413416657470.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=redhat + workspace=/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + rpm -qa + sort + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium ']' + mkdir -p /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/archives/ [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins585200027762531573.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH INFO: Running in OpenStack, capturing instance metadata [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins4614389938470126245.sh provisioning config files... Could not find credentials [logs] for ovsdb-csit-3node-upstream-clustering-only-titanium #554 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium@tmp/config6177880380970294779tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] Run condition [Regular expression match] enabling perform for step [Provide Configuration files] provisioning config files... copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins4537129362507974901.sh ---> create-netrc.sh WARN: Log server credential not found. [ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins167218992647858137.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... 
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins15560514656471262988.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash /tmp/jenkins8903535317186878101.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
  3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
lf-activate-venv(): INFO: Attempting to install with network-safe options...
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH
DEBUG: total: 0
INFO: Retrieving Stack Cost...
INFO: Retrieving Pricing Info for: v3-standard-2
INFO: Archiving Costs
[ovsdb-csit-3node-upstream-clustering-only-titanium] $ /bin/bash -l /tmp/jenkins3746948094504168360.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
  3.10.13
* 3.11.7 (set by /w/workspace/ovsdb-csit-3node-upstream-clustering-only-titanium/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ptD2 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
lf-activate-venv(): INFO: Attempting to install with network-safe options...
lf-activate-venv(): INFO: Base packages installed successfully
lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-ptD2/bin to PATH
WARNING: Nexus logging server not set
INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/ovsdb-csit-3node-upstream-clustering-only-titanium/554/
INFO: archiving logs to S3
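Note: the actual upload above is performed by lftools using the [jenkins-s3-log-ship] credentials provisioned earlier. Purely as an illustration of what lands under that S3 path, an equivalent manual upload with the AWS CLI could look like the following; the bucket name is a placeholder, not taken from this job:

    # Hypothetical equivalent of the log-shipping step; <logs-bucket> is a placeholder.
    aws s3 sync "${WORKSPACE}/archives/" \
        "s3://<logs-bucket>/logs/releng/vex-yul-odl-jenkins-1/ovsdb-csit-3node-upstream-clustering-only-titanium/554/" \
        --no-progress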
---> uname -a:
Linux prd-centos8-robot-2c-8g-2091.novalocal 4.18.0-553.5.1.el8.x86_64 #1 SMP Tue May 21 05:46:01 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           2
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0,1
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
2

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.8G     0  3.8G   0% /dev/shm
tmpfs           3.8G   17M  3.8G   1% /run
tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/vda1        40G  8.4G   32G  21% /
tmpfs           770M     0  770M   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:           7697         642        4832          19        2222        6756
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:8f:34:76 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    altname ens3
    inet 10.30.171.230/23 brd 10.30.171.255 scope global dynamic noprefixroute eth0
       valid_lft 85386sec preferred_lft 85386sec
    inet6 fe80::f816:3eff:fe8f:3476/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.18.0-553.5.1.el8.x86_64 (centos-stream-8-robot-7d7a37eb-bc14-4dd6-9530-dc22c5eae738.noval)  11/29/2025  _x86_64_  (2 CPU)

00:45:07     LINUX RESTART      (2 CPU)

12:46:01 AM       tps      rtps      wtps   bread/s   bwrtn/s
12:47:01 AM    118.59     37.04     81.56   5482.97  12059.31
12:48:01 AM     78.87      0.72     78.16     52.92   9861.01
12:49:01 AM     26.91      0.42     26.50     64.79   2465.52
12:50:01 AM     89.92      6.97     82.95   1292.98   8757.06
12:51:01 AM     11.70      0.00     11.70      0.00   1136.84
12:52:01 AM      2.23      0.00      2.23      0.00     66.44
12:53:01 AM      0.10      0.00      0.10      0.00      1.43
12:54:01 AM      0.60      0.25      0.35     13.60     73.38
12:55:01 AM      2.03      0.00      2.03      0.00    164.84
12:56:01 AM      0.35      0.00      0.35      0.00    120.43
12:57:01 AM      0.43      0.00      0.43      0.00    107.78
12:58:01 AM      0.20      0.00      0.20      0.00     75.87
12:59:01 AM      0.23      0.00      0.23      0.00    124.51
01:00:01 AM      0.35      0.00      0.35      0.00    138.70
01:01:01 AM     16.28      0.37     15.91     37.33    699.08
01:02:01 AM     32.71      0.58     32.13     65.56   2251.52
Average:        23.85      2.90     20.95    438.15   2381.55

12:46:01 AM kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
12:47:01 AM   5310540   7032020   2571884     32.63      2688   1903800    737784      8.26    187452   2061928    185188
12:48:01 AM   5126296   6968008   2756128     34.97      2688   2017424    771472      8.64    199396   2199408     42008
12:49:01 AM   5096156   6992784   2786268     35.35      2688   2069576    759900      8.51    228208   2196624     60276
12:50:01 AM   4917112   7013916   2965312     37.62      2688   2263416    698028      7.82    260924   2318296     26976
12:51:01 AM   4916364   7013064   2966060     37.63      2688   2263428    698028      7.82    260944   2318428         4
12:52:01 AM   4916820   7013520   2965604     37.62      2688   2263428    698028      7.82    260948   2318124         4
12:53:01 AM   4916464   7013164   2965960     37.63      2688   2263428    698028      7.82    260952   2318524         4
12:54:01 AM   4867504   6966668   3014920     38.25      2688   2265924    800844      8.97    261292   2366728       104
12:55:01 AM   4861512   6964272   3020912     38.32      2688   2269504    800844      8.97    261376   2372812       696
12:56:01 AM   4855172   6960784   3027252     38.41      2688   2272364    818112      9.16    261376   2379204        32
12:57:01 AM   4850472   6960292   3031952     38.46      2688   2276548    837560      9.38    261376   2383708      1136
12:58:01 AM   4847012   6959996   3035412     38.51      2688   2279752    852996      9.55    261376   2386848      2104
12:59:01 AM   4841148   6957164   3041276     38.58      2688   2282744    874772      9.79    261376   2392592      1392
01:00:01 AM   4836616   6955896   3045808     38.64      2688   2286056    913064     10.22    261376   2396640       640
01:01:01 AM   4962108   6927036   2920316     37.05      2688   2137308    790712      8.85    323224   2231152     55576
01:02:01 AM   4932004   6900556   2950420     37.43      2688   2141684    799712      8.95    483300   2106616     26260
Average:      4940831   6974946   2941593     37.32      2688   2203524    784368      8.78    268431   2296727     25150

12:46:01 AM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
12:47:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:47:01 AM      eth0    356.76    218.78   1880.03     57.74      0.00      0.00      0.00      0.00
12:48:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:48:01 AM      eth0     56.45     40.87    631.77      7.68      0.00      0.00      0.00      0.00
12:49:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:49:01 AM      eth0     16.71     14.88      9.57      5.23      0.00      0.00      0.00      0.00
12:50:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:50:01 AM      eth0    490.67    366.24    423.02     90.13      0.00      0.00      0.00      0.00
12:51:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:51:01 AM      eth0      5.63      4.47      1.98      1.34      0.00      0.00      0.00      0.00
12:52:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:52:01 AM      eth0      2.55      2.03      0.57      0.49      0.00      0.00      0.00      0.00
12:53:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:53:01 AM      eth0      3.70      2.17      0.89      0.68      0.00      0.00      0.00      0.00
12:54:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:54:01 AM      eth0     60.46     49.43     18.62      5.17      0.00      0.00      0.00      0.00
12:55:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:55:01 AM      eth0    211.88    212.38     41.68     15.94      0.00      0.00      0.00      0.00
12:56:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:56:01 AM      eth0    134.99    135.65     28.50     10.61      0.00      0.00      0.00      0.00
12:57:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:57:01 AM      eth0    198.20    199.40     43.70     15.32      0.00      0.00      0.00      0.00
12:58:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:58:01 AM      eth0    165.49    166.16     34.51     12.62      0.00      0.00      0.00      0.00
12:59:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:59:01 AM      eth0    124.85    125.20     27.47     10.23      0.00      0.00      0.00      0.00
01:00:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:00:01 AM      eth0    161.70    162.81     33.56     12.15      0.00      0.00      0.00      0.00
01:01:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:01:01 AM      eth0    153.34    116.30    149.56    108.00      0.00      0.00      0.00      0.00
01:02:01 AM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:02:01 AM      eth0     18.85     17.46     15.14      8.68      0.00      0.00      0.00      0.00
Average:           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         eth0    135.13    114.63    208.79     22.63      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.18.0-553.5.1.el8.x86_64 (centos-stream-8-robot-7d7a37eb-bc14-4dd6-9530-dc22c5eae738.noval)  11/29/2025  _x86_64_  (2 CPU)

00:45:07     LINUX RESTART      (2 CPU)

12:46:01 AM  CPU    %user    %nice  %system  %iowait   %steal    %idle
12:47:01 AM  all    38.53     0.04     6.92     5.13     0.11    49.27
12:47:01 AM    0    36.74     0.03     5.92     4.77     0.10    52.43
12:47:01 AM    1    40.32     0.05     7.91     5.50     0.12    46.10
12:48:01 AM  all    23.26     0.00     3.22     3.57     0.07    69.88
12:48:01 AM    0    18.73     0.00     2.85     3.12     0.07    75.23
12:48:01 AM    1    27.80     0.00     3.59     4.02     0.07    64.53
12:49:01 AM  all    22.08     0.00     3.04     0.31     0.07    74.51
12:49:01 AM    0    23.56     0.00     3.07     0.43     0.08    72.85
12:49:01 AM    1    20.60     0.00     3.00     0.18     0.05    76.16
12:50:01 AM  all    29.20     0.00     5.72     1.12     0.08    63.88
12:50:01 AM    0    25.50     0.00     5.49     1.04     0.07    67.89
12:50:01 AM    1    32.81     0.00     5.94     1.19     0.10    59.96
12:51:01 AM  all     0.30     0.00     0.23     0.13     0.03    99.32
12:51:01 AM    0     0.32     0.00     0.23     0.15     0.03    99.26
12:51:01 AM    1     0.28     0.00     0.22     0.10     0.02    99.38
12:52:01 AM  all     0.33     0.00     0.08     0.02     0.03    99.54
12:52:01 AM    0     0.48     0.00     0.10     0.00     0.03    99.38
12:52:01 AM    1     0.17     0.00     0.07     0.03     0.03    99.70
12:53:01 AM  all     0.33     0.00     0.08     0.00     0.03    99.57
12:53:01 AM    0     0.08     0.00     0.07     0.00     0.03    99.82
12:53:01 AM    1     0.57     0.00     0.08     0.00     0.03    99.32
12:54:01 AM  all     6.13     0.00     0.62     0.01     0.05    93.19
12:54:01 AM    0     2.16     0.00     0.42     0.02     0.05    97.36
12:54:01 AM    1    10.11     0.00     0.82     0.00     0.05    89.02
12:55:01 AM  all    13.14     0.00     1.24     0.03     0.07    85.52
12:55:01 AM    0     9.04     0.00     1.10     0.00     0.07    89.79
12:55:01 AM    1    17.26     0.00     1.38     0.07     0.07    81.22
12:56:01 AM  all     9.81     0.00     0.89     0.00     0.08    89.22
12:56:01 AM    0    10.78     0.00     0.96     0.00     0.08    88.18
12:56:01 AM    1     8.84     0.00     0.82     0.00     0.08    90.26
12:57:01 AM  all    12.25     0.00     1.27     0.00     0.08    86.40
12:57:01 AM    0     9.97     0.00     1.26     0.00     0.08    88.68
12:57:01 AM    1    14.53     0.00     1.29     0.00     0.07    84.11

12:57:01 AM  CPU    %user    %nice  %system  %iowait   %steal    %idle
12:58:01 AM  all    10.83     0.00     1.03     0.01     0.07    88.07
12:58:01 AM    0     9.37     0.00     1.08     0.02     0.07    89.47
12:58:01 AM    1    12.30     0.00     0.98     0.00     0.07    86.65
12:59:01 AM  all    11.05     0.00     0.89     0.00     0.07    87.99
12:59:01 AM    0     9.82     0.00     0.94     0.00     0.07    89.17
12:59:01 AM    1    12.27     0.00     0.84     0.00     0.07    86.82
01:00:01 AM  all     9.76     0.00     0.98     0.00     0.07    89.20
01:00:01 AM    0     5.49     0.00     0.86     0.00     0.07    93.59
01:00:01 AM    1    14.06     0.00     1.10     0.00     0.07    84.78
01:01:01 AM  all    23.49     0.00     2.81     0.12     0.08    73.49
01:01:01 AM    0    12.03     0.00     2.18     0.13     0.08    85.58
01:01:01 AM    1    34.96     0.00     3.45     0.10     0.08    61.41
01:02:01 AM  all    32.49     0.00     3.93     0.38     0.08    63.12
01:02:01 AM    0    12.99     0.00     3.36     0.13     0.07    83.46
01:02:01 AM    1    51.98     0.00     4.50     0.62     0.10    42.80
Average:     all    15.19     0.00     2.06     0.68     0.07    82.01
Average:       0    11.68     0.00     1.87     0.61     0.07    85.77
Average:       1    18.70     0.00     2.25     0.74     0.07    78.24
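Note: the report above comes from sysstat collection on the node. Assuming the default daily data file location on this CentOS 8 host (/var/log/sa/saDD; sa29 here is inferred from the 11/29 date in the header, not confirmed by the log), the same views can be regenerated with:

    # Regenerate the same sysstat views from the collected data file.
    sar -b -r -n DEV -f /var/log/sa/sa29   # I/O, memory, and per-interface network statistics
    sar -P ALL -f /var/log/sa/sa29         # per-CPU utilization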