Started by upstream project "integration-distribution-mri-test-titanium" build number 63 originally caused by: Started by timer Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-centos8-robot-2c-8g-2670 (centos8-robot-2c-8g) in workspace /w/workspace/controller-csit-3node-clustering-ask-all-titanium [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-P9TuoTVdMcRH/agent.5160 SSH_AGENT_PID=5161 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/controller-csit-3node-clustering-ask-all-titanium@tmp/private_key_5853807083702754787.key (/w/workspace/controller-csit-3node-clustering-ask-all-titanium@tmp/private_key_5853807083702754787.key) [ssh-agent] Using credentials jenkins (Release Engineering Jenkins Key) The recommended git tool is: NONE using credential opendaylight-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://devvexx.opendaylight.org/mirror/integration/test > git init /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test # timeout=10 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/integration/test > git --version # timeout=10 > git --version # 'git version 2.43.0' using GIT_SSH to set credentials Release Engineering Jenkins Key [INFO] Currently running in a labeled security context [INFO] Currently SELinux is 'enforcing' on the host > /usr/bin/chcon --type=ssh_home_t /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test@tmp/jenkins-gitclient-ssh2679334868356292333.key Verifying host key using known hosts file You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/integration/test +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/integration/test # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/integration/test # timeout=10 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/integration/test using GIT_SSH to set credentials Release Engineering Jenkins Key [INFO] Currently running in a labeled security context [INFO] Currently SELinux is 'enforcing' on the host > /usr/bin/chcon --type=ssh_home_t /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test@tmp/jenkins-gitclient-ssh18443504702377583202.key Verifying host key using known hosts file You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
> git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/integration/test master # timeout=10 > git rev-parse FETCH_HEAD^{commit} # timeout=10 Checking out Revision 6c60ddfc8acc87c45ab0767b2ba1d2c4e7d34388 (origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 6c60ddfc8acc87c45ab0767b2ba1d2c4e7d34388 # timeout=10 Commit message: "Adapt test for new pce-allocation field" > git rev-parse FETCH_HEAD^{commit} # timeout=10 > git rev-list --no-walk 62cb016f4f4171033927cf2ae7f4ac5095373e88 # timeout=10 No emails were triggered. provisioning config files... copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins2657864160914256588.sh ---> python-tools-install.sh Setup pyenv: system * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.13 (set by /opt/pyenv/version) * 3.11.7 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-hM1z lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH Generating Requirements File ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. httplib2 0.31.0 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible. 
Python 3.11.7 pip 25.3 from /tmp/venv-hM1z/lib/python3.11/site-packages/pip (python 3.11) appdirs==1.4.4 argcomplete==3.6.3 aspy.yaml==1.3.0 attrs==25.4.0 autopage==0.5.2 beautifulsoup4==4.14.3 boto3==1.41.5 botocore==1.41.5 bs4==0.0.2 cachetools==6.2.2 certifi==2025.11.12 cffi==2.0.0 cfgv==3.5.0 chardet==5.2.0 charset-normalizer==3.4.4 click==8.3.1 cliff==4.12.0 cmd2==2.7.0 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.3.1 distlib==0.4.0 dnspython==2.8.0 docker==7.1.0 dogpile.cache==1.5.0 durationpy==0.10 email-validator==2.3.0 filelock==3.20.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.45 google-auth==2.43.0 httplib2==0.31.0 identify==2.6.15 idna==3.11 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.25.1 jsonschema-specifications==2025.9.1 keystoneauth1==5.12.0 kubernetes==34.1.0 lftools==0.37.16 lxml==6.0.2 markdown-it-py==4.0.0 MarkupSafe==3.0.3 mdurl==0.1.2 msgpack==1.1.2 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.3.1 openstacksdk==4.8.0 os-service-types==1.8.2 osc-lib==4.2.0 oslo.config==10.1.0 oslo.context==6.2.0 oslo.i18n==6.7.1 oslo.log==7.2.1 oslo.serialization==5.8.0 oslo.utils==9.2.0 packaging==25.0 pbr==7.0.3 platformdirs==4.5.0 prettytable==3.17.0 psutil==7.1.3 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.23 pygerrit2==2.0.15 PyGithub==2.8.1 Pygments==2.19.2 PyJWT==2.10.1 PyNaCl==1.6.1 pyparsing==2.4.7 pyperclip==1.11.0 pyrsistent==0.20.0 python-cinderclient==9.8.0 python-dateutil==2.9.0.post0 python-heatclient==4.3.0 python-jenkins==1.8.3 python-keystoneclient==5.7.0 python-magnumclient==4.9.0 python-openstackclient==8.2.0 python-swiftclient==4.9.0 PyYAML==6.0.3 referencing==0.37.0 requests==2.32.5 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rich==14.2.0 rich-argparse==1.7.2 rpds-py==0.30.0 rsa==4.9.1 ruamel.yaml==0.18.16 ruamel.yaml.clib==0.2.15 s3transfer==0.15.0 simplejson==3.20.2 six==1.17.0 smmap==5.0.2 soupsieve==2.8 stevedore==5.6.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.15.0 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.35.4 wcwidth==0.2.14 websocket-client==1.9.0 wrapt==2.0.1 xdg==6.0.0 xmltodict==1.0.2 yq==3.4.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content OS_STACK_TEMPLATE=csit-2-instance-type.yaml OS_CLOUD=vex OS_STACK_NAME=releng-controller-csit-3node-clustering-ask-all-titanium-63 OS_STACK_TEMPLATE_DIR=openstack-hot [EnvInject] - Variables injected successfully. provisioning config files... 
copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins8494263888154997418.sh ---> Create parameters file for OpenStack HOT OpenStack Heat parameters generated ----------------------------------- parameters: vm_0_count: '3' vm_0_flavor: 'v3-standard-4' vm_0_image: 'ZZCI - Ubuntu 22.04 - builder - x86_64 - 20250917-133034.447' vm_1_count: '0' vm_1_flavor: 'v3-standard-2' vm_1_image: 'ZZCI - Ubuntu 22.04 - mininet-ovs-217 - x86_64 - 20250917-133034.654' job_name: '19183-63' silo: 'releng' [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash -l /tmp/jenkins1434489912807764114.sh ---> Create HEAT stack + source /home/jenkins/lf-env.sh + lf-activate-venv --python python3 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq ++ mktemp -d /tmp/venv-XXXX + lf_venv=/tmp/venv-xjZb + local venv_file=/tmp/.os_lf_venv + local python=python3 + local options + local set_path=true + local install_args= ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --python python3 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq + options=' --python '\''python3'\'' -- '\''lftools[openstack]'\'' '\''kubernetes'\'' '\''niet'\'' '\''python-heatclient'\'' '\''python-openstackclient'\'' '\''python-magnumclient'\'' '\''urllib3~=1.26.15'\'' '\''yq'\''' + eval set -- ' --python '\''python3'\'' -- '\''lftools[openstack]'\'' '\''kubernetes'\'' '\''niet'\'' '\''python-heatclient'\'' '\''python-openstackclient'\'' '\''python-magnumclient'\'' '\''urllib3~=1.26.15'\'' '\''yq'\''' ++ set -- --python python3 -- 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq + true + case $1 in + python=python3 + shift 2 + true + case $1 in + shift + break + case $python in + local pkg_list= + [[ -d /opt/pyenv ]] + echo 'Setup pyenv:' Setup pyenv: + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin + pyenv versions system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) + command -v pyenv ++ pyenv init - --no-rehash + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; fi; done; echo "${paths[*]}"'\'')" export PATH="/opt/pyenv/shims:${PATH}" export PYENV_SHELL=bash source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' pyenv() { local command command="${1:-}" if [ "$#" -gt 0 ]; then shift fi case "$command" in rehash|shell) eval "$(pyenv "sh-$command" "$@")" ;; *) command pyenv "$command" "$@" ;; esac }' +++ bash --norc -ec 'IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; fi; done; echo "${paths[*]}"' ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin ++ export 
PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin ++ export PYENV_SHELL=bash ++ PYENV_SHELL=bash ++ source /opt/pyenv/libexec/../completions/pyenv.bash +++ complete -F _pyenv pyenv ++ lf-pyver python3 ++ local py_version_xy=python3 ++ local py_version_xyz= ++ awk '{ print $1 }' ++ pyenv versions ++ local command ++ command=versions ++ '[' 1 -gt 0 ']' ++ shift ++ case "$command" in ++ command pyenv versions ++ pyenv versions ++ sed 's/^[ *]* //' ++ grep -E '^[0-9.]*[0-9]$' ++ [[ ! -s /tmp/.pyenv_versions ]] +++ grep '^3' /tmp/.pyenv_versions +++ sort -V +++ tail -n 1 ++ py_version_xyz=3.11.7 ++ [[ -z 3.11.7 ]] ++ echo 3.11.7 ++ return 0 + pyenv local 3.11.7 + local command + command=local + '[' 2 -gt 0 ']' + shift + case "$command" in + command pyenv local 3.11.7 + pyenv local 3.11.7 + for arg in "$@" + case $arg in + pkg_list+='lftools[openstack] ' + for arg in "$@" + case $arg in + pkg_list+='kubernetes ' + for arg in "$@" + case $arg in + pkg_list+='niet ' + for arg in "$@" + case $arg in + pkg_list+='python-heatclient ' + for arg in "$@" + case $arg in + pkg_list+='python-openstackclient ' + for arg in "$@" + case $arg in + pkg_list+='python-magnumclient ' + for arg in "$@" + case $arg in + pkg_list+='urllib3~=1.26.15 ' + for arg in "$@" + case $arg in + pkg_list+='yq ' + [[ -f /tmp/.os_lf_venv ]] ++ cat /tmp/.os_lf_venv + lf_venv=/tmp/venv-hM1z + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from' file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)' lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) + local 'pip_opts=--upgrade --quiet' + pip_opts='--upgrade --quiet --trusted-host pypi.org' + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org' + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org' + [[ -n '' ]] + [[ -n '' ]] + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...' lf-activate-venv(): INFO: Attempting to install with network-safe options... 
+ /tmp/venv-hM1z/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv + echo 'lf-activate-venv(): INFO: Base packages installed successfully' lf-activate-venv(): INFO: Base packages installed successfully + [[ -z lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq ]] + echo 'lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq ' lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq + /tmp/venv-hM1z/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager 'lftools[openstack]' kubernetes niet python-heatclient python-openstackclient python-magnumclient urllib3~=1.26.15 yq + type python3 + true + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH' lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH + PATH=/tmp/venv-hM1z/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/home/jenkins/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/puppetlabs/bin + return 0 + openstack --os-cloud vex limits show --absolute +--------------------------+---------+ | Name | Value | +--------------------------+---------+ | maxTotalInstances | -1 | | maxTotalCores | 450 | | maxTotalRAMSize | 1000000 | | maxServerMeta | 128 | | maxImageMeta | 128 | | maxPersonality | 5 | | maxPersonalitySize | 10240 | | maxTotalKeypairs | 100 | | maxServerGroups | 10 | | maxServerGroupMembers | 10 | | maxTotalFloatingIps | -1 | | maxSecurityGroups | -1 | | maxSecurityGroupRules | -1 | | totalRAMUsed | 499712 | | totalCoresUsed | 122 | | totalInstancesUsed | 57 | | totalFloatingIpsUsed | 0 | | totalSecurityGroupsUsed | 0 | | totalServerGroupsUsed | 0 | | maxTotalVolumes | -1 | | maxTotalSnapshots | 10 | | maxTotalVolumeGigabytes | 4096 | | maxTotalBackups | 10 | | maxTotalBackupGigabytes | 1000 | | totalVolumesUsed | 3 | | totalGigabytesUsed | 60 | | totalSnapshotsUsed | 0 | | totalBackupsUsed | 0 | | totalBackupGigabytesUsed | 0 | +--------------------------+---------+ + pushd /opt/ciman/openstack-hot /opt/ciman/openstack-hot /w/workspace/controller-csit-3node-clustering-ask-all-titanium + lftools openstack --os-cloud vex stack create releng-controller-csit-3node-clustering-ask-all-titanium-63 csit-2-instance-type.yaml /w/workspace/controller-csit-3node-clustering-ask-all-titanium/stack-parameters.yaml Creating stack releng-controller-csit-3node-clustering-ask-all-titanium-63 Waiting to initialize infrastructure... Waiting to initialize infrastructure... Stack initialization successful. 
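For reference, the stack above is created by lftools from csit-2-instance-type.yaml plus the generated stack-parameters.yaml. A roughly equivalent manual invocation with the plain OpenStack client is sketched below; it is illustrative only (parameter values are copied from the generated parameters shown earlier, remaining vm_1_* parameters omitted), and the job itself drives this step through lftools rather than these commands.

# Sketch: create the same Heat stack by hand via python-heatclient's OSC plugin (not what the job runs).
openstack --os-cloud vex stack create \
  --template /opt/ciman/openstack-hot/csit-2-instance-type.yaml \
  --parameter job_name=19183-63 \
  --parameter silo=releng \
  --parameter vm_0_count=3 \
  --parameter vm_0_flavor=v3-standard-4 \
  --parameter "vm_0_image=ZZCI - Ubuntu 22.04 - builder - x86_64 - 20250917-133034.447" \
  --parameter vm_1_count=0 \
  --wait \
  releng-controller-csit-3node-clustering-ask-all-titanium-63

# The node IPs consumed later come from the stack outputs, e.g.:
openstack --os-cloud vex stack output show \
  releng-controller-csit-3node-clustering-ask-all-titanium-63 vm_0_ips -f json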
------------------------------------ Stack Details ------------------------------------ {'added': None, 'capabilities': [], 'created_at': '2025-11-30T23:03:07Z', 'deleted': None, 'deleted_at': None, 'description': 'No description', 'environment': None, 'environment_files': None, 'files': None, 'files_container': None, 'id': 'ece50ee0-4054-4fb6-ba7e-3810fb51af04', 'is_rollback_disabled': True, 'links': [{'href': 'https://orchestration.public.mtl1.vexxhost.net/v1/12c36e260d8e4bb2913965203b1b491f/stacks/releng-controller-csit-3node-clustering-ask-all-titanium-63/ece50ee0-4054-4fb6-ba7e-3810fb51af04', 'rel': 'self'}], 'location': Munch({'cloud': 'vex', 'region_name': 'ca-ymq-1', 'zone': None, 'project': Munch({'id': '12c36e260d8e4bb2913965203b1b491f', 'name': '61975f2c-7c17-4d69-82fa-c3ae420ad6fd', 'domain_id': None, 'domain_name': 'Default'})}), 'name': 'releng-controller-csit-3node-clustering-ask-all-titanium-63', 'notification_topics': [], 'outputs': [{'description': 'IP addresses of the 1st vm types', 'output_key': 'vm_0_ips', 'output_value': ['10.30.170.77', '10.30.171.188', '10.30.171.110']}, {'description': 'IP addresses of the 2nd vm types', 'output_key': 'vm_1_ips', 'output_value': []}], 'owner_id': ****, 'parameters': {'OS::project_id': '12c36e260d8e4bb2913965203b1b491f', 'OS::stack_id': 'ece50ee0-4054-4fb6-ba7e-3810fb51af04', 'OS::stack_name': 'releng-controller-csit-3node-clustering-ask-all-titanium-63', 'job_name': '19183-63', 'silo': 'releng', 'vm_0_count': '3', 'vm_0_flavor': 'v3-standard-4', 'vm_0_image': 'ZZCI - Ubuntu 22.04 - builder - x86_64 - ' '20250917-133034.447', 'vm_1_count': '0', 'vm_1_flavor': 'v3-standard-2', 'vm_1_image': 'ZZCI - Ubuntu 22.04 - mininet-ovs-217 - x86_64 ' '- 20250917-133034.654'}, 'parent_id': None, 'replaced': None, 'status': 'CREATE_COMPLETE', 'status_reason': 'Stack CREATE completed successfully', 'tags': [], 'template': None, 'template_description': 'No description', 'template_url': None, 'timeout_mins': 15, 'unchanged': None, 'updated': None, 'updated_at': None, 'user_project_id': 'b9a6c3a552a643028bbc3172bb63991c'} ------------------------------------ + popd /w/workspace/controller-csit-3node-clustering-ask-all-titanium [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash -l /tmp/jenkins7768862173126527673.sh ---> Copy SSH public keys to CSIT lab Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes python-heatclient python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH SSH not responding on 10.30.171.188. Retrying in 10 seconds... SSH not responding on 10.30.171.110. Retrying in 10 seconds... Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. releng-19183-63-0-builder-0 Successfully copied public keys to slave 10.30.170.77 Process 6493 ready. Ping to 10.30.171.188 successful. Ping to 10.30.171.110 successful. SSH not responding on 10.30.171.110. Retrying in 10 seconds... Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. 
releng-19183-63-0-builder-1 Successfully copied public keys to slave 10.30.171.188 Process 6494 ready. Ping to 10.30.171.110 successful. Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. releng-19183-63-0-builder-2 Successfully copied public keys to slave 10.30.171.110 Process 6496 ready. SSH ready on all stack servers. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash -l /tmp/jenkins809729274473362003.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-vSA6 lf-activate-venv(): INFO: Save venv in file: /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.robot_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: setuptools wheel lf-activate-venv(): INFO: Adding /tmp/venv-vSA6/bin to PATH + echo 'Installing Python Requirements' Installing Python Requirements + cat + python -m pip install -r requirements.txt Looking in indexes: https://nexus3.opendaylight.org/repository/PyPi/simple Collecting docker-py (from -r requirements.txt (line 1)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/docker-py/1.10.6/docker_py-1.10.6-py2.py3-none-any.whl (50 kB) Collecting ipaddr (from -r requirements.txt (line 2)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/ipaddr/2.2.0/ipaddr-2.2.0.tar.gz (26 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting netaddr (from -r requirements.txt (line 3)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/netaddr/1.3.0/netaddr-1.3.0-py3-none-any.whl (2.3 MB) Collecting netifaces (from -r requirements.txt (line 4)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/netifaces/0.11.0/netifaces-0.11.0.tar.gz (30 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting pyhocon (from -r requirements.txt (line 5)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyhocon/0.3.61/pyhocon-0.3.61-py3-none-any.whl (25 kB) Collecting requests (from -r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/requests/2.32.5/requests-2.32.5-py3-none-any.whl (64 kB) Collecting robotframework (from -r requirements.txt (line 7)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework/7.3.2/robotframework-7.3.2-py3-none-any.whl (795 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 795.1/795.1 kB 38.9 MB/s 0:00:00 Collecting robotframework-httplibrary (from -r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-httplibrary/0.4.2/robotframework-httplibrary-0.4.2.tar.gz (9.1 kB) Installing build 
dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting robotframework-requests==0.9.7 (from -r requirements.txt (line 9)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-requests/0.9.7/robotframework_requests-0.9.7-py3-none-any.whl (21 kB) Collecting robotframework-selenium2library (from -r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-selenium2library/3.0.0/robotframework_selenium2library-3.0.0-py2.py3-none-any.whl (6.2 kB) Collecting robotframework-sshlibrary==3.8.0 (from -r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-sshlibrary/3.8.0/robotframework-sshlibrary-3.8.0.tar.gz (51 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting scapy (from -r requirements.txt (line 12)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/scapy/2.6.1/scapy-2.6.1-py3-none-any.whl (2.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 57.4 MB/s 0:00:00 Collecting jsonpath-rw (from -r requirements.txt (line 15)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpath-rw/1.4.0/jsonpath-rw-1.4.0.tar.gz (13 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting elasticsearch (from -r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch/9.2.0/elasticsearch-9.2.0-py3-none-any.whl (960 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 960.5/960.5 kB 36.1 MB/s 0:00:00 Collecting elasticsearch-dsl (from -r requirements.txt (line 19)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.18.0/elasticsearch_dsl-8.18.0-py3-none-any.whl (10 kB) Collecting pyangbind (from -r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyangbind/0.8.6/pyangbind-0.8.6-py3-none-any.whl (52 kB) Collecting isodate (from -r requirements.txt (line 25)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/isodate/0.7.2/isodate-0.7.2-py3-none-any.whl (22 kB) Collecting jmespath (from -r requirements.txt (line 28)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jmespath/1.0.1/jmespath-1.0.1-py3-none-any.whl (20 kB) Collecting jsonpatch (from -r requirements.txt (line 31)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpatch/1.33/jsonpatch-1.33-py2.py3-none-any.whl (12 kB) Collecting paramiko>=1.15.3 (from robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/paramiko/4.0.0/paramiko-4.0.0-py3-none-any.whl (223 kB) Collecting scp>=0.13.0 (from 
robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/scp/0.15.0/scp-0.15.0-py2.py3-none-any.whl (8.8 kB) Collecting docker-pycreds>=0.2.1 (from docker-py->-r requirements.txt (line 1)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/docker-pycreds/0.4.0/docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB) Collecting six>=1.4.0 (from docker-py->-r requirements.txt (line 1)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/six/1.17.0/six-1.17.0-py2.py3-none-any.whl (11 kB) Collecting websocket-client>=0.32.0 (from docker-py->-r requirements.txt (line 1)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/websocket-client/1.9.0/websocket_client-1.9.0-py3-none-any.whl (82 kB) Collecting pyparsing<4,>=2 (from pyhocon->-r requirements.txt (line 5)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pyparsing/3.2.5/pyparsing-3.2.5-py3-none-any.whl (113 kB) Collecting charset_normalizer<4,>=2 (from requests->-r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/charset-normalizer/3.4.4/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (151 kB) Collecting idna<4,>=2.5 (from requests->-r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/idna/3.11/idna-3.11-py3-none-any.whl (71 kB) Collecting urllib3<3,>=1.21.1 (from requests->-r requirements.txt (line 6)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/urllib3/2.5.0/urllib3-2.5.0-py3-none-any.whl (129 kB) Collecting certifi>=2017.4.17 (from requests->-r requirements.txt (line 6)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/certifi/2025.11.12/certifi-2025.11.12-py3-none-any.whl (159 kB) Collecting webtest>=2.0 (from robotframework-httplibrary->-r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/webtest/3.0.7/webtest-3.0.7-py3-none-any.whl (32 kB) Collecting jsonpointer (from robotframework-httplibrary->-r requirements.txt (line 8)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/jsonpointer/3.0.0/jsonpointer-3.0.0-py2.py3-none-any.whl (7.6 kB) Collecting robotframework-seleniumlibrary>=3.0.0 (from robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-seleniumlibrary/6.8.0/robotframework_seleniumlibrary-6.8.0-py3-none-any.whl (104 kB) Collecting ply (from jsonpath-rw->-r requirements.txt (line 15)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/ply/3.11/ply-3.11-py2.py3-none-any.whl (49 kB) Collecting decorator (from jsonpath-rw->-r requirements.txt (line 15)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/decorator/5.2.1/decorator-5.2.1-py3-none-any.whl (9.2 kB) Collecting anyio (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/anyio/4.12.0/anyio-4.12.0-py3-none-any.whl (113 kB) Collecting elastic-transport<10,>=9.2.0 (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elastic-transport/9.2.0/elastic_transport-9.2.0-py3-none-any.whl (65 kB) Collecting python-dateutil (from elasticsearch->-r requirements.txt (line 18)) Using cached 
https://nexus3.opendaylight.org/repository/PyPi/packages/python-dateutil/2.9.0.post0/python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB) Collecting sniffio (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/sniffio/1.3.1/sniffio-1.3.1-py3-none-any.whl (10 kB) Collecting typing-extensions (from elasticsearch->-r requirements.txt (line 18)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/typing-extensions/4.15.0/typing_extensions-4.15.0-py3-none-any.whl (44 kB) Collecting elasticsearch (from -r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch/8.19.2/elasticsearch-8.19.2-py3-none-any.whl (949 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 949.7/949.7 kB 43.3 MB/s 0:00:00 INFO: pip is looking at multiple versions of elasticsearch-dsl to determine which version is compatible with other requirements. This could take a while. Collecting elasticsearch-dsl (from -r requirements.txt (line 19)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.17.1/elasticsearch_dsl-8.17.1-py3-none-any.whl (158 kB) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.17.0/elasticsearch_dsl-8.17.0-py3-none-any.whl (158 kB) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.16.0/elasticsearch_dsl-8.16.0-py3-none-any.whl (158 kB) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elasticsearch-dsl/8.15.4/elasticsearch_dsl-8.15.4-py3-none-any.whl (104 kB) Collecting elastic-transport<9,>=8.15.1 (from elasticsearch->-r requirements.txt (line 18)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/elastic-transport/8.17.1/elastic_transport-8.17.1-py3-none-any.whl (64 kB) Collecting pyang (from pyangbind->-r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pyang/2.7.1/pyang-2.7.1-py2.py3-none-any.whl (598 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 598.5/598.5 kB 29.8 MB/s 0:00:00 Collecting lxml (from pyangbind->-r requirements.txt (line 22)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/lxml/6.0.2/lxml-6.0.2-cp311-cp311-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl (5.2 MB) Collecting regex (from pyangbind->-r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/regex/2025.11.3/regex-2025.11.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (800 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 800.4/800.4 kB 39.4 MB/s 0:00:00 Collecting enum34 (from pyangbind->-r requirements.txt (line 22)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/enum34/1.1.10/enum34-1.1.10-py3-none-any.whl (11 kB) Collecting bcrypt>=3.2 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/bcrypt/5.0.0/bcrypt-5.0.0-cp39-abi3-manylinux_2_28_x86_64.whl (278 kB) Collecting cryptography>=3.3 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/cryptography/46.0.3/cryptography-46.0.3-cp311-abi3-manylinux_2_28_x86_64.whl (4.5 MB) Collecting invoke>=2.0 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Downloading 
https://nexus3.opendaylight.org/repository/PyPi/packages/invoke/2.2.1/invoke-2.2.1-py3-none-any.whl (160 kB) Collecting pynacl>=1.5 (from paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pynacl/1.6.1/pynacl-1.6.1-cp38-abi3-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl (1.4 MB) Collecting cffi>=2.0.0 (from cryptography>=3.3->paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/cffi/2.0.0/cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (215 kB) Collecting pycparser (from cffi>=2.0.0->cryptography>=3.3->paramiko>=1.15.3->robotframework-sshlibrary==3.8.0->-r requirements.txt (line 11)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/pycparser/2.23/pycparser-2.23-py3-none-any.whl (118 kB) Collecting selenium>=4.3.0 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/selenium/4.38.0/selenium-4.38.0-py3-none-any.whl (9.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 87.9 MB/s 0:00:00 Collecting robotframework-pythonlibcore>=4.4.1 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/robotframework-pythonlibcore/4.4.1/robotframework_pythonlibcore-4.4.1-py2.py3-none-any.whl (12 kB) Collecting click>=8.0 (from robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/click/8.3.1/click-8.3.1-py3-none-any.whl (108 kB) Collecting trio<1.0,>=0.31.0 (from selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/trio/0.32.0/trio-0.32.0-py3-none-any.whl (512 kB) Collecting trio-websocket<1.0,>=0.12.2 (from selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/trio-websocket/0.12.2/trio_websocket-0.12.2-py3-none-any.whl (21 kB) Collecting attrs>=23.2.0 (from trio<1.0,>=0.31.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/attrs/25.4.0/attrs-25.4.0-py3-none-any.whl (67 kB) Collecting sortedcontainers (from trio<1.0,>=0.31.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/sortedcontainers/2.4.0/sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB) Collecting outcome (from trio<1.0,>=0.31.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/outcome/1.3.0.post0/outcome-1.3.0.post0-py2.py3-none-any.whl (10 kB) Collecting wsproto>=0.14 (from trio-websocket<1.0,>=0.12.2->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading 
https://nexus3.opendaylight.org/repository/PyPi/packages/wsproto/1.3.2/wsproto-1.3.2-py3-none-any.whl (24 kB) Collecting pysocks!=1.5.7,<2.0,>=1.5.6 (from urllib3[socks]<3.0,>=2.5.0->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/pysocks/1.7.1/PySocks-1.7.1-py3-none-any.whl (16 kB) Collecting WebOb>=1.2 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/webob/1.8.9/WebOb-1.8.9-py2.py3-none-any.whl (115 kB) Collecting waitress>=3.0.2 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/waitress/3.0.2/waitress-3.0.2-py3-none-any.whl (56 kB) Collecting beautifulsoup4 (from webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/beautifulsoup4/4.14.3/beautifulsoup4-4.14.3-py3-none-any.whl (107 kB) Collecting h11<1,>=0.16.0 (from wsproto>=0.14->trio-websocket<1.0,>=0.12.2->selenium>=4.3.0->robotframework-seleniumlibrary>=3.0.0->robotframework-selenium2library->-r requirements.txt (line 10)) Downloading https://nexus3.opendaylight.org/repository/PyPi/packages/h11/0.16.0/h11-0.16.0-py3-none-any.whl (37 kB) Collecting soupsieve>=1.6.1 (from beautifulsoup4->webtest>=2.0->robotframework-httplibrary->-r requirements.txt (line 8)) Using cached https://nexus3.opendaylight.org/repository/PyPi/packages/soupsieve/2.8/soupsieve-2.8-py3-none-any.whl (36 kB) Building wheels for collected packages: robotframework-sshlibrary, ipaddr, netifaces, robotframework-httplibrary, jsonpath-rw Building wheel for robotframework-sshlibrary (pyproject.toml): started Building wheel for robotframework-sshlibrary (pyproject.toml): finished with status 'done' Created wheel for robotframework-sshlibrary: filename=robotframework_sshlibrary-3.8.0-py3-none-any.whl size=55205 sha256=0f09625b54b127e778a4597ed8706bc409403a1421f7381b7f480febdf6a2dd3 Stored in directory: /home/jenkins/.cache/pip/wheels/f7/c9/b3/a977b7bcc410d45ae27d240df3d00a12585509180e373ecccc Building wheel for ipaddr (pyproject.toml): started Building wheel for ipaddr (pyproject.toml): finished with status 'done' Created wheel for ipaddr: filename=ipaddr-2.2.0-py3-none-any.whl size=18353 sha256=5ba9ced1a5adcabc77f7b54b1dbfe96b7c297fa1e79441db3651d29b2e8e3bd9 Stored in directory: /home/jenkins/.cache/pip/wheels/dc/6c/04/da2d847fa8d45c59af3e1d83e2acc29cb8adcbaf04c0898dbf Building wheel for netifaces (pyproject.toml): started Building wheel for netifaces (pyproject.toml): finished with status 'done' Created wheel for netifaces: filename=netifaces-0.11.0-cp311-cp311-linux_x86_64.whl size=41076 sha256=7133d58ae6c7a8c3fa938e8d955fec2193d3e00912dab97890f1419e37e317ec Stored in directory: /home/jenkins/.cache/pip/wheels/f8/18/88/e61d54b995bea304bdb1d040a92b72228a1bf72ca2a3eba7c9 Building wheel for robotframework-httplibrary (pyproject.toml): started Building wheel for robotframework-httplibrary (pyproject.toml): finished with status 'done' Created wheel for robotframework-httplibrary: filename=robotframework_httplibrary-0.4.2-py3-none-any.whl size=10014 sha256=df4cc0e6936d43594c156a7d90b1181e0c86318b5477ab70fd2247b5e5b02674 Stored in directory: /home/jenkins/.cache/pip/wheels/aa/bc/0d/9a20dd51effef392aae2733cb4c7b66c6fa29fca33d88b57ed Building wheel for 
jsonpath-rw (pyproject.toml): started Building wheel for jsonpath-rw (pyproject.toml): finished with status 'done' Created wheel for jsonpath-rw: filename=jsonpath_rw-1.4.0-py3-none-any.whl size=15176 sha256=287cb679142d91ddee9df4f4a6ce978a3daa9b0713fb5a714c420a970e2ef7ad Stored in directory: /home/jenkins/.cache/pip/wheels/f1/54/63/9a8da38cefae13755097b36cc852decc25d8ef69c37d58d4eb Successfully built robotframework-sshlibrary ipaddr netifaces robotframework-httplibrary jsonpath-rw Installing collected packages: sortedcontainers, ply, netifaces, ipaddr, enum34, websocket-client, WebOb, waitress, urllib3, typing-extensions, soupsieve, sniffio, six, scapy, robotframework-pythonlibcore, robotframework, regex, pysocks, pyparsing, pycparser, netaddr, lxml, jsonpointer, jmespath, isodate, invoke, idna, h11, decorator, click, charset_normalizer, certifi, bcrypt, attrs, wsproto, requests, python-dateutil, pyhocon, pyang, outcome, jsonpath-rw, jsonpatch, elastic-transport, docker-pycreds, cffi, beautifulsoup4, webtest, trio, robotframework-requests, pynacl, pyangbind, elasticsearch, docker-py, cryptography, trio-websocket, robotframework-httplibrary, paramiko, elasticsearch-dsl, selenium, scp, robotframework-sshlibrary, robotframework-seleniumlibrary, robotframework-selenium2library Successfully installed WebOb-1.8.9 attrs-25.4.0 bcrypt-5.0.0 beautifulsoup4-4.14.3 certifi-2025.11.12 cffi-2.0.0 charset_normalizer-3.4.4 click-8.3.1 cryptography-46.0.3 decorator-5.2.1 docker-py-1.10.6 docker-pycreds-0.4.0 elastic-transport-8.17.1 elasticsearch-8.19.2 elasticsearch-dsl-8.15.4 enum34-1.1.10 h11-0.16.0 idna-3.11 invoke-2.2.1 ipaddr-2.2.0 isodate-0.7.2 jmespath-1.0.1 jsonpatch-1.33 jsonpath-rw-1.4.0 jsonpointer-3.0.0 lxml-6.0.2 netaddr-1.3.0 netifaces-0.11.0 outcome-1.3.0.post0 paramiko-4.0.0 ply-3.11 pyang-2.7.1 pyangbind-0.8.6 pycparser-2.23 pyhocon-0.3.61 pynacl-1.6.1 pyparsing-3.2.5 pysocks-1.7.1 python-dateutil-2.9.0.post0 regex-2025.11.3 requests-2.32.5 robotframework-7.3.2 robotframework-httplibrary-0.4.2 robotframework-pythonlibcore-4.4.1 robotframework-requests-0.9.7 robotframework-selenium2library-3.0.0 robotframework-seleniumlibrary-6.8.0 robotframework-sshlibrary-3.8.0 scapy-2.6.1 scp-0.15.0 selenium-4.38.0 six-1.17.0 sniffio-1.3.1 sortedcontainers-2.4.0 soupsieve-2.8 trio-0.32.0 trio-websocket-0.12.2 typing-extensions-4.15.0 urllib3-2.5.0 waitress-3.0.2 websocket-client-1.9.0 webtest-3.0.7 wsproto-1.3.2 + pip freeze attrs==25.4.0 bcrypt==5.0.0 beautifulsoup4==4.14.3 certifi==2025.11.12 cffi==2.0.0 charset-normalizer==3.4.4 click==8.3.1 cryptography==46.0.3 decorator==5.2.1 distlib==0.4.0 docker-py==1.10.6 docker-pycreds==0.4.0 elastic-transport==8.17.1 elasticsearch==8.19.2 elasticsearch-dsl==8.15.4 enum34==1.1.10 filelock==3.20.0 h11==0.16.0 idna==3.11 invoke==2.2.1 ipaddr==2.2.0 isodate==0.7.2 jmespath==1.0.1 jsonpatch==1.33 jsonpath-rw==1.4.0 jsonpointer==3.0.0 lxml==6.0.2 netaddr==1.3.0 netifaces==0.11.0 outcome==1.3.0.post0 paramiko==4.0.0 platformdirs==4.5.0 ply==3.11 pyang==2.7.1 pyangbind==0.8.6 pycparser==2.23 pyhocon==0.3.61 PyNaCl==1.6.1 pyparsing==3.2.5 PySocks==1.7.1 python-dateutil==2.9.0.post0 regex==2025.11.3 requests==2.32.5 robotframework==7.3.2 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==4.4.1 robotframework-requests==0.9.7 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==6.8.0 robotframework-sshlibrary==3.8.0 scapy==2.6.1 scp==0.15.0 selenium==4.38.0 six==1.17.0 sniffio==1.3.1 sortedcontainers==2.4.0 soupsieve==2.8 trio==0.32.0 
trio-websocket==0.12.2 typing_extensions==4.15.0 urllib3==2.5.0 virtualenv==20.35.4 waitress==3.0.2 WebOb==1.8.9 websocket-client==1.9.0 WebTest==3.0.7 wsproto==1.3.2 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'env.properties' [EnvInject] - Variables injected successfully. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash -l /tmp/jenkins2705921697196870189.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: python-heatclient python-openstackclient yq ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. kubernetes 34.1.0 requires urllib3<2.4.0,>=1.24.2, but you have urllib3 2.5.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH + ODL_SYSTEM=() + TOOLS_SYSTEM=() + OPENSTACK_SYSTEM=() + OPENSTACK_CONTROLLERS=() + mapfile -t ADDR ++ openstack stack show -f json -c outputs releng-controller-csit-3node-clustering-ask-all-titanium-63 ++ jq -r '.outputs[] | select(.output_key | match("^vm_[0-9]+_ips$")) | .output_value | .[]' + for i in "${ADDR[@]}" ++ ssh 10.30.170.77 hostname -s Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + REMHOST=releng-19183-63-0-builder-0 + case ${REMHOST} in + ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}") + for i in "${ADDR[@]}" ++ ssh 10.30.171.188 hostname -s Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + REMHOST=releng-19183-63-0-builder-1 + case ${REMHOST} in + ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}") + for i in "${ADDR[@]}" ++ ssh 10.30.171.110 hostname -s Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. 
+ REMHOST=releng-19183-63-0-builder-2 + case ${REMHOST} in + ODL_SYSTEM=("${ODL_SYSTEM[@]}" "${i}") + echo NUM_ODL_SYSTEM=3 + echo NUM_TOOLS_SYSTEM=0 + '[' '' == yes ']' + NUM_OPENSTACK_SYSTEM=0 + echo NUM_OPENSTACK_SYSTEM=0 + '[' 0 -eq 2 ']' + echo ODL_SYSTEM_IP=10.30.170.77 ++ seq 0 2 + for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 ))) + echo ODL_SYSTEM_1_IP=10.30.170.77 + for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 ))) + echo ODL_SYSTEM_2_IP=10.30.171.188 + for i in $(seq 0 $(( ${#ODL_SYSTEM[@]} - 1 ))) + echo ODL_SYSTEM_3_IP=10.30.171.110 + echo TOOLS_SYSTEM_IP= ++ seq 0 -1 + openstack_index=0 + NUM_OPENSTACK_CONTROL_NODES=1 + echo NUM_OPENSTACK_CONTROL_NODES=1 ++ seq 0 0 + for i in $(seq 0 $((NUM_OPENSTACK_CONTROL_NODES - 1))) + echo OPENSTACK_CONTROL_NODE_1_IP= + NUM_OPENSTACK_COMPUTE_NODES=-1 + echo NUM_OPENSTACK_COMPUTE_NODES=-1 + '[' -1 -ge 2 ']' ++ seq 0 -2 + NUM_OPENSTACK_HAPROXY_NODES=0 + echo NUM_OPENSTACK_HAPROXY_NODES=0 ++ seq 0 -1 + echo 'Contents of slave_addresses.txt:' Contents of slave_addresses.txt: + cat slave_addresses.txt NUM_ODL_SYSTEM=3 NUM_TOOLS_SYSTEM=0 NUM_OPENSTACK_SYSTEM=0 ODL_SYSTEM_IP=10.30.170.77 ODL_SYSTEM_1_IP=10.30.170.77 ODL_SYSTEM_2_IP=10.30.171.188 ODL_SYSTEM_3_IP=10.30.171.110 TOOLS_SYSTEM_IP= NUM_OPENSTACK_CONTROL_NODES=1 OPENSTACK_CONTROL_NODE_1_IP= NUM_OPENSTACK_COMPUTE_NODES=-1 NUM_OPENSTACK_HAPROXY_NODES=0 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'slave_addresses.txt' [EnvInject] - Variables injected successfully. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/sh /tmp/jenkins15010013809566150808.sh Preparing for JRE Version 21 Karaf artifact is karaf Karaf project is integration Java home is /usr/lib/jvm/java-21-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'set_variables.env' [EnvInject] - Variables injected successfully. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins4601014801386250420.sh 2025-11-30 23:05:09 URL:https://raw.githubusercontent.com/opendaylight/integration-distribution/stable/titanium/pom.xml [2619/2619] -> "pom.xml" [1] Bundle version is 0.22.2-SNAPSHOT --2025-11-30 23:05:09-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/maven-metadata.xml Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6 Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 1410 (1.4K) [application/xml] Saving to: ‘maven-metadata.xml’ 0K . 
100% 48.2M=0s 2025-11-30 23:05:09 (48.2 MB/s) - ‘maven-metadata.xml’ saved [1410/1410] org.opendaylight.integration karaf 0.22.2-SNAPSHOT 20251130.175422 194 20251130175422 pom 0.22.2-20251130.175422-194 20251130175422 tar.gz 0.22.2-20251130.175422-194 20251130175422 zip 0.22.2-20251130.175422-194 20251130175422 cyclonedx xml 0.22.2-20251130.175422-194 20251130175422 cyclonedx json 0.22.2-20251130.175422-194 20251130175422 Nexus timestamp is 0.22.2-20251130.175422-194 Distribution bundle URL is https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip Distribution bundle is karaf-0.22.2-20251130.175422-194.zip Distribution bundle version is 0.22.2-SNAPSHOT Distribution folder is karaf-0.22.2-SNAPSHOT Nexus prefix is https://nexus.opendaylight.org [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path 'detect_variables.env' [EnvInject] - Variables injected successfully. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash -l /tmp/jenkins9520217857789579045.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: python-heatclient python-openstackclient ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH Copying common-functions.sh to /tmp Copying common-functions.sh to 10.30.170.77:/tmp Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. Copying common-functions.sh to 10.30.171.188:/tmp Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. Copying common-functions.sh to 10.30.171.110:/tmp Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. 
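Before the cluster is configured below, note how the distribution bundle URL detected above is assembled: the stable/titanium pom.xml supplies the bundle version (0.22.2-SNAPSHOT), and the staged maven-metadata.xml supplies the snapshot timestamp and build number that replace the -SNAPSHOT suffix. A minimal sketch of that derivation, assuming xmllint is available (the job's own detect script may differ):

# Sketch: recompute the karaf snapshot URL from Nexus metadata (illustrative only).
NEXUS=https://nexus.opendaylight.org
VERSION=0.22.2-SNAPSHOT   # bundle version taken from stable/titanium pom.xml
BASE=$NEXUS/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/$VERSION
wget -q "$BASE/maven-metadata.xml"
TIMESTAMP=$(xmllint --xpath 'string(//snapshot/timestamp)' maven-metadata.xml)   # 20251130.175422
BUILD=$(xmllint --xpath 'string(//snapshot/buildNumber)' maven-metadata.xml)     # 194
SNAPSHOT_VERSION=${VERSION%-SNAPSHOT}-$TIMESTAMP-$BUILD                          # 0.22.2-20251130.175422-194
echo "$BASE/karaf-$SNAPSHOT_VERSION.zip"
# -> .../karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip, matching the URL logged above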
[controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins1026932017644351834.sh common-functions.sh is being sourced common-functions environment: MAVENCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties LOGCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv CONTROLLERMEM: 2048m AKKACONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf SUITES: ################################################# ## Configure Cluster and Start ## ################################################# ACTUALFEATURES: odl-integration-compatible-with-all,odl-jolokia,odl-restconf,odl-clustering-test-app SPACE_SEPARATED_FEATURES: odl-integration-compatible-with-all odl-jolokia odl-restconf odl-clustering-test-app Locating script plan to use... script plan exists!!! Changing the script plan path... # Place the scripts in run order: /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/scripts/car_people_shard.sh Executing /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/scripts/car_people_shard.sh... Add car-people shards file Copy shard config to member-1 with IP address 10.30.170.77 Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. Copy shard config to member-2 with IP address 10.30.171.188 Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. Copy shard config to member-3 with IP address 10.30.171.110 Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. Finished running script plans Configuring member-1 with IP address 10.30.170.77 Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. 
+ source /tmp/common-functions.sh karaf-0.22.2-SNAPSHOT titanium ++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]] common-functions.sh is being sourced ++ echo 'common-functions.sh is being sourced' ++ BUNDLEFOLDER=karaf-0.22.2-SNAPSHOT ++ DISTROSTREAM=titanium ++ export MAVENCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ++ MAVENCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ++ export FEATURESCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ++ FEATURESCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ++ export CUSTOMPROP=/tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties ++ CUSTOMPROP=/tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties ++ export LOGCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ++ LOGCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ++ export MEMCONF=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv ++ MEMCONF=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv ++ export CONTROLLERMEM= ++ CONTROLLERMEM= ++ case "${DISTROSTREAM}" in ++ CLUSTER_SYSTEM=pekko ++ export AKKACONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf ++ AKKACONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf ++ export MODULESCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf ++ MODULESCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf ++ export MODULESHARDSCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf ++ MODULESHARDSCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf ++ print_common_env ++ cat common-functions environment: MAVENCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties LOGCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv CONTROLLERMEM: AKKACONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf SUITES: ++ SSH='ssh -t -t' ++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service ' ++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service ' Changing to /tmp Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip + echo 'Changing to /tmp' + cd /tmp + echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip' + wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip --2025-11-30 23:05:22-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6 Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected. 
HTTP request sent, awaiting response... 200 OK Length: 234981466 (224M) [application/zip] Saving to: ‘karaf-0.22.2-20251130.175422-194.zip’ [wget dot-progress output omitted] 2025-11-30 23:05:23 (196 MB/s) - ‘karaf-0.22.2-20251130.175422-194.zip’ saved [234981466/234981466]
Extracting the new controller... + echo 'Extracting the new controller...' + unzip -q karaf-0.22.2-20251130.175422-194.zip Adding external repositories... + echo 'Adding external repositories...' + sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements.
See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # If set to true, the following property will not allow any certificate to be used # when accessing Maven repositories through SSL # #org.ops4j.pax.url.mvn.certificateCheck= # # Path to the local Maven settings file. # The repositories defined in this file will be automatically added to the list # of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property # below is not set. # The following locations are checked for the existence of the settings.xml file # * 1. looks for the specified url # * 2. if not found looks for ${user.home}/.m2/settings.xml # * 3. if not found looks for ${maven.home}/conf/settings.xml # * 4. if not found looks for ${M2_HOME}/conf/settings.xml # #org.ops4j.pax.url.mvn.settings= # # Path to the local Maven repository which is used to avoid downloading # artifacts when they already exist locally. # The value of this property will be extracted from the settings.xml file # above, or defaulted to: # System.getProperty( "user.home" ) + "/.m2/repository" # org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository} # # Default this to false. It's just weird to use undocumented repos # org.ops4j.pax.url.mvn.useFallbackRepositories=false # # Uncomment if you don't wanna use the proxy settings # from the Maven conf/settings.xml file # # org.ops4j.pax.url.mvn.proxySupport=false # # Comma separated list of repositories scanned when resolving an artifact. # Those repositories will be checked before iterating through the # below list of repositories and even before the local repository # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snaphots # @noreleases : the repository does not contain any released artifacts # # The following property value will add the system folder as a repo. # org.ops4j.pax.url.mvn.defaultRepositories=\ file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\ file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\ file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots # Use the default local repo (e.g.~/.m2/repository) as a "remote" repo #org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false # # Comma separated list of repositories scanned when resolving an artifact. 
# The default list includes the following repositories: # http://repo1.maven.org/maven2@id=central # http://repository.springsource.com/maven/bundles/release@id=spring.ebr # http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external # http://zodiac.springsource.com/maven/bundles/release@id=gemini # http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases # https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases # https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases # To add repositories to the default ones, prepend '+' to the list of repositories # to add. # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snapshots # @noreleases : the repository does not contain any released artifacts # @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended # org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases ### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.Configuring the startup features... + [[ True == \T\r\u\e ]] + echo 'Configuring the startup features...' + sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-integration-compatible-with-all,odl-jolokia,odl-restconf,odl-clustering-test-app,/g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg + FEATURE_TEST_STRING=features-test + FEATURE_TEST_VERSION=0.22.2-SNAPSHOT + KARAF_VERSION=karaf4 + [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]] + sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.2-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg + [[ ! -z '' ]] + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # Comma separated list of features repositories to register by default # featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.2-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/d9838ecd-cf2e-476d-adb9-aed42a6a675d.xml # # Comma separated list of features to install at startup # featuresBoot = odl-integration-compatible-with-all,odl-jolokia,odl-restconf,odl-clustering-test-app, 9cac513d-6753-4f5a-bef2-992de0a00fe1 # # Resource repositories (OBR) that the features resolver can use # to resolve requirements/capabilities # # The format of the resourceRepositories is # resourceRepositories=[xml:url|json:url],... # for Instance: # #resourceRepositories=xml:http://host/path/to/index.xml # or #resourceRepositories=json:http://host/path/to/index.json # # # Defines if the boot features are started in asynchronous mode (in a dedicated thread) # featuresBootAsynchronous=false # # Service requirements enforcement # # By default, the feature resolver checks the service requirements/capabilities of # bundles for new features (xml schema >= 1.3.0) in order to automatically installs # the required bundles. # The following flag can have those values: # - disable: service requirements are completely ignored # - default: service requirements are ignored for old features # - enforce: service requirements are always verified # #serviceRequirements=default # # Store cfg file for config element in feature # #configCfgStore=true # # Define if the feature service automatically refresh bundles # autoRefresh=true # # Configuration of features processing mechanism (overrides, blacklisting, modification of features) # XML file defines instructions related to features processing # versions.properties may declare properties to resolve placeholders in XML file # both files are relative to ${karaf.etc} # #featureProcessing=org.apache.karaf.features.xml #featureProcessingVersions=versions.properties + configure_karaf_log karaf4 '' + local -r karaf_version=karaf4 + local -r controllerdebugmap= + local logapi=log4j + grep log4j2 /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n log4j2.rootLogger.level = INFO #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL log4j2.logger.spifly.name = org.apache.aries.spifly 
log4j2.logger.spifly.level = WARN log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 64MB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE Configuring the karaf log... karaf_version: karaf4, logapi: log4j2 + logapi=log4j2 + echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2' + '[' log4j2 == log4j2 ']' + sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg + orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver + orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN' controllerdebugmap: cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN' + unset IFS + echo 'controllerdebugmap: ' + '[' -n '' ']' + echo 'cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg' + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. 
# The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # Common pattern layout for appenders log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n # Root logger log4j2.rootLogger.level = INFO # uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} # Filters for logs marked by org.opendaylight.odlparent.Markers log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL # Loggers configuration # Spifly logger log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN # Security audit logger log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Appenders configuration # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} # Rolling file appender log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i # uncomment to not force a disk flush #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 1GB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 # Audit file appender log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile 
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 # OSGi appender log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * # help with identification of maven-related problems with pax-url-aether #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN + set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 3072m /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64 + local -r controllermem=3072m Configure java home: /usr/lib/jvm/java-21-openjdk-amd64 + local -r memconf=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo Configure + echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64' max memory: 3072m memconf: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo ' max memory: 3072m' + echo ' memconf: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv' + sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=3072m/g' /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo 'cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv' + cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv #!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf # script: client, instance, shell, start, status, stop, karaf # # if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then # Actions go here... # fi # # general settings which should be applied for all scripts go here; please keep # in mind that it is possible that scripts might be executed more than once, e.g. # in example of the start script where the start script is executed first and the # karaf script afterwards. 
# # # The following section shows the possible configuration options for the default # karaf scripts # export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation # export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration # export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options # export EXTRA_JAVA_OPTS # Additional JVM options # export KARAF_HOME # Karaf home folder # export KARAF_DATA # Karaf data folder # export KARAF_BASE # Karaf base folder # export KARAF_ETC # Karaf etc folder # export KARAF_LOG # Karaf log folder # export KARAF_SYSTEM_OPTS # First citizen Karaf options # export KARAF_OPTS # Additional available Karaf options # export KARAF_DEBUG # Enable debug mode # export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start # export KARAF_NOROOT # Prevent execution as root if set to true Set Java version + echo 'Set Java version' + sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1 sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required + sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required JDK default version ... + echo 'JDK default version ...' + java -version openjdk version "21.0.8" 2025-07-15 OpenJDK Runtime Environment (build 21.0.8+9-Ubuntu-0ubuntu122.04.1) OpenJDK 64-Bit Server VM (build 21.0.8+9-Ubuntu-0ubuntu122.04.1, mixed mode, sharing) Set JAVA_HOME + echo 'Set JAVA_HOME' + export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 ++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java + JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java + echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java' Listing all open ports on controller system... + echo 'Listing all open ports on controller system...' + netstat -pnatu /tmp/configuration-script.sh: line 40: netstat: command not found Custom shard config exists!!! + '[' -f /tmp/custom_shard_config.txt ']' + echo 'Custom shard config exists!!!' Copying the shard config... + echo 'Copying the shard config...' + cp /tmp/custom_shard_config.txt /tmp/karaf-0.22.2-SNAPSHOT/bin/ Configuring cluster + echo 'Configuring cluster' + /tmp/karaf-0.22.2-SNAPSHOT/bin/configure_cluster.sh 1 10.30.170.77 10.30.171.188 10.30.171.110 ################################################ ## Configure Cluster ## ################################################ NOTE: Cluster configuration files not found. 
Copying from /tmp/karaf-0.22.2-SNAPSHOT/system/org/opendaylight/controller/sal-clustering-config/11.0.2 Configuring unique name in pekko.conf Configuring hostname in pekko.conf Configuring data and rpc seed nodes in pekko.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" }, { name = "car" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car" shard-strategy = "module" }, { name = "people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:people" shard-strategy = "module" }, { name = "car-people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car-people" shard-strategy = "module" } ] Configuring replication type in module-shards.conf ################################################ ## NOTE: Manually restart controller to ## ## apply configuration. ## ################################################ Dump pekko.conf + echo 'Dump pekko.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf odl-cluster-data { pekko { remote { artery { enabled = on transport = tcp canonical.hostname = "10.30.170.77" canonical.port = 2550 } } cluster { # Using artery. seed-nodes = ["pekko://opendaylight-cluster-data@10.30.170.77:2550", "pekko://opendaylight-cluster-data@10.30.171.188:2550", "pekko://opendaylight-cluster-data@10.30.171.110:2550"] roles = ["member-1"] # when under load we might trip a false positive on the failure detector # failure-detector { # heartbeat-interval = 4 s # acceptable-heartbeat-pause = 16s # } } persistence { # By default the snapshots/journal directories live in KARAF_HOME. You can choose to put it somewhere else by # modifying the following two properties. The directory location specified may be a relative or absolute path. # The relative path is always relative to KARAF_HOME. 
# snapshot-store.local.dir = "target/snapshots" # Use lz4 compression for LocalSnapshotStore snapshots snapshot-store.local.use-lz4-compression = false # Size of blocks for lz4 compression: 64KB, 256KB, 1MB or 4MB snapshot-store.local.lz4-blocksize = 256KB } disable-default-actor-system-quarantined-event-handling = "false" } } Dump modules.conf + echo 'Dump modules.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" }, { name = "car" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car" shard-strategy = "module" }, { name = "people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:people" shard-strategy = "module" }, { name = "car-people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car-people" shard-strategy = "module" } ] Dump module-shards.conf + echo 'Dump module-shards.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf module-shards = [ { name = "default" shards = [ { name = "default" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "inventory" shards = [ { name="inventory" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "topology" shards = [ { name="topology" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "toaster" shards = [ { name="toaster" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "car" shards = [ { name="car" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "people" shards = [ { name="people" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "car-people" shards = [ { name="car-people" replicas = ["member-1", "member-2", "member-3"] } ] } ] Configuring member-2 with IP address 10.30.171.188 Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. 
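The dumps just shown are the per-member output of configure_cluster.sh 1 10.30.170.77 10.30.171.188 10.30.171.110: member-1's pekko.conf advertises its own IP as canonical.hostname and role member-1, lists all three members as seed nodes, and module-shards.conf replicates every shard (default, inventory, topology, toaster, car, people, car-people) to member-1/2/3. The following is a small sanity-check sketch, not part of the job, assuming the file paths printed above and the exact dumps shown for member-1.

    #!/bin/bash
    # Sketch: verify the generated cluster files on one member before restarting Karaf.
    CONF_DIR=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial
    # Local hostname and member role as written into pekko.conf.
    grep -E 'canonical\.hostname|roles' "${CONF_DIR}/pekko.conf"
    # Seed-node entries; each member should list all three cluster nodes.
    grep -o 'pekko://opendaylight-cluster-data@[0-9.]*:2550' "${CONF_DIR}/pekko.conf" | wc -l    # expect 3
    # Shards replicated to member-1/2/3; one per shard in module-shards.conf.
    grep -o '"member-1", "member-2", "member-3"' "${CONF_DIR}/module-shards.conf" | wc -l        # expect 7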
+ source /tmp/common-functions.sh karaf-0.22.2-SNAPSHOT titanium ++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]] ++ echo 'common-functions.sh is being sourced' common-functions.sh is being sourced ++ BUNDLEFOLDER=karaf-0.22.2-SNAPSHOT ++ DISTROSTREAM=titanium ++ export MAVENCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ++ MAVENCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ++ export FEATURESCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ++ FEATURESCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ++ export CUSTOMPROP=/tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties ++ CUSTOMPROP=/tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties ++ export LOGCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ++ LOGCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ++ export MEMCONF=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv ++ MEMCONF=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv common-functions environment: MAVENCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties LOGCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv CONTROLLERMEM: AKKACONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf SUITES: Changing to /tmp Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip ++ export CONTROLLERMEM= ++ CONTROLLERMEM= ++ case "${DISTROSTREAM}" in ++ CLUSTER_SYSTEM=pekko ++ export AKKACONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf ++ AKKACONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf ++ export MODULESCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf ++ MODULESCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf ++ export MODULESHARDSCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf ++ MODULESHARDSCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf ++ print_common_env ++ cat ++ SSH='ssh -t -t' ++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service ' ++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service ' + echo 'Changing to /tmp' + cd /tmp + echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip' + wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip --2025-11-30 23:05:26-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6 Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected. 
HTTP request sent, awaiting response... 200 OK Length: 234981466 (224M) [application/zip] Saving to: ‘karaf-0.22.2-20251130.175422-194.zip’ [wget dot-progress output omitted] 2025-11-30 23:05:26 (266 MB/s) - ‘karaf-0.22.2-20251130.175422-194.zip’ saved [234981466/234981466]
Extracting the new controller... + echo 'Extracting the new controller...' + unzip -q karaf-0.22.2-20251130.175422-194.zip Adding external repositories... + echo 'Adding external repositories...' + sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements.
See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # If set to true, the following property will not allow any certificate to be used # when accessing Maven repositories through SSL # #org.ops4j.pax.url.mvn.certificateCheck= # # Path to the local Maven settings file. # The repositories defined in this file will be automatically added to the list # of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property # below is not set. # The following locations are checked for the existence of the settings.xml file # * 1. looks for the specified url # * 2. if not found looks for ${user.home}/.m2/settings.xml # * 3. if not found looks for ${maven.home}/conf/settings.xml # * 4. if not found looks for ${M2_HOME}/conf/settings.xml # #org.ops4j.pax.url.mvn.settings= # # Path to the local Maven repository which is used to avoid downloading # artifacts when they already exist locally. # The value of this property will be extracted from the settings.xml file # above, or defaulted to: # System.getProperty( "user.home" ) + "/.m2/repository" # org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository} # # Default this to false. It's just weird to use undocumented repos # org.ops4j.pax.url.mvn.useFallbackRepositories=false # # Uncomment if you don't wanna use the proxy settings # from the Maven conf/settings.xml file # # org.ops4j.pax.url.mvn.proxySupport=false # # Comma separated list of repositories scanned when resolving an artifact. # Those repositories will be checked before iterating through the # below list of repositories and even before the local repository # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snaphots # @noreleases : the repository does not contain any released artifacts # # The following property value will add the system folder as a repo. # org.ops4j.pax.url.mvn.defaultRepositories=\ file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\ file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\ file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots # Use the default local repo (e.g.~/.m2/repository) as a "remote" repo #org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false # # Comma separated list of repositories scanned when resolving an artifact. 
# The default list includes the following repositories: # http://repo1.maven.org/maven2@id=central # http://repository.springsource.com/maven/bundles/release@id=spring.ebr # http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external # http://zodiac.springsource.com/maven/bundles/release@id=gemini # http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases # https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases # https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases # To add repositories to the default ones, prepend '+' to the list of repositories # to add. # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snapshots # @noreleases : the repository does not contain any released artifacts # @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended # org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases ### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.Configuring the startup features... + [[ True == \T\r\u\e ]] + echo 'Configuring the startup features...' + sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-integration-compatible-with-all,odl-jolokia,odl-restconf,odl-clustering-test-app,/g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg + FEATURE_TEST_STRING=features-test + FEATURE_TEST_VERSION=0.22.2-SNAPSHOT + KARAF_VERSION=karaf4 + [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]] + sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.2-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg + [[ ! -z '' ]] + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # Comma separated list of features repositories to register by default # featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.2-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/d9838ecd-cf2e-476d-adb9-aed42a6a675d.xml # # Comma separated list of features to install at startup # featuresBoot = odl-integration-compatible-with-all,odl-jolokia,odl-restconf,odl-clustering-test-app, 9cac513d-6753-4f5a-bef2-992de0a00fe1 # # Resource repositories (OBR) that the features resolver can use # to resolve requirements/capabilities # # The format of the resourceRepositories is # resourceRepositories=[xml:url|json:url],... # for Instance: # #resourceRepositories=xml:http://host/path/to/index.xml # or #resourceRepositories=json:http://host/path/to/index.json # # # Defines if the boot features are started in asynchronous mode (in a dedicated thread) # featuresBootAsynchronous=false # # Service requirements enforcement # # By default, the feature resolver checks the service requirements/capabilities of # bundles for new features (xml schema >= 1.3.0) in order to automatically installs # the required bundles. # The following flag can have those values: # - disable: service requirements are completely ignored # - default: service requirements are ignored for old features # - enforce: service requirements are always verified # #serviceRequirements=default # # Store cfg file for config element in feature # #configCfgStore=true # # Define if the feature service automatically refresh bundles # autoRefresh=true # # Configuration of features processing mechanism (overrides, blacklisting, modification of features) # XML file defines instructions related to features processing # versions.properties may declare properties to resolve placeholders in XML file # both files are relative to ${karaf.etc} # #featureProcessing=org.apache.karaf.features.xml #featureProcessingVersions=versions.properties + configure_karaf_log karaf4 '' + local -r karaf_version=karaf4 + local -r controllerdebugmap= + local logapi=log4j + grep log4j2 /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n log4j2.rootLogger.level = INFO #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL log4j2.logger.spifly.name = org.apache.aries.spifly 
log4j2.logger.spifly.level = WARN log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 64MB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE Configuring the karaf log... karaf_version: karaf4, logapi: log4j2 + logapi=log4j2 + echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2' + '[' log4j2 == log4j2 ']' + sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg + orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver + orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN' controllerdebugmap: + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN' + unset IFS + echo 'controllerdebugmap: ' cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg + '[' -n '' ']' + echo 'cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg' + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. 
# The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # Common pattern layout for appenders log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n # Root logger log4j2.rootLogger.level = INFO # uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} # Filters for logs marked by org.opendaylight.odlparent.Markers log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL # Loggers configuration # Spifly logger log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN # Security audit logger log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Appenders configuration # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} # Rolling file appender log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i # uncomment to not force a disk flush #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 1GB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 # Audit file appender log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile 
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 # OSGi appender log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * # help with identification of maven-related problems with pax-url-aether #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN Configure java home: /usr/lib/jvm/java-21-openjdk-amd64 max memory: 3072m memconf: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 3072m /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64 + local -r controllermem=3072m + local -r memconf=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo Configure + echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64' + echo ' max memory: 3072m' + echo ' memconf: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv' + sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=3072m/g' /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo 'cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv' + cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv #!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf # script: client, instance, shell, start, status, stop, karaf # # if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then # Actions go here... # fi # # general settings which should be applied for all scripts go here; please keep # in mind that it is possible that scripts might be executed more than once, e.g. # in example of the start script where the start script is executed first and the # karaf script afterwards. 
# # # The following section shows the possible configuration options for the default # karaf scripts # export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation # export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration # export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options # export EXTRA_JAVA_OPTS # Additional JVM options # export KARAF_HOME # Karaf home folder # export KARAF_DATA # Karaf data folder # export KARAF_BASE # Karaf base folder # export KARAF_ETC # Karaf etc folder # export KARAF_LOG # Karaf log folder # export KARAF_SYSTEM_OPTS # First citizen Karaf options # export KARAF_OPTS # Additional available Karaf options # export KARAF_DEBUG # Enable debug mode # export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start # export KARAF_NOROOT # Prevent execution as root if set to true Set Java version + echo 'Set Java version' + sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1 sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required + sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required JDK default version ... + echo 'JDK default version ...' + java -version openjdk version "21.0.8" 2025-07-15 OpenJDK Runtime Environment (build 21.0.8+9-Ubuntu-0ubuntu122.04.1) OpenJDK 64-Bit Server VM (build 21.0.8+9-Ubuntu-0ubuntu122.04.1, mixed mode, sharing) Set JAVA_HOME + echo 'Set JAVA_HOME' + export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 ++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java Listing all open ports on controller system... + JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java + echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java' + echo 'Listing all open ports on controller system...' + netstat -pnatu /tmp/configuration-script.sh: line 40: netstat: command not found Custom shard config exists!!! Copying the shard config... + '[' -f /tmp/custom_shard_config.txt ']' + echo 'Custom shard config exists!!!' + echo 'Copying the shard config...' + cp /tmp/custom_shard_config.txt /tmp/karaf-0.22.2-SNAPSHOT/bin/ Configuring cluster + echo 'Configuring cluster' + /tmp/karaf-0.22.2-SNAPSHOT/bin/configure_cluster.sh 2 10.30.170.77 10.30.171.188 10.30.171.110 ################################################ ## Configure Cluster ## ################################################ NOTE: Cluster configuration files not found. 
Copying from /tmp/karaf-0.22.2-SNAPSHOT/system/org/opendaylight/controller/sal-clustering-config/11.0.2 Configuring unique name in pekko.conf Configuring hostname in pekko.conf Configuring data and rpc seed nodes in pekko.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" }, { name = "car" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car" shard-strategy = "module" }, { name = "people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:people" shard-strategy = "module" }, { name = "car-people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car-people" shard-strategy = "module" } ] Configuring replication type in module-shards.conf ################################################ ## NOTE: Manually restart controller to ## ## apply configuration. ## ################################################ Dump pekko.conf + echo 'Dump pekko.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf odl-cluster-data { pekko { remote { artery { enabled = on transport = tcp canonical.hostname = "10.30.171.188" canonical.port = 2550 } } cluster { # Using artery. seed-nodes = ["pekko://opendaylight-cluster-data@10.30.170.77:2550", "pekko://opendaylight-cluster-data@10.30.171.188:2550", "pekko://opendaylight-cluster-data@10.30.171.110:2550"] roles = ["member-2"] # when under load we might trip a false positive on the failure detector # failure-detector { # heartbeat-interval = 4 s # acceptable-heartbeat-pause = 16s # } } persistence { # By default the snapshots/journal directories live in KARAF_HOME. You can choose to put it somewhere else by # modifying the following two properties. The directory location specified may be a relative or absolute path. # The relative path is always relative to KARAF_HOME. 
# snapshot-store.local.dir = "target/snapshots" # Use lz4 compression for LocalSnapshotStore snapshots snapshot-store.local.use-lz4-compression = false # Size of blocks for lz4 compression: 64KB, 256KB, 1MB or 4MB snapshot-store.local.lz4-blocksize = 256KB } disable-default-actor-system-quarantined-event-handling = "false" } } Dump modules.conf + echo 'Dump modules.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" }, { name = "car" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car" shard-strategy = "module" }, { name = "people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:people" shard-strategy = "module" }, { name = "car-people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car-people" shard-strategy = "module" } ] Dump module-shards.conf + echo 'Dump module-shards.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf module-shards = [ { name = "default" shards = [ { name = "default" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "inventory" shards = [ { name="inventory" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "topology" shards = [ { name="topology" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "toaster" shards = [ { name="toaster" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "car" shards = [ { name="car" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "people" shards = [ { name="people" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "car-people" shards = [ { name="car-people" replicas = ["member-1", "member-2", "member-3"] } ] } ] Configuring member-3 with IP address 10.30.171.110 Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. 
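The module-shards.conf dumped above places every shard on member-1, member-2 and member-3. Once the controllers are up, a check along the following lines could confirm that a given shard actually formed its replica set. This is only a sketch and not part of the job output; the Jolokia port, credentials and MBean name are assumptions that may differ between controller releases:

#!/bin/bash
# Hypothetical shard check via the Jolokia endpoint exposed by the odl-jolokia feature.
# The MBean name pattern is an assumption; adjust it to the deployed release.
ODL_IP=10.30.170.77
MBEAN='org.opendaylight.controller:type=DistributedConfigDatastore,Category=Shards,name=member-1-shard-default-config'
curl -s -u admin:admin "http://${ODL_IP}:8181/jolokia/read/${MBEAN}" \
  | grep -o '"RaftState":"[^"]*"'

A healthy three-node cluster would normally report one Leader and two Follower replicas for each configured shard.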
+ source /tmp/common-functions.sh karaf-0.22.2-SNAPSHOT titanium ++ [[ /tmp/common-functions.sh == \/\t\m\p\/\c\o\n\f\i\g\u\r\a\t\i\o\n\-\s\c\r\i\p\t\.\s\h ]] ++ echo 'common-functions.sh is being sourced' common-functions.sh is being sourced ++ BUNDLEFOLDER=karaf-0.22.2-SNAPSHOT ++ DISTROSTREAM=titanium ++ export MAVENCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ++ MAVENCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ++ export FEATURESCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ++ FEATURESCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ++ export CUSTOMPROP=/tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties ++ CUSTOMPROP=/tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties ++ export LOGCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ++ LOGCONF=/tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ++ export MEMCONF=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv ++ MEMCONF=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv ++ export CONTROLLERMEM= ++ CONTROLLERMEM= ++ case "${DISTROSTREAM}" in ++ CLUSTER_SYSTEM=pekko ++ export AKKACONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf ++ AKKACONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf ++ export MODULESCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf ++ MODULESCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf ++ export MODULESHARDSCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf ++ MODULESHARDSCONF=/tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf ++ print_common_env ++ cat common-functions environment: MAVENCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties LOGCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv CONTROLLERMEM: AKKACONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf SUITES: ++ SSH='ssh -t -t' ++ extra_services_cntl=' dnsmasq.service httpd.service libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service rabbitmq-server.service ' ++ extra_services_cmp=' libvirtd.service openvswitch.service ovs-vswitchd.service ovsdb-server.service ' Changing to /tmp Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip + echo 'Changing to /tmp' + cd /tmp + echo 'Downloading the distribution from https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip' + wget --progress=dot:mega https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip --2025-11-30 23:05:30-- https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip Resolving nexus.opendaylight.org (nexus.opendaylight.org)... 199.204.45.87, 2604:e100:1:0:f816:3eff:fe45:48d6 Connecting to nexus.opendaylight.org (nexus.opendaylight.org)|199.204.45.87|:443... connected. 
HTTP request sent, awaiting response... 200 OK Length: 234981466 (224M) [application/zip] Saving to: ‘karaf-0.22.2-20251130.175422-194.zip’
0K ........ ........ ........ ........ ........ ........ 1% 63.9M 3s [wget progress dots trimmed] 227328K ........ ........ ........ ........ . 100% 278M=0.9s
2025-11-30 23:05:31 (239 MB/s) - ‘karaf-0.22.2-20251130.175422-194.zip’ saved [234981466/234981466]
Extracting the new controller... + echo 'Extracting the new controller...' + unzip -q karaf-0.22.2-20251130.175422-194.zip Adding external repositories... + echo 'Adding external repositories...' + sed -ie 's%org.ops4j.pax.url.mvn.repositories=%org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases%g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements.
See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # If set to true, the following property will not allow any certificate to be used # when accessing Maven repositories through SSL # #org.ops4j.pax.url.mvn.certificateCheck= # # Path to the local Maven settings file. # The repositories defined in this file will be automatically added to the list # of default repositories if the 'org.ops4j.pax.url.mvn.repositories' property # below is not set. # The following locations are checked for the existence of the settings.xml file # * 1. looks for the specified url # * 2. if not found looks for ${user.home}/.m2/settings.xml # * 3. if not found looks for ${maven.home}/conf/settings.xml # * 4. if not found looks for ${M2_HOME}/conf/settings.xml # #org.ops4j.pax.url.mvn.settings= # # Path to the local Maven repository which is used to avoid downloading # artifacts when they already exist locally. # The value of this property will be extracted from the settings.xml file # above, or defaulted to: # System.getProperty( "user.home" ) + "/.m2/repository" # org.ops4j.pax.url.mvn.localRepository=${karaf.home}/${karaf.default.repository} # # Default this to false. It's just weird to use undocumented repos # org.ops4j.pax.url.mvn.useFallbackRepositories=false # # Uncomment if you don't wanna use the proxy settings # from the Maven conf/settings.xml file # # org.ops4j.pax.url.mvn.proxySupport=false # # Comma separated list of repositories scanned when resolving an artifact. # Those repositories will be checked before iterating through the # below list of repositories and even before the local repository # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snaphots # @noreleases : the repository does not contain any released artifacts # # The following property value will add the system folder as a repo. # org.ops4j.pax.url.mvn.defaultRepositories=\ file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots,\ file:${karaf.data}/kar@id=kar.repository@multi@snapshots,\ file:${karaf.base}/${karaf.default.repository}@id=child.system.repository@snapshots # Use the default local repo (e.g.~/.m2/repository) as a "remote" repo #org.ops4j.pax.url.mvn.defaultLocalRepoAsRemote=false # # Comma separated list of repositories scanned when resolving an artifact. 
# The default list includes the following repositories: # http://repo1.maven.org/maven2@id=central # http://repository.springsource.com/maven/bundles/release@id=spring.ebr # http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external # http://zodiac.springsource.com/maven/bundles/release@id=gemini # http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases # https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases # https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases # To add repositories to the default ones, prepend '+' to the list of repositories # to add. # A repository url can be appended with zero or more of the following flags: # @snapshots : the repository contains snapshots # @noreleases : the repository does not contain any released artifacts # @id=repository.id : the id for the repository, just like in the settings.xml this is optional but recommended # org.ops4j.pax.url.mvn.repositories=https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot@id=opendaylight-snapshot@snapshots, https://nexus.opendaylight.org/content/repositories/public@id=opendaylight-mirror, http://repo1.maven.org/maven2@id=central, http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, http://zodiac.springsource.com/maven/bundles/release@id=gemini, http://repository.apache.org/content/groups/snapshots-group@id=apache@snapshots@noreleases, https://oss.sonatype.org/content/repositories/snapshots@id=sonatype.snapshots.deploy@snapshots@noreleases, https://oss.sonatype.org/content/repositories/ops4j-snapshots@id=ops4j.sonatype.snapshots.deploy@snapshots@noreleases ### ^^^ No remote repositories. This is the only ODL change compared to Karaf defaults.Configuring the startup features... + [[ True == \T\r\u\e ]] + echo 'Configuring the startup features...' + sed -ie 's/\(featuresBoot=\|featuresBoot =\)/featuresBoot = odl-integration-compatible-with-all,odl-jolokia,odl-restconf,odl-clustering-test-app,/g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg + FEATURE_TEST_STRING=features-test + FEATURE_TEST_VERSION=0.22.2-SNAPSHOT + KARAF_VERSION=karaf4 + [[ integration == \i\n\t\e\g\r\a\t\i\o\n ]] + sed -ie 's%\(featuresRepositories=\|featuresRepositories =\)%featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.2-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features,%g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg + [[ ! -z '' ]] + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # # Comma separated list of features repositories to register by default # featuresRepositories = mvn:org.opendaylight.integration/features-test/0.22.2-SNAPSHOT/xml/features,mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.2.0/xml/features, file:${karaf.etc}/d9838ecd-cf2e-476d-adb9-aed42a6a675d.xml # # Comma separated list of features to install at startup # featuresBoot = odl-integration-compatible-with-all,odl-jolokia,odl-restconf,odl-clustering-test-app, 9cac513d-6753-4f5a-bef2-992de0a00fe1 # # Resource repositories (OBR) that the features resolver can use # to resolve requirements/capabilities # # The format of the resourceRepositories is # resourceRepositories=[xml:url|json:url],... # for Instance: # #resourceRepositories=xml:http://host/path/to/index.xml # or #resourceRepositories=json:http://host/path/to/index.json # # # Defines if the boot features are started in asynchronous mode (in a dedicated thread) # featuresBootAsynchronous=false # # Service requirements enforcement # # By default, the feature resolver checks the service requirements/capabilities of # bundles for new features (xml schema >= 1.3.0) in order to automatically installs # the required bundles. # The following flag can have those values: # - disable: service requirements are completely ignored # - default: service requirements are ignored for old features # - enforce: service requirements are always verified # #serviceRequirements=default # # Store cfg file for config element in feature # #configCfgStore=true # # Define if the feature service automatically refresh bundles # autoRefresh=true # # Configuration of features processing mechanism (overrides, blacklisting, modification of features) # XML file defines instructions related to features processing # versions.properties may declare properties to resolve placeholders in XML file # both files are relative to ${karaf.etc} # #featureProcessing=org.apache.karaf.features.xml #featureProcessingVersions=versions.properties + configure_karaf_log karaf4 '' + local -r karaf_version=karaf4 + local -r controllerdebugmap= + local logapi=log4j + grep log4j2 /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n log4j2.rootLogger.level = INFO #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL log4j2.logger.spifly.name = org.apache.aries.spifly 
log4j2.logger.spifly.level = WARN log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 64MB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE Configuring the karaf log... karaf_version: karaf4, logapi: log4j2 + logapi=log4j2 + echo 'Configuring the karaf log... karaf_version: karaf4, logapi: log4j2' + '[' log4j2 == log4j2 ']' + sed -ie 's/log4j2.appender.rolling.policies.size.size = 64MB/log4j2.appender.rolling.policies.size.size = 1GB/g' /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg + orgmodule=org.opendaylight.yangtools.yang.parser.repo.YangTextSchemaContextResolver + orgmodule_=org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN' + echo 'log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN' controllerdebugmap: cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg + unset IFS + echo 'controllerdebugmap: ' + '[' -n '' ']' + echo 'cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg' + cat /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg ################################################################################ # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. 
# The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ################################################################################ # Common pattern layout for appenders log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n # Root logger log4j2.rootLogger.level = INFO # uncomment to use asynchronous loggers, which require mvn:com.lmax/disruptor/3.3.2 library #log4j2.rootLogger.type = asyncRoot #log4j2.rootLogger.includeLocation = false log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi log4j2.rootLogger.appenderRef.Console.ref = Console log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF} # Filters for logs marked by org.opendaylight.odlparent.Markers log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.type = ContextMapFilter log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.type = KeyValuePair log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.key = slf4j.marker log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.pair1.value = CONFIDENTIAL log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.operator = or log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMatch = DENY log4j2.rootLogger.appenderRef.RollingFile.filter.confidential.onMismatch = NEUTRAL # Loggers configuration # Spifly logger log4j2.logger.spifly.name = org.apache.aries.spifly log4j2.logger.spifly.level = WARN # Security audit logger log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit log4j2.logger.audit.level = INFO log4j2.logger.audit.additivity = false log4j2.logger.audit.appenderRef.AuditRollingFile.ref = AuditRollingFile # Appenders configuration # Console appender not used by default (see log4j2.rootLogger.appenderRefs) log4j2.appender.console.type = Console log4j2.appender.console.name = Console log4j2.appender.console.layout.type = PatternLayout log4j2.appender.console.layout.pattern = ${log4j2.pattern} # Rolling file appender log4j2.appender.rolling.type = RollingRandomAccessFile log4j2.appender.rolling.name = RollingFile log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i # uncomment to not force a disk flush #log4j2.appender.rolling.immediateFlush = false log4j2.appender.rolling.append = true log4j2.appender.rolling.layout.type = PatternLayout log4j2.appender.rolling.layout.pattern = ${log4j2.pattern} log4j2.appender.rolling.policies.type = Policies log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.rolling.policies.size.size = 1GB log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy log4j2.appender.rolling.strategy.max = 7 # Audit file appender log4j2.appender.audit.type = RollingRandomAccessFile log4j2.appender.audit.name = AuditRollingFile 
log4j2.appender.audit.fileName = ${karaf.data}/security/audit.log log4j2.appender.audit.filePattern = ${karaf.data}/security/audit.log.%i log4j2.appender.audit.append = true log4j2.appender.audit.layout.type = PatternLayout log4j2.appender.audit.layout.pattern = ${log4j2.pattern} log4j2.appender.audit.policies.type = Policies log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy log4j2.appender.audit.policies.size.size = 8MB log4j2.appender.audit.strategy.type = DefaultRolloverStrategy log4j2.appender.audit.strategy.max = 7 # OSGi appender log4j2.appender.osgi.type = PaxOsgi log4j2.appender.osgi.name = PaxOsgi log4j2.appender.osgi.filter = * # help with identification of maven-related problems with pax-url-aether #log4j2.logger.aether.name = shaded.org.eclipse.aether #log4j2.logger.aether.level = TRACE #log4j2.logger.http-headers.name = shaded.org.apache.http.headers #log4j2.logger.http-headers.level = DEBUG #log4j2.logger.maven.name = org.ops4j.pax.url.mvn #log4j2.logger.maven.level = TRACE log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.name = WARN log4j2.logger.org_opendaylight_yangtools_yang_parser_repo_YangTextSchemaContextResolver.level = WARN + set_java_vars /usr/lib/jvm/java-21-openjdk-amd64 3072m /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv Configure java home: /usr/lib/jvm/java-21-openjdk-amd64 + local -r java_home=/usr/lib/jvm/java-21-openjdk-amd64 + local -r controllermem=3072m + local -r memconf=/tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo Configure + echo ' java home: /usr/lib/jvm/java-21-openjdk-amd64' max memory: 3072m memconf: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo ' max memory: 3072m' + echo ' memconf: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv' + sed -ie 's%^# export JAVA_HOME%export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64}%g' /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + sed -ie 's/JAVA_MAX_MEM="2048m"/JAVA_MAX_MEM=3072m/g' /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv + echo 'cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv' + cat /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv #!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # handle specific scripts; the SCRIPT_NAME is exactly the name of the Karaf # script: client, instance, shell, start, status, stop, karaf # # if [ "${KARAF_SCRIPT}" == "SCRIPT_NAME" ]; then # Actions go here... # fi # # general settings which should be applied for all scripts go here; please keep # in mind that it is possible that scripts might be executed more than once, e.g. # in example of the start script where the start script is executed first and the # karaf script afterwards. 
# # # The following section shows the possible configuration options for the default # karaf scripts # export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-21-openjdk-amd64} # Location of Java installation # export JAVA_OPTS # Generic JVM options, for instance, where you can pass the memory configuration # export JAVA_NON_DEBUG_OPTS # Additional non-debug JVM options # export EXTRA_JAVA_OPTS # Additional JVM options # export KARAF_HOME # Karaf home folder # export KARAF_DATA # Karaf data folder # export KARAF_BASE # Karaf base folder # export KARAF_ETC # Karaf etc folder # export KARAF_LOG # Karaf log folder # export KARAF_SYSTEM_OPTS # First citizen Karaf options # export KARAF_OPTS # Additional available Karaf options # export KARAF_DEBUG # Enable debug mode # export KARAF_REDIRECT # Enable/set the std/err redirection when using bin/start # export KARAF_NOROOT # Prevent execution as root if set to true Set Java version + echo 'Set Java version' + sudo /usr/sbin/alternatives --install /usr/bin/java java /usr/lib/jvm/java-21-openjdk-amd64/bin/java 1 sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required + sudo /usr/sbin/alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required JDK default version ... + echo 'JDK default version ...' + java -version openjdk version "21.0.8" 2025-07-15 OpenJDK Runtime Environment (build 21.0.8+9-Ubuntu-0ubuntu122.04.1) OpenJDK 64-Bit Server VM (build 21.0.8+9-Ubuntu-0ubuntu122.04.1, mixed mode, sharing) Set JAVA_HOME + echo 'Set JAVA_HOME' + export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 ++ readlink -e /usr/lib/jvm/java-21-openjdk-amd64/bin/java Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java Listing all open ports on controller system... + JAVA_RESOLVED=/usr/lib/jvm/java-21-openjdk-amd64/bin/java + echo 'Java binary pointed at by JAVA_HOME: /usr/lib/jvm/java-21-openjdk-amd64/bin/java' + echo 'Listing all open ports on controller system...' + netstat -pnatu /tmp/configuration-script.sh: line 40: netstat: command not found Custom shard config exists!!! Copying the shard config... + '[' -f /tmp/custom_shard_config.txt ']' + echo 'Custom shard config exists!!!' + echo 'Copying the shard config...' + cp /tmp/custom_shard_config.txt /tmp/karaf-0.22.2-SNAPSHOT/bin/ Configuring cluster + echo 'Configuring cluster' + /tmp/karaf-0.22.2-SNAPSHOT/bin/configure_cluster.sh 3 10.30.170.77 10.30.171.188 10.30.171.110 ################################################ ## Configure Cluster ## ################################################ NOTE: Cluster configuration files not found. 
Copying from /tmp/karaf-0.22.2-SNAPSHOT/system/org/opendaylight/controller/sal-clustering-config/11.0.2 Configuring unique name in pekko.conf Configuring hostname in pekko.conf Configuring data and rpc seed nodes in pekko.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" }, { name = "car" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car" shard-strategy = "module" }, { name = "people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:people" shard-strategy = "module" }, { name = "car-people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car-people" shard-strategy = "module" } ] Configuring replication type in module-shards.conf ################################################ ## NOTE: Manually restart controller to ## ## apply configuration. ## ################################################ Dump pekko.conf + echo 'Dump pekko.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf odl-cluster-data { pekko { remote { artery { enabled = on transport = tcp canonical.hostname = "10.30.171.110" canonical.port = 2550 } } cluster { # Using artery. seed-nodes = ["pekko://opendaylight-cluster-data@10.30.170.77:2550", "pekko://opendaylight-cluster-data@10.30.171.188:2550", "pekko://opendaylight-cluster-data@10.30.171.110:2550"] roles = ["member-3"] # when under load we might trip a false positive on the failure detector # failure-detector { # heartbeat-interval = 4 s # acceptable-heartbeat-pause = 16s # } } persistence { # By default the snapshots/journal directories live in KARAF_HOME. You can choose to put it somewhere else by # modifying the following two properties. The directory location specified may be a relative or absolute path. # The relative path is always relative to KARAF_HOME. 
# snapshot-store.local.dir = "target/snapshots" # Use lz4 compression for LocalSnapshotStore snapshots snapshot-store.local.use-lz4-compression = false # Size of blocks for lz4 compression: 64KB, 256KB, 1MB or 4MB snapshot-store.local.lz4-blocksize = 256KB } disable-default-actor-system-quarantined-event-handling = "false" } } Dump modules.conf + echo 'Dump modules.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf modules = [ { name = "inventory" namespace = "urn:opendaylight:inventory" shard-strategy = "module" }, { name = "topology" namespace = "urn:TBD:params:xml:ns:yang:network-topology" shard-strategy = "module" }, { name = "toaster" namespace = "http://netconfcentral.org/ns/toaster" shard-strategy = "module" }, { name = "car" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car" shard-strategy = "module" }, { name = "people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:people" shard-strategy = "module" }, { name = "car-people" namespace = "urn:opendaylight:params:xml:ns:yang:controller:config:sal-clustering-it:car-people" shard-strategy = "module" } ] Dump module-shards.conf + echo 'Dump module-shards.conf' + cat /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf module-shards = [ { name = "default" shards = [ { name = "default" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "inventory" shards = [ { name="inventory" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "topology" shards = [ { name="topology" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "toaster" shards = [ { name="toaster" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "car" shards = [ { name="car" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "people" shards = [ { name="people" replicas = ["member-1", "member-2", "member-3"] } ] }, { name = "car-people" shards = [ { name="car-people" replicas = ["member-1", "member-2", "member-3"] } ] } ] Locating config plan to use... Finished running config plans Starting member-1 with IP address 10.30.170.77 Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. Redirecting karaf console output to karaf_console.log Starting controller... start: Redirecting Karaf output to /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log Starting member-2 with IP address 10.30.171.188 Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. Redirecting karaf console output to karaf_console.log Starting controller... start: Redirecting Karaf output to /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log Starting member-3 with IP address 10.30.171.110 Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. Redirecting karaf console output to karaf_console.log Starting controller... 
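bin/start returns as soon as the Karaf process is forked, so the job's next step polls each member until it reports ready (the "Waiting up to 3 minutes for controller to come up" loop below). A minimal stand-alone sketch of such a wait, assuming the karaf.log location configured above; the timeout, interval and grep pattern are illustrative rather than copied from the job's post-startup script:

#!/bin/bash
# Hypothetical readiness wait: poll karaf.log for the infrautils "System ready" marker.
LOG=/tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log
for _ in $(seq 1 36); do              # up to 3 minutes in 5-second steps
    if grep -q 'System ready' "${LOG}" 2>/dev/null; then
        echo 'Controller is UP'
        exit 0
    fi
    sleep 5
done
echo 'Controller failed to come up in time' >&2
exit 1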
start: Redirecting Karaf output to /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins4353543431036558927.sh common-functions.sh is being sourced common-functions environment: MAVENCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.url.mvn.cfg ACTUALFEATURES: FEATURESCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.apache.karaf.features.cfg CUSTOMPROP: /tmp/karaf-0.22.2-SNAPSHOT/etc/custom.properties LOGCONF: /tmp/karaf-0.22.2-SNAPSHOT/etc/org.ops4j.pax.logging.cfg MEMCONF: /tmp/karaf-0.22.2-SNAPSHOT/bin/setenv CONTROLLERMEM: 2048m AKKACONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/pekko.conf MODULESCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/modules.conf MODULESHARDSCONF: /tmp/karaf-0.22.2-SNAPSHOT/configuration/initial/module-shards.conf SUITES: + echo '#################################################' ################################################# + echo '## Verify Cluster is UP ##' ## Verify Cluster is UP ## + echo '#################################################' ################################################# + create_post_startup_script + cat + copy_and_run_post_startup_script + seed_index=1 ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Execute the post startup script on controller 10.30.170.77' Execute the post startup script on controller 10.30.170.77 + scp /w/workspace/controller-csit-3node-clustering-ask-all-titanium/post-startup-script.sh 10.30.170.77:/tmp/ Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + ssh 10.30.170.77 'bash /tmp/post-startup-script.sh 1' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found Waiting up to 3 minutes for controller to come up, checking every 5 seconds... 2025-11-30T23:06:17,651 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 284 - org.opendaylight.infrautils.ready-api - 7.1.7 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Controller is UP 2025-11-30T23:06:17,651 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 284 - org.opendaylight.infrautils.ready-api - 7.1.7 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Listing all open ports on controller system... 
/tmp/post-startup-script.sh: line 51: netstat: command not found looking for "BindException: Address already in use" in log file looking for "server is unhealthy" in log file + '[' 1 == 0 ']' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Execute the post startup script on controller 10.30.171.188' Execute the post startup script on controller 10.30.171.188 + scp /w/workspace/controller-csit-3node-clustering-ask-all-titanium/post-startup-script.sh 10.30.171.188:/tmp/ Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + ssh 10.30.171.188 'bash /tmp/post-startup-script.sh 2' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found Waiting up to 3 minutes for controller to come up, checking every 5 seconds... 2025-11-30T23:06:17,644 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 284 - org.opendaylight.infrautils.ready-api - 7.1.7 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Controller is UP 2025-11-30T23:06:17,644 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 284 - org.opendaylight.infrautils.ready-api - 7.1.7 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Listing all open ports on controller system... /tmp/post-startup-script.sh: line 51: netstat: command not found looking for "BindException: Address already in use" in log file looking for "server is unhealthy" in log file + '[' 2 == 0 ']' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Execute the post startup script on controller 10.30.171.110' Execute the post startup script on controller 10.30.171.110 + scp /w/workspace/controller-csit-3node-clustering-ask-all-titanium/post-startup-script.sh 10.30.171.110:/tmp/ Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + ssh 10.30.171.110 'bash /tmp/post-startup-script.sh 3' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. 
/tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found /tmp/post-startup-script.sh: line 4: netstat: command not found Waiting up to 3 minutes for controller to come up, checking every 5 seconds... 2025-11-30T23:06:17,621 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 284 - org.opendaylight.infrautils.ready-api - 7.1.7 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Controller is UP 2025-11-30T23:06:17,621 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 284 - org.opendaylight.infrautils.ready-api - 7.1.7 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] Listing all open ports on controller system... looking for "BindException: Address already in use" in log file /tmp/post-startup-script.sh: line 51: netstat: command not found looking for "server is unhealthy" in log file + '[' 0 == 0 ']' + seed_index=1 + dump_controller_threads ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Let'\''s take the karaf thread dump' Let's take the karaf thread dump + ssh 10.30.170.77 'sudo ps aux' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/controller-csit-3node-clustering-ask-all-titanium/ps_before.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=2080 + echo 'karaf main: org.apache.karaf.main.Main, pid:2080' karaf main: org.apache.karaf.main.Main, pid:2080 + ssh 10.30.170.77 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2080' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Let'\''s take the karaf thread dump' Let's take the karaf thread dump + ssh 10.30.171.188 'sudo ps aux' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/controller-csit-3node-clustering-ask-all-titanium/ps_before.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=2082 + echo 'karaf main: org.apache.karaf.main.Main, pid:2082' karaf main: org.apache.karaf.main.Main, pid:2082 + ssh 10.30.171.188 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2082' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Let'\''s take the karaf thread dump' Let's take the karaf thread dump + ssh 10.30.171.110 'sudo ps aux' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. 
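The repeated "netstat: command not found" messages above (and the later "sudo: netstat: command not found" failures in Start_All_And_Sync) come from the post-startup script's port checks; the test VMs evidently do not have net-tools installed. A minimal sketch of doing the same checks with ss from iproute2 instead; the wait_for_port helper, the 8181 port number and the retry counts are illustrative placeholders, not values taken from post-startup-script.sh:

    # Minimal sketch, assuming the script only needs to wait for a TCP port and
    # then list listeners. 'ss' ships with iproute2 and is present even when
    # net-tools (netstat) is not; the port and retry counts are examples only.
    wait_for_port() {
        local port="$1" tries="${2:-36}"      # 36 tries x 5 s = 3 minutes
        for _ in $(seq 1 "${tries}"); do
            ss -tln | grep -q ":${port} " && return 0
            sleep 5
        done
        return 1
    }

    wait_for_port 8181 || echo "controller port 8181 never came up"
    sudo ss -tlnp                             # rough replacement for 'sudo netstat -tlnp'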
++ grep org.apache.karaf.main.Main /w/workspace/controller-csit-3node-clustering-ask-all-titanium/ps_before.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=2091 + echo 'karaf main: org.apache.karaf.main.Main, pid:2091' karaf main: org.apache.karaf.main.Main, pid:2091 + ssh 10.30.171.110 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 2091' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + '[' 0 -gt 0 ']' + echo 'Generating controller variables...' Generating controller variables... ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + odl_variables=' -v ODL_SYSTEM_1_IP:10.30.170.77' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + odl_variables=' -v ODL_SYSTEM_1_IP:10.30.170.77 -v ODL_SYSTEM_2_IP:10.30.171.188' + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + odl_variables=' -v ODL_SYSTEM_1_IP:10.30.170.77 -v ODL_SYSTEM_2_IP:10.30.171.188 -v ODL_SYSTEM_3_IP:10.30.171.110' + echo 'Generating mininet variables...' Generating mininet variables... ++ seq 1 0 + get_test_suites SUITES + local __suite_list=SUITES + echo 'Locating test plan to use...' Locating test plan to use... + testplan_filepath=/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/testplans/controller-clustering-ask-titanium.txt + '[' '!' -f /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/testplans/controller-clustering-ask-titanium.txt ']' + testplan_filepath=/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/testplans/controller-clustering-ask.txt + '[' disabled '!=' disabled ']' + echo 'Changing the testplan path...' Changing the testplan path... + sed s:integration:/w/workspace/controller-csit-3node-clustering-ask-all-titanium: /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/testplans/controller-clustering-ask.txt + cat testplan.txt # Place the suites in run order: /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/master_stability.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/chasing_the_leader.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_kill.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_freeze.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/carpeople_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_persistence_recovery.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/entity_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_outage_corners.robot + '[' -z '' ']' ++ grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt ++ tr '\012' ' ' + suite_list='/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_precedence.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/master_stability.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/chasing_the_leader.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_kill.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_freeze.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/carpeople_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_persistence_recovery.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/entity_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_outage_corners.robot ' + eval 'SUITES='\''/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_precedence.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/master_stability.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/chasing_the_leader.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_kill.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_freeze.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/carpeople_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_persistence_recovery.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/entity_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover_isolation.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_outage_corners.robot '\''' ++ SUITES='/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/master_stability.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/chasing_the_leader.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_kill.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_freeze.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/carpeople_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_persistence_recovery.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/entity_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_outage_corners.robot ' + echo 'Starting Robot test suites /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/master_stability.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/chasing_the_leader.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_kill.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_freeze.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/carpeople_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_persistence_recovery.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/entity_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_outage_corners.robot ...' Starting Robot test suites /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/master_stability.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/chasing_the_leader.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_kill.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_freeze.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/carpeople_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_persistence_recovery.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/entity_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_outage_corners.robot ... 
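The trace above builds the suite list that is handed to robot: the testplan from test/csit/testplans is copied with its "integration" prefix rewritten to the workspace path, comment and blank lines are filtered out, and the remaining suite paths are joined with spaces into SUITES. A condensed sketch of that pipeline, using the same sed/grep/tr steps shown in the trace (variable names are simplified, the workspace value is this job's):

    # Condensed sketch of the suite-list construction traced above.
    WORKSPACE=/w/workspace/controller-csit-3node-clustering-ask-all-titanium
    testplan="${WORKSPACE}/test/csit/testplans/controller-clustering-ask.txt"

    # Rewrite the leading "integration" prefix in each suite path to the workspace.
    sed "s:integration:${WORKSPACE}:" "${testplan}" > testplan.txt

    # Drop comment and blank lines, then join the suite paths with spaces.
    SUITES=$(grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt | tr '\012' ' ')

    echo "Starting Robot test suites ${SUITES} ..."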
+ robot -N controller-clustering-ask.txt --removekeywords wuks -e exclude -e skip_if_titanium -v BUNDLEFOLDER:karaf-0.22.2-SNAPSHOT -v BUNDLE_URL:https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/karaf/0.22.2-SNAPSHOT/karaf-0.22.2-20251130.175422-194.zip -v CONTROLLER:10.30.170.77 -v CONTROLLER1:10.30.171.188 -v CONTROLLER2:10.30.171.110 -v CONTROLLER_USER:jenkins -v JAVA_HOME:/usr/lib/jvm/java-21-openjdk-amd64 -v JDKVERSION:openjdk21 -v JENKINS_WORKSPACE:/w/workspace/controller-csit-3node-clustering-ask-all-titanium -v MININET: -v MININET1: -v MININET2: -v MININET_USER:jenkins -v NEXUSURL_PREFIX:https://nexus.opendaylight.org -v NUM_ODL_SYSTEM:3 -v NUM_TOOLS_SYSTEM:0 -v ODL_STREAM:titanium -v ODL_SYSTEM_IP:10.30.170.77 -v ODL_SYSTEM_1_IP:10.30.170.77 -v ODL_SYSTEM_2_IP:10.30.171.188 -v ODL_SYSTEM_3_IP:10.30.171.110 -v ODL_SYSTEM_USER:jenkins -v TOOLS_SYSTEM_IP: -v TOOLS_SYSTEM_USER:jenkins -v USER_HOME:/home/jenkins -v IS_KARAF_APPL:True -v WORKSPACE:/tmp /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/rpc_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_precedence.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_rpc_broker/action_provider_partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/master_stability.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/partition_and_heal.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/cluster_singleton/chasing_the_leader.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_kill.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot 
/w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_freeze.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/singleton_service/global_rpc_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/dom_data_broker/restart_odl_with_tell_based_false.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/carpeople_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_persistence_recovery.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/entity_isolate.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/buycar_failover_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_failover_crud_isolation.robot /w/workspace/controller-csit-3node-clustering-ask-all-titanium/test/csit/suites/controller/Clustering_Datastore/car_outage_corners.robot ============================================================================== controller-clustering-ask.txt ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | FAIL | Keyword 'Verify_Karaf_Is_Not_Running_On_Member' failed after retrying for 6 minutes. The last error was: Found running Karaf count: 1: 0 != 1 ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: ReadTimeout: HTTPConnectionPool(host='10.30.171.110', port=8181): Read timed out. (read timeout=125.0) ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 1 passed, 2 failed ============================================================================== controller-clustering-ask.txt.Rpc Provider Precedence :: DOMRpcBroker testi... ============================================================================== Register_Rpc_On_Each_Node :: Register global rpc on each node of t... | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_precedence_register_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_precedence_register_rpc_on_each_node&order=bug_status" ReadTimeout: HTTPConnectionPool(host='10.30.171.110', port=8181): Read timed out. (read timeout=125.0) ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node :: Verify that the rpc response comes from... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_precedence_invoke_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_precedence_invoke_rpc_on_each_node&order=bug_status" ReadTimeout: HTTPConnectionPool(host='10.30.171.110', port=8181): Read timed out. (read timeout=125.0) ------------------------------------------------------------------------------ Unregister_Rpc_On_Node :: Unregister the rpc on one of the cluster... | PASS | ------------------------------------------------------------------------------ Invoke_Rpc_On_Node_With_Unregistered_Rpc :: Invoke rcp on the node... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_precedence_invoke_rpc_on_node_with_unregistered_rpc" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_precedence_invoke_rpc_on_node_with_unregistered_rpc&order=bug_status" ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Invoke_Rpc_On_Remaining_Nodes :: Verify that the rpc response come... | PASS | ------------------------------------------------------------------------------ Reregister_Rpc_On_Node :: Reregister the rpc. | PASS | ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node_Again :: Verify that the rpc response come... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_precedence_invoke_rpc_on_each_node_again" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_precedence_invoke_rpc_on_each_node_again&order=bug_status" ReadTimeout: HTTPConnectionPool(host='10.30.171.110', port=8181): Read timed out. (read timeout=125.0) ------------------------------------------------------------------------------ Unregister_Rpc_On_Each_Node :: Unregister rpc on every node. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_precedence_unregister_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_precedence_unregister_rpc_on_each_node&order=bug_status" ReadTimeout: HTTPConnectionPool(host='10.30.171.110', port=8181): Read timed out. 
(read timeout=125.0) ------------------------------------------------------------------------------ controller-clustering-ask.txt.Rpc Provider Precedence :: DOMRpcBro... | FAIL | 8 tests, 3 passed, 5 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | PASS | ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Got rc: 1 or stderr was not empty: sudo: netstat: command not found ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 2 passed, 1 failed ============================================================================== controller-clustering-ask.txt.Rpc Provider Partition And Heal :: DOMRpcBrok... ============================================================================== Register_Rpc_On_Two_Nodes :: Register rpc on two nodes of the odl ... | PASS | ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node :: Invoke get-constant rpc on every node o... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_partition_and_heal_invoke_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_partition_and_heal_invoke_rpc_on_each_node&order=bug_status" Keyword 'Verify_Constant_On_Active_Nodes' failed after retrying for 10 seconds. The last error was: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Isolate_One_Node :: Isolate one node with registered rpc. From the... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_partition_and_heal_isolate_one_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_partition_and_heal_isolate_one_node&order=bug_status" ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Invoke_Rpc_On_Nonisolated_Nodes :: Invoke rpc on non-islolated nodes. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_partition_and_heal_invoke_rpc_on_nonisolated_nodes" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_partition_and_heal_invoke_rpc_on_nonisolated_nodes&order=bug_status" Keyword 'DrbCommons.Verify_Constant_On_Active_Nodes' failed after retrying for 1 minute. The last error was: Keyword 'Verify_Constant_On_Active_Nodes' failed after retrying for 10 seconds. 
The last error was: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Rejoin_Isolated_Member :: Rejoin isolated node | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_partition_and_heal_rejoin_isolated_member" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_partition_and_heal_rejoin_isolated_member&order=bug_status" Variable '${isolated_idx}' not found. ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node_Again :: Invoke rpc get-constant on every ... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_rpc_provider_partition_and_heal_invoke_rpc_on_each_node_again" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_rpc_provider_partition_and_heal_invoke_rpc_on_each_node_again&order=bug_status" Keyword 'Verify_Constant_On_Active_Nodes' failed after retrying for 10 seconds. The last error was: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Unregister_Rpc_On_Each_Node :: Inregister rpc on both nodes. | PASS | ------------------------------------------------------------------------------ controller-clustering-ask.txt.Rpc Provider Partition And Heal :: D... | FAIL | 7 tests, 2 passed, 5 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | FAIL | Keyword 'Verify_Karaf_Is_Not_Running_On_Member' failed after retrying for 6 minutes. The last error was: Found running Karaf count: 1: 0 != 1 ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: ConnectionError: HTTPConnectionPool(host='10.30.171.110', port=8181): Max retries exceeded with url: /jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 1 passed, 2 failed ============================================================================== controller-clustering-ask.txt.Action Provider Precedence :: DOMRpcBroker te... ============================================================================== Register_Rpc_On_Each_Node :: Register routed rpc on each node of t... | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_precedence_register_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_precedence_register_rpc_on_each_node&order=bug_status" ConnectionError: HTTPConnectionPool(host='10.30.171.110', port=8181): Max retries exceeded with url: /rests/operations/odl-mdsal-lowlevel-control:register-bound-constant (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node :: Verify that the rpc response comes from... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_precedence_invoke_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_precedence_invoke_rpc_on_each_node&order=bug_status" ConnectionError: HTTPConnectionPool(host='10.30.171.110', port=8181): Max retries exceeded with url: /rests/operations/odl-mdsal-lowlevel-target:get-contexted-constant (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ------------------------------------------------------------------------------ Unregister_Rpc_On_Node :: Unregister the rpc on one of the cluster... | PASS | ------------------------------------------------------------------------------ Invoke_Rpc_On_Node_With_Unregistered_Rpc :: Invoke rcp on the node... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_precedence_invoke_rpc_on_node_with_unregistered_rpc" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_precedence_invoke_rpc_on_node_with_unregistered_rpc&order=bug_status" ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Invoke_Rpc_On_Remaining_Nodes :: Verify that the rpc response come... | PASS | ------------------------------------------------------------------------------ Reregister_Rpc_On_Node :: Reregister the rpc. | PASS | ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node_Again :: Verify that the rpc response come... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_precedence_invoke_rpc_on_each_node_again" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_precedence_invoke_rpc_on_each_node_again&order=bug_status" ConnectionError: HTTPConnectionPool(host='10.30.171.110', port=8181): Max retries exceeded with url: /rests/operations/odl-mdsal-lowlevel-target:get-contexted-constant (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ------------------------------------------------------------------------------ Unregister_Rpc_On_Each_Node :: Unregister rpc on every node. | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_precedence_unregister_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_precedence_unregister_rpc_on_each_node&order=bug_status" ConnectionError: HTTPConnectionPool(host='10.30.171.110', port=8181): Max retries exceeded with url: /rests/operations/odl-mdsal-lowlevel-control:unregister-bound-constant (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ------------------------------------------------------------------------------ controller-clustering-ask.txt.Action Provider Precedence :: DOMRpc... | FAIL | 8 tests, 3 passed, 5 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | PASS | ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Got rc: 1 or stderr was not empty: sudo: netstat: command not found ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 2 passed, 1 failed ============================================================================== controller-clustering-ask.txt.Action Provider Partition And Heal :: DOMRpcB... ============================================================================== Register_Rpc_On_Two_Nodes :: Register rpc on two nodes of the odl ... | PASS | ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node :: Invoke get-contexted-constant rpc on ev... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_partition_and_heal_invoke_rpc_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_partition_and_heal_invoke_rpc_on_each_node&order=bug_status" Keyword 'Verify_Contexted_Constant_On_Active_Nodes' failed after retrying for 10 seconds. The last error was: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Isolate_One_Node :: Isolate one node with registered rpc. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_partition_and_heal_isolate_one_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_partition_and_heal_isolate_one_node&order=bug_status" ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Invoke_Rpc_On_Remaining_Nodes :: Invoke rpc on non-islolated nodes... | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_partition_and_heal_invoke_rpc_on_remaining_nodes" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_partition_and_heal_invoke_rpc_on_remaining_nodes&order=bug_status" Keyword 'DrbCommons.Verify_Contexted_Constant_On_Active_Nodes' failed after retrying for 1 minute. The last error was: Keyword 'Verify_Contexted_Constant_On_Active_Nodes' failed after retrying for 10 seconds. The last error was: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Rejoin_Isolated_Member :: Rejoin isolated node | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_partition_and_heal_rejoin_isolated_member" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_partition_and_heal_rejoin_isolated_member&order=bug_status" Variable '${isolated_idx}' not found. ------------------------------------------------------------------------------ Invoke_Rpc_On_Each_Node_Again :: Invoke rpc get-contexted-constant... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_action_provider_partition_and_heal_invoke_rpc_on_each_node_again" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_action_provider_partition_and_heal_invoke_rpc_on_each_node_again&order=bug_status" Keyword 'Verify_Contexted_Constant_On_Active_Nodes' failed after retrying for 10 seconds. The last error was: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ------------------------------------------------------------------------------ Unregister_Rpc_On_Each_Node :: Inregister rpc on both nodes. | PASS | ------------------------------------------------------------------------------ controller-clustering-ask.txt.Action Provider Partition And Heal :... | FAIL | 7 tests, 2 passed, 5 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | PASS | ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Got rc: 1 or stderr was not empty: sudo: netstat: command not found ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 2 passed, 1 failed ============================================================================== controller-clustering-ask.txt.Master Stability :: Cluster Singleton testing... ============================================================================== Register_Singleton_Constant_On_Each_Node_And_Verify :: Register a ... | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_master_stability_register_singleton_constant_on_each_node_and_verify" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_master_stability_register_singleton_constant_on_each_node_and_verify&order=bug_status" Could not parse owner and candidates for device get-singleton-constant-service'] ------------------------------------------------------------------------------ Unregister_Singleton_Constant_On_Non_Master_Node :: Unregister the... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_master_stability_unregister_singleton_constant_on_non_master_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_master_stability_unregister_singleton_constant_on_non_master_node&order=bug_status" Variable '@{cs_candidates}' not found. Did you mean: @{cs_exp_candidates} ------------------------------------------------------------------------------ Monitor_Stability_While_Unregistered :: Verify that the owner rema... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_master_stability_monitor_stability_while_unregistered" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_master_stability_monitor_stability_while_unregistered&order=bug_status" Variable '${cs_owner}' not found. ------------------------------------------------------------------------------ Reregister_Singleton_Constant :: Re-register the unregistered cand... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_master_stability_reregister_singleton_constant" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_master_stability_reregister_singleton_constant&order=bug_status" Variable '${unregistered_node}' not found. ------------------------------------------------------------------------------ Verify_Stability_After_Reregistration :: Verify that the owner rem... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_master_stability_verify_stability_after_reregistration" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_master_stability_verify_stability_after_reregistration&order=bug_status" Variable '${cs_owner}' not found. ------------------------------------------------------------------------------ Unregister_Singleton_Constant_On_Each_Node :: Unregister the appli... | PASS | ------------------------------------------------------------------------------ controller-clustering-ask.txt.Master Stability :: Cluster Singleto... | FAIL | 6 tests, 1 passed, 5 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. 
| PASS | ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 2 passed, 1 failed ============================================================================== controller-clustering-ask.txt.Partition And Heal :: Cluster Singleton testi... ============================================================================== Register_Singleton_Constant_On_Each_Node :: Register a candidate a... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_partition_and_heal_register_singleton_constant_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_partition_and_heal_register_singleton_constant_on_each_node&order=bug_status" HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/operations/odl-mdsal-lowlevel-control:register-singleton-constant ------------------------------------------------------------------------------ Verify_Singleton_Constant_On_Each_Node :: Store the owner and cand... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_partition_and_heal_verify_singleton_constant_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_partition_and_heal_verify_singleton_constant_on_each_node&order=bug_status" Could not parse owner and candidates for device get-singleton-constant-service'] ------------------------------------------------------------------------------ Isolate_Owner_Node :: Isolate the cluster node which is the owner.... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_partition_and_heal_isolate_owner_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_partition_and_heal_isolate_owner_node&order=bug_status" Variable '${cs_owner}' not found. ------------------------------------------------------------------------------ Monitor_Stability_While_Isolated :: Monitor the stability of the s... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_partition_and_heal_monitor_stability_while_isolated" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_partition_and_heal_monitor_stability_while_isolated&order=bug_status" Invalid IF condition: Evaluating expression '"${index}" == "${cs_isolated_index}"' failed: Variable '${cs_isolated_index}' not found. 
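
The failures collected above keep reducing to three signatures: "Connection refused" on port 8181 (the member is not listening at all), "sudo: netstat: command not found" (the port check behind Start_All_And_Sync shells out to netstat, which is not installed on the member image), and "401 Client Error: Unauthorized" against the Jolokia shard-manager URL (the member answers HTTP but rejects the request as unauthorized). A minimal stand-alone probe that separates those three cases is sketched below; it is an illustration only, not part of this job: the host, port and MBean name are copied from the log, while the admin/admin credentials and the SyncStatus attribute are assumptions.

#!/usr/bin/env python3
"""Illustrative readiness probe (sketch only, not part of this job).

Separates the three failure signatures seen in the suites above:
  * port 8181 not listening ("Connection refused"),
  * the netstat-based port check unavailable on the member, and
  * a "401 Client Error: Unauthorized" from the Jolokia shard-manager read.
"""
import socket

import requests

ODL_HOST = "10.30.171.110"        # member address taken from the failures above
RESTCONF_PORT = 8181              # RESTCONF/Jolokia port taken from the log
CREDENTIALS = ("admin", "admin")  # assumption: conventional ODL default credentials
SHARD_MANAGER_MBEAN = (
    "org.opendaylight.controller:Category=ShardManager,"
    "name=shard-manager-config,type=DistributedConfigDatastore"
)

def port_is_open(host, port, timeout=5.0):
    """Plain TCP connect, standing in for the netstat-based keyword check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def shard_manager_ready(host, port):
    """Read the shard-manager MBean over Jolokia, as the readiness keyword does."""
    url = "http://%s:%s/jolokia/read/%s" % (host, port, SHARD_MANAGER_MBEAN)
    response = requests.get(url, auth=CREDENTIALS, timeout=10)
    response.raise_for_status()  # a 401 here matches the readiness failures above
    # Assumption: SyncStatus is the attribute the readiness check keys on.
    return response.json().get("value", {}).get("SyncStatus", False)

if __name__ == "__main__":
    if not port_is_open(ODL_HOST, RESTCONF_PORT):
        raise SystemExit("%s:%s is not listening (connection refused)" % (ODL_HOST, RESTCONF_PORT))
    print("shard manager in sync:", shard_manager_ready(ODL_HOST, RESTCONF_PORT))

A plain TCP connect is used instead of netstat so the check does not depend on net-tools; ss would be the native shell replacement on images that no longer ship netstat.
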
------------------------------------------------------------------------------ Rejoin_Isolated_node :: Rejoin isolated node. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_partition_and_heal_rejoin_isolated_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_partition_and_heal_rejoin_isolated_node&order=bug_status" Variable '${cs_isolated_index}' not found. ------------------------------------------------------------------------------ Unregister_Singleton_Constant_On_Each_Node :: Unregister the appli... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_partition_and_heal_unregister_singleton_constant_on_each_node" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_partition_and_heal_unregister_singleton_constant_on_each_node&order=bug_status" HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/operations/odl-mdsal-lowlevel-control:unregister-singleton-constant ------------------------------------------------------------------------------ controller-clustering-ask.txt.Partition And Heal :: Cluster Single... | FAIL | 6 tests, 0 passed, 6 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | FAIL | Keyword 'Verify_Karaf_Is_Not_Running_On_Member' failed after retrying for 6 minutes. The last error was: Found running Karaf count: 1: 0 != 1 ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.170.77:8181/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 1 passed, 2 failed ============================================================================== controller-clustering-ask.txt.Chasing The Leader :: Cluster Singleton testi... ============================================================================== Register_Candidates :: Register a candidate application on each no... | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_chasing_the_leader_register_candidates" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_chasing_the_leader_register_candidates&order=bug_status" HTTPError: 401 Client Error: Unauthorized for url: http://10.30.170.77:8181/rests/operations/odl-mdsal-lowlevel-control:register-flapping-singleton ------------------------------------------------------------------------------ Do_Nothing :: Do nothing for the time of the test duration, becaus... | PASS | ------------------------------------------------------------------------------ Unregister_Candidates_And_Validate_Criteria :: Unregister the test... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_chasing_the_leader_unregister_candidates_and_validate_criteria" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_chasing_the_leader_unregister_candidates_and_validate_criteria&order=bug_status" HTTPError: 401 Client Error: Unauthorized for url: http://10.30.170.77:8181/rests/operations/odl-mdsal-lowlevel-control:unregister-flapping-singleton ------------------------------------------------------------------------------ controller-clustering-ask.txt.Chasing The Leader :: Cluster Single... | FAIL | 3 tests, 1 passed, 2 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | PASS | ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 2 passed, 1 failed ============================================================================== controller-clustering-ask.txt.Global Rpc Kill :: Controller functional HA t... ============================================================================== Get_Basic_Rpc_Test_Owner :: Find a service owner and successors. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_get_basic_rpc_test_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_get_basic_rpc_test_owner&order=bug_status" Could not parse owner and candidates for device Basic-rpc-test'] ------------------------------------------------------------------------------ Rpc_Before_Stopping_On_Owner :: Run rpc on the service owner. | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_rpc_before_stopping_on_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_rpc_before_stopping_on_owner&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_Before_Stop_On_Successors :: Run rpc on non owher cluster nodes. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_rpc_before_stop_on_successors" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_rpc_before_stop_on_successors&order=bug_status" Variable '@{brt_successors}' not found. ------------------------------------------------------------------------------ Stop_Current_Owner_Member :: Stop cluster node which is the owner. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_stop_current_owner_member" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_stop_current_owner_member&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ Verify_New_Basic_Rpc_Test_Owner_Elected :: Verify new owner of the... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_verify_new_basic_rpc_test_owner_elected" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_verify_new_basic_rpc_test_owner_elected&order=bug_status" Variable '${old_brt_successors}' not found. ------------------------------------------------------------------------------ Rpc_On_Remained_Cluster_Nodes :: Run rpc on remained cluster nodes. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_rpc_on_remained_cluster_nodes" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_rpc_on_remained_cluster_nodes&order=bug_status" Variable '@{old_brt_successors}' not found. ------------------------------------------------------------------------------ Restart_Stopped_Member :: Restart stopped node | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_restart_stopped_member" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_restart_stopped_member&order=bug_status" Variable '${old_brt_owner}' not found. ------------------------------------------------------------------------------ Verify_New_Owner_Remained_After_Rejoin :: Verify no owner change h... | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_verify_new_owner_remained_after_rejoin" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_verify_new_owner_remained_after_rejoin&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_New_Owner :: Run rpc on the new service owner ... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_rpc_after_rejoin_on_new_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_rpc_after_rejoin_on_new_owner&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_Old_Owner :: Run rpc on rejoined cluster node. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_rpc_after_rejoin_on_old_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_rpc_after_rejoin_on_old_owner&order=bug_status" Variable '${old_brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_All :: Run rpc again on all nodes. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_kill_rpc_after_rejoin_on_all" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_kill_rpc_after_rejoin_on_all&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ controller-clustering-ask.txt.Global Rpc Kill :: Controller functi... | FAIL | 11 tests, 0 passed, 11 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | PASS | ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 2 passed, 1 failed ============================================================================== controller-clustering-ask.txt.Global Rpc Freeze :: Controller functional HA... 
============================================================================== Get_Basic_Rpc_Test_Owner :: Find a service owner and successors. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_get_basic_rpc_test_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_get_basic_rpc_test_owner&order=bug_status" Could not parse owner and candidates for device Basic-rpc-test'] ------------------------------------------------------------------------------ Rpc_Before_Freezing_On_Owner :: Run rpc on the service owner. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_rpc_before_freezing_on_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_rpc_before_freezing_on_owner&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_Before_Freeze_On_Successors :: Run rpc on non owher cluster no... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_rpc_before_freeze_on_successors" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_rpc_before_freeze_on_successors&order=bug_status" Variable '@{brt_successors}' not found. ------------------------------------------------------------------------------ Freeze_Current_Owner_Member :: Stop cluster node which is the owner. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_freeze_current_owner_member" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_freeze_current_owner_member&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ Verify_New_Basic_Rpc_Test_Owner_Elected :: Verify new owner of the... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_verify_new_basic_rpc_test_owner_elected" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_verify_new_basic_rpc_test_owner_elected&order=bug_status" Variable '${old_brt_successors}' not found. ------------------------------------------------------------------------------ Rpc_On_Remained_Cluster_Nodes :: Run rpc on remained cluster nodes. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_rpc_on_remained_cluster_nodes" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_rpc_on_remained_cluster_nodes&order=bug_status" Variable '@{old_brt_successors}' not found. ------------------------------------------------------------------------------ Unfreeze_Frozen_Member :: Restart frozen node | FAIL | ... 
click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_unfreeze_frozen_member" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_unfreeze_frozen_member&order=bug_status" Variable '${old_brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_New_Owner :: Run rpc on the new service owner ... | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_rpc_after_rejoin_on_new_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_rpc_after_rejoin_on_new_owner&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_Old_Owner :: Run rpc on rejoined cluster node. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_rpc_after_rejoin_on_old_owner" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_rpc_after_rejoin_on_old_owner&order=bug_status" Variable '${old_brt_owner}' not found. ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_All :: Run rpc again on all nodes. | FAIL | ... click for list of related bugs or create a new one if needed (with the "controller_clustering_ask_txt_global_rpc_freeze_rpc_after_rejoin_on_all" reference somewhere inside) "https://bugs.opendaylight.org/buglist.cgi?f1=cf_external_ref&o1=substring&v1=controller_clustering_ask_txt_global_rpc_freeze_rpc_after_rejoin_on_all&order=bug_status" Variable '${brt_owner}' not found. ------------------------------------------------------------------------------ controller-clustering-ask.txt.Global Rpc Freeze :: Controller func... | FAIL | 10 tests, 0 passed, 10 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | FAIL | Keyword 'Verify_Karaf_Is_Not_Running_On_Member' failed after retrying for 6 minutes. The last error was: Found running Karaf count: 1: 0 != 1 ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.170.77:8181/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... 
| FAIL | 3 tests, 1 passed, 2 failed ============================================================================== controller-clustering-ask.txt.Global Rpc Isolate :: Controller functional H... ============================================================================== Get_Basic_Rpc_Test_Owner :: Find a service owner and successors. | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rpc_Before_Isolation_On_Owner :: Run rpc on the service owner. | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rpc_Before_Isolation_On_Successors :: Run rpc on non owher cluster... | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Isolate_Current_Owner_Member :: Isolating cluster node which is th... | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Verify_New_Basic_Rpc_Test_Owner_Elected :: Verify new owner of the... | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rpc_On_Isolated_Node :: Run rpc on isolated cluster node. | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rpc_On_Non_Isolated_Cluster_Nodes :: Run rpc on remained cluster n... | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rejoin_Isolated_Member :: Rejoin isolated node | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_New_Owner :: Run rpc on the new service owner ... | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_Old_Owner :: Run rpc on rejoined cluster node. | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ Rpc_After_Rejoin_On_All :: Run rpc again on all nodes. | FAIL | Parent suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 ------------------------------------------------------------------------------ controller-clustering-ask.txt.Global Rpc Isolate :: Controller fun... 
| FAIL | Suite setup failed: NoValidConnectionsError: [Errno None] Unable to connect to port 8101 on 10.30.171.188 11 tests, 0 passed, 11 failed ============================================================================== controller-clustering-ask.txt.Restart Odl With Tell Based False :: Unset te... ============================================================================== Stop_All_Members :: Stop every odl node. | PASS | ------------------------------------------------------------------------------ Unset_Tell_Based_Protocol_Usage :: Comment out the flag usage in c... | PASS | ------------------------------------------------------------------------------ Start_All_And_Sync :: Start each member and wait for sync. | FAIL | Keyword 'Verify_Members_Are_Ready' failed after retrying for 6 minutes. The last error was: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore ------------------------------------------------------------------------------ controller-clustering-ask.txt.Restart Odl With Tell Based False ::... | FAIL | 3 tests, 2 passed, 1 failed ============================================================================== controller-clustering-ask.txt.Carpeople Crud :: Suite for performing basic ... ============================================================================== Add_Cars_To_Leader :: Add 30 cars to car Leader by one big PUT. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Added_Cars_On_Leader :: GET response from Leader should match ... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Added_Cars_On_Followers :: The same check on other members. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_People_To_First_Follower :: Add 30 people to people first Foll... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Added_People_On_Leader :: GET response from Leader should matc... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Added_People_On_Followers :: The same check on other members. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Leader :: Buy some cars on car-people Leader, loop of ... 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Followers :: On each Follower buy corresponding ID seg... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Added_CarPeople_On_Leader :: GET car-person mappings from Lead... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Added_CarPeople_On_Followers :: The same check on other members. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_CarPeople_On_Leader :: DELETE car-people container. No ... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_People_On_Leader :: DELETE people container. No verific... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_Cars_On_Leader :: DELETE cars container. No verificatio... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ controller-clustering-ask.txt.Carpeople Crud :: Suite for performi... | FAIL | Suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig 13 tests, 0 passed, 13 failed ============================================================================== controller-clustering-ask.txt.Car Failover Crud :: Suite mixing basic opera... ============================================================================== Add_Original_Cars_On_Old_Leader_And_Verify :: Add initial cars on ... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Stop_Original_Car_Leader :: Stop the car Leader to cause a new lea... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Wait_For_New_Leader :: Wait until new car Leader is elected. 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Original_Cars_On_New_Leader :: GET cars from new Leader, shoul... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Original_Cars_On_New_Followers :: The same check on other exis... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Original_Cars_On_New_Leader :: Delete cars on the new Leader. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_Leader_Cars_On_New_Leader :: Add cars on the new Leader. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Leader_Cars_On_New_Leader :: GET cars from new Leader, should ... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Leader_Cars_On_New_Followers :: The same check on other existi... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Leader_Cars_On_New_First_Follower :: Delete cars in new fir... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_Follower_Cars_On_New_First_Follower :: Add cars on the new fir... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Folower_Cars_On_New_Leader :: Get cars from the new Leader, sh... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Follower_Cars_On_New_Followers :: The same check on other exis... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Start_Old_Car_Leader :: Start the stopped member without deleting ... 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Folower_Cars_On_Old_Leader :: GET cars from the restarted memb... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Follower_Cars_On_New_Leader :: Delete cars on the last Leader. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ controller-clustering-ask.txt.Car Failover Crud :: Suite mixing ba... | FAIL | Suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig 16 tests, 0 passed, 16 failed ============================================================================== controller-clustering-ask.txt.Car Persistence Recovery :: This test restart... ============================================================================== Add_Cars_On_Leader_And_Verify :: Single big PUT to datastore to ad... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Stop_All_Members :: Stop all controllers. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Start_All_Members :: Start all controllers (should restore the per... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Memorize_Leader_And_Followers :: Locate current Leader of car Shard. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Cars_On_Leader :: GET cars from Leader, should match the PUT d... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Cars_On_Followers :: The same check on other members. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Cars_On_Leader :: Delete cars on the new Leader. 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ controller-clustering-ask.txt.Car Persistence Recovery :: This tes... | FAIL | Suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig 7 tests, 0 passed, 7 failed ============================================================================== controller-clustering-ask.txt.Buycar Failover :: This test focuses on testi... ============================================================================== Add_Cars_To_Leader_And_Verify :: Add all needed cars to car Leader... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_People_To_First_Follower_And_Verify :: Add all needed people t... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Leader_And_Verify :: Buy some cars on the leader member. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Follower_And_Verify :: Buy some cars on the first foll... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Reboot_People_Leader :: Previous people Leader is rebooted. We sho... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Leader_After_Reboot_And_Verify :: Buy some cars on the... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Follower_After_Reboot_And_Verify :: Buy some cars on t... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_CarPeople :: DELETE car-people container. No verificati... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_People :: DELETE people container. No verification beyo... 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_Cars :: DELETE cars container. No verification beyond h... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ controller-clustering-ask.txt.Buycar Failover :: This test focuses... | FAIL | Suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig 10 tests, 0 passed, 10 failed ============================================================================== controller-clustering-ask.txt.Entity Isolate :: Suite for performing member... ============================================================================== Check All Shards Before Isolate :: Check all shards in controller. | FAIL | Parent suite setup failed: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.f... [ Message content over the limit has been removed. 
] ...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 190 (char 619) ------------------------------------------------------------------------------ Isolate Entity Leader :: Isolate the entity-ownership Leader to ca... | FAIL | Parent suite setup failed: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.f... [ Message content over the limit has been removed. 
] ...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 190 (char 619) ------------------------------------------------------------------------------ Check All Shards After Isolate :: Check all shards in controller. | FAIL | Parent suite setup failed: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.f... [ Message content over the limit has been removed. 
] ...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 190 (char 619) ------------------------------------------------------------------------------ Rejoin Entity Leader :: Rejoin the entity-ownership Leader. | FAIL | Parent suite setup failed: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.f... [ Message content over the limit has been removed. 
] ...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 190 (char 619) ------------------------------------------------------------------------------ Check All Shards After Rejoin :: Check all shards in controller. | FAIL | Parent suite setup failed: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.f... [ Message content over the limit has been removed. 
] ...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 190 (char 619) ------------------------------------------------------------------------------ controller-clustering-ask.txt.Entity Isolate :: Suite for performi... | FAIL | Suite setup failed: Evaluating expression 'json.loads(\'\'\'{\n "error": "javax.management.InstanceNotFoundException : org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "error_type": "javax.management.InstanceNotFoundException",\n "request": {\n "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore",\n "type": "read"\n },\n "stacktrace": "javax.management.InstanceNotFoundException: org.opendaylight.controller:Category=Shards,name=member-1-shard-entity-ownership-operational,type=DistributedOperationalDatastore\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1073)\\n\\tat java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1343)\\n\\tat java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:921)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:46)\\n\\tat org.jolokia.handler.ReadHandler$1.execute(ReadHandler.java:41)\\n\\tat org.jolokia.backend.executor.AbstractMBeanServerExecutor.call(AbstractMBeanServerExecutor.java:90)\\n\\tat org.jolokia.handler.ReadHandler.getMBeanInfo(ReadHandler.java:233)\\n\\tat org.jolokia.handler.ReadHandler.getAllAttributesNames(ReadHandler.java:245)\\n\\tat org.jolokia.handler.ReadHandler.resolveAttributes(ReadHandler.java:221)\\n\\tat org.jolokia.handler.ReadHandler.f... [ Message content over the limit has been removed. 
] ...lipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\\n\\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\\n\\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\\n\\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\\n\\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\\n\\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\\n\\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\\n\\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\\n\\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)\\n\\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\\n\\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\\n\\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\\n\\tat java.base/java.lang.Thread.run(Thread.java:1583)\\n",\n "status": 404\n}\n\'\'\')' failed: JSONDecodeError: Invalid control character at: line 8 column 190 (char 619) 5 tests, 0 passed, 5 failed ============================================================================== controller-clustering-ask.txt.Buycar Failover Isolation :: This test focuse... ============================================================================== Add_Cars_To_Leader_And_Verify :: Add all needed cars to car Leader... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_People_To_First_Follower_And_Verify :: Add all needed people t... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Leader_And_Verify :: Buy some cars on the leader member. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Follower_And_Verify :: Buy some cars on the first foll... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Isolate_and_Rejoin_People_Leader :: Previous people Leader is isol... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Leader_After_Rejoin_And_Verify :: Buy some cars on the... 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Buy_Cars_On_Follower_After_Rejoin_And_Verify :: Buy some cars on t... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_CarPeople :: DELETE car-people container. No verificati... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_People :: DELETE people container. No verification beyo... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_All_Cars :: DELETE cars container. No verification beyond h... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ controller-clustering-ask.txt.Buycar Failover Isolation :: This te... | FAIL | Suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig 10 tests, 0 passed, 10 failed ============================================================================== controller-clustering-ask.txt.Car Failover Crud Isolation :: Suite mixing b... ============================================================================== Add_Original_Cars_On_Old_Leader_And_Verify :: Add initial cars on ... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Isolate_Original_Car_Leader :: Isolate the car Leader to cause a n... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Wait_For_New_Leader :: Wait until new car Leader is elected. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Original_Cars_On_New_Leader :: GET cars from new Leader, shoul... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Original_Cars_On_New_Followers :: The same check on other exis... 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Original_Cars_On_New_Leader :: Delete cars on the new Leader. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_Leader_Cars_On_New_Leader :: Add cars on the new Leader. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Leader_Cars_On_New_Leader :: GET cars from new Leader, should ... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Leader_Cars_On_New_Followers :: The same check on other existi... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Leader_Cars_On_New_First_Follower :: Delete cars in new fir... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_Follower_Cars_On_New_First_Follower :: Add cars on the new fir... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Folower_Cars_On_New_Leader :: Get cars from the new Leader, sh... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Follower_Cars_On_New_Followers :: The same check on other exis... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Rejoin_Old_Car_Leader :: Rejoin the isolated member without deleti... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Folower_Cars_On_Old_Leader :: GET cars from the restarted memb... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Follower_Cars_On_New_Leader :: Delete cars on the last Leader. 
| FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ controller-clustering-ask.txt.Car Failover Crud Isolation :: Suite... | FAIL | Suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig 16 tests, 0 passed, 16 failed ============================================================================== controller-clustering-ask.txt.Car Outage Corners :: Cluster suite for testi... ============================================================================== Stop_Majority_Of_The_Followers :: Stop half plus one car Follower ... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Attempt_To_Add_Cars_To_Leader :: Adding cars should fail, as major... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Start_Tipping_Follower :: Start one Follower member without persis... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Add_Cars_On_Tipping_Follower :: Add cars on the tipping Follower. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Cars_On_Existing_Members :: On each up member: GET cars, shoul... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Start_Other_Followers :: Start other followers without persisted d... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ See_Cars_On_New_Follower_Leader :: GET cars from a new follower to... | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ Delete_Cars_On_Leader :: Delete cars on Leader. | FAIL | Parent suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig ------------------------------------------------------------------------------ controller-clustering-ask.txt.Car Outage Corners :: Cluster suite ... 
| FAIL | Suite setup failed: HTTPError: 401 Client Error: Unauthorized for url: http://10.30.171.110:8181/rests/data/ietf-yang-library:modules-state?content=nonconfig 8 tests, 0 passed, 8 failed ============================================================================== controller-clustering-ask.txt | FAIL | 195 tests, 30 passed, 165 failed ============================================================================== Output: /w/workspace/controller-csit-3node-clustering-ask-all-titanium/output.xml Log: /w/workspace/controller-csit-3node-clustering-ask-all-titanium/log.html Report: /w/workspace/controller-csit-3node-clustering-ask-all-titanium/report.html + true + echo 'Examining the files in data/log and checking filesize' Examining the files in data/log and checking filesize + ssh 10.30.170.77 'ls -altr /tmp/karaf-0.22.2-SNAPSHOT/data/log/' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. total 9448 drwxrwxr-x 2 jenkins jenkins 4096 Nov 30 23:05 . -rw-rw-r-- 1 jenkins jenkins 1720 Nov 30 23:05 karaf_console.log drwxrwxr-x 9 jenkins jenkins 4096 Dec 1 01:01 .. -rw-rw-r-- 1 jenkins jenkins 9657351 Dec 1 01:06 karaf.log + ssh 10.30.170.77 'du -hs /tmp/karaf-0.22.2-SNAPSHOT/data/log/*' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. 9.3M /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log 4.0K /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log + ssh 10.30.171.188 'ls -altr /tmp/karaf-0.22.2-SNAPSHOT/data/log/' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. total 8880 drwxrwxr-x 2 jenkins jenkins 4096 Nov 30 23:05 . -rw-rw-r-- 1 jenkins jenkins 1720 Nov 30 23:05 karaf_console.log drwxrwxr-x 9 jenkins jenkins 4096 Dec 1 01:01 .. -rw-rw-r-- 1 jenkins jenkins 9072832 Dec 1 01:06 karaf.log + ssh 10.30.171.188 'du -hs /tmp/karaf-0.22.2-SNAPSHOT/data/log/*' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. 8.7M /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log 4.0K /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log + ssh 10.30.171.110 'ls -altr /tmp/karaf-0.22.2-SNAPSHOT/data/log/' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. total 19540 drwxrwxr-x 2 jenkins jenkins 4096 Nov 30 23:05 . -rw-rw-r-- 1 jenkins jenkins 144833 Nov 30 23:16 karaf_console.log drwxrwxr-x 9 jenkins jenkins 4096 Dec 1 01:00 .. -rw-rw-r-- 1 jenkins jenkins 19848528 Dec 1 01:07 karaf.log + ssh 10.30.171.110 'du -hs /tmp/karaf-0.22.2-SNAPSHOT/data/log/*' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. 19M /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log 144K /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log + set +e ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Let'\''s take the karaf thread dump again' Let's take the karaf thread dump again + ssh 10.30.170.77 'sudo ps aux' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/controller-csit-3node-clustering-ask-all-titanium/ps_after.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=31602 + echo 'karaf main: org.apache.karaf.main.Main, pid:31602' karaf main: org.apache.karaf.main.Main, pid:31602 + ssh 10.30.170.77 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 31602' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + echo 'killing karaf process...' killing karaf process... 
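The per-node teardown above boils down to three steps: locate the karaf JVM, dump its threads with jstack, then kill the process. A minimal simplified sketch of an equivalent sequence, assuming the same karaf install path and jstack location seen in the commands above; NODE_IP is a hypothetical placeholder for the controller address:
# Simplified equivalent of the teardown run against each ODL node (sketch, not the job's exact script).
# NODE_IP and the Java 21 jstack path are assumptions taken from the surrounding log.
pid=$(ssh "${NODE_IP}" 'ps aux' | grep org.apache.karaf.main.Main | grep -v grep | tr -s ' ' | cut -d ' ' -f2)
echo "karaf main: org.apache.karaf.main.Main, pid:${pid}"
ssh "${NODE_IP}" "/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l ${pid}" > karaf_threads_after.log
ssh "${NODE_IP}" "kill -9 ${pid}"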
+ ssh 10.30.170.77 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh' Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Let'\''s take the karaf thread dump again' Let's take the karaf thread dump again + ssh 10.30.171.188 'sudo ps aux' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. ++ tr -s ' ' ++ grep org.apache.karaf.main.Main /w/workspace/controller-csit-3node-clustering-ask-all-titanium/ps_after.log ++ grep -v grep ++ cut -f2 '-d ' + pid=60046 + echo 'karaf main: org.apache.karaf.main.Main, pid:60046' karaf main: org.apache.karaf.main.Main, pid:60046 + ssh 10.30.171.188 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 60046' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + echo 'killing karaf process...' killing karaf process... + ssh 10.30.171.188 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh' Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Let'\''s take the karaf thread dump again' Let's take the karaf thread dump again + ssh 10.30.171.110 'sudo ps aux' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. ++ grep org.apache.karaf.main.Main /w/workspace/controller-csit-3node-clustering-ask-all-titanium/ps_after.log ++ grep -v grep ++ tr -s ' ' ++ cut -f2 '-d ' + pid=53269 + echo 'karaf main: org.apache.karaf.main.Main, pid:53269' karaf main: org.apache.karaf.main.Main, pid:53269 + ssh 10.30.171.110 '/usr/lib/jvm/java-21-openjdk-amd64/bin/jstack -l 53269' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + echo 'killing karaf process...' killing karaf process... + ssh 10.30.171.110 bash -c 'ps axf | grep karaf | grep -v grep | awk '\''{print "kill -9 " $1}'\'' | sh' Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + sleep 5 ++ seq 1 3 + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_1_IP + echo 'Compressing karaf.log 1' Compressing karaf.log 1 + ssh 10.30.170.77 gzip --best /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + echo 'Fetching compressed karaf.log 1' Fetching compressed karaf.log 1 + scp 10.30.170.77:/tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log.gz odl1_karaf.log.gz Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + ssh 10.30.170.77 rm -f /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log.gz Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + scp 10.30.170.77:/tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log odl1_karaf_console.log Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + ssh 10.30.170.77 rm -f /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. + echo 'Fetch GC logs' Fetch GC logs + mkdir -p gclogs-1 + scp '10.30.170.77:/tmp/karaf-0.22.2-SNAPSHOT/data/log/*.log' gclogs-1/ Warning: Permanently added '10.30.170.77' (ECDSA) to the list of known hosts. 
scp: /tmp/karaf-0.22.2-SNAPSHOT/data/log/*.log: No such file or directory + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_2_IP + echo 'Compressing karaf.log 2' Compressing karaf.log 2 + ssh 10.30.171.188 gzip --best /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + echo 'Fetching compressed karaf.log 2' Fetching compressed karaf.log 2 + scp 10.30.171.188:/tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log.gz odl2_karaf.log.gz Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + ssh 10.30.171.188 rm -f /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log.gz Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + scp 10.30.171.188:/tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log odl2_karaf_console.log Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + ssh 10.30.171.188 rm -f /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. + echo 'Fetch GC logs' Fetch GC logs + mkdir -p gclogs-2 + scp '10.30.171.188:/tmp/karaf-0.22.2-SNAPSHOT/data/log/*.log' gclogs-2/ Warning: Permanently added '10.30.171.188' (ECDSA) to the list of known hosts. scp: /tmp/karaf-0.22.2-SNAPSHOT/data/log/*.log: No such file or directory + for i in $(seq 1 "${NUM_ODL_SYSTEM}") + CONTROLLERIP=ODL_SYSTEM_3_IP + echo 'Compressing karaf.log 3' Compressing karaf.log 3 + ssh 10.30.171.110 gzip --best /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. gzip: /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log: file size changed while zipping + echo 'Fetching compressed karaf.log 3' Fetching compressed karaf.log 3 + scp 10.30.171.110:/tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log.gz odl3_karaf.log.gz Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + ssh 10.30.171.110 rm -f /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf.log.gz Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + scp 10.30.171.110:/tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log odl3_karaf_console.log Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + ssh 10.30.171.110 rm -f /tmp/karaf-0.22.2-SNAPSHOT/data/log/karaf_console.log Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. + echo 'Fetch GC logs' Fetch GC logs + mkdir -p gclogs-3 + scp '10.30.171.110:/tmp/karaf-0.22.2-SNAPSHOT/data/log/*.log' gclogs-3/ Warning: Permanently added '10.30.171.110' (ECDSA) to the list of known hosts. scp: /tmp/karaf-0.22.2-SNAPSHOT/data/log/*.log: No such file or directory + echo 'Examine copied files' Examine copied files + ls -lt total 71364 drwxrwxr-x. 2 jenkins jenkins 6 Dec 1 01:07 gclogs-3 -rw-rw-r--. 1 jenkins jenkins 144833 Dec 1 01:07 odl3_karaf_console.log -rw-rw-r--. 1 jenkins jenkins 811786 Dec 1 01:07 odl3_karaf.log.gz drwxrwxr-x. 2 jenkins jenkins 6 Dec 1 01:07 gclogs-2 -rw-rw-r--. 1 jenkins jenkins 1720 Dec 1 01:07 odl2_karaf_console.log -rw-rw-r--. 1 jenkins jenkins 592755 Dec 1 01:07 odl2_karaf.log.gz drwxrwxr-x. 2 jenkins jenkins 6 Dec 1 01:07 gclogs-1 -rw-rw-r--. 1 jenkins jenkins 1720 Dec 1 01:07 odl1_karaf_console.log -rw-rw-r--. 1 jenkins jenkins 708929 Dec 1 01:07 odl1_karaf.log.gz -rw-rw-r--. 1 jenkins jenkins 114020 Dec 1 01:07 karaf_3_53269_threads_after.log -rw-rw-r--. 1 jenkins jenkins 14051 Dec 1 01:07 ps_after.log -rw-rw-r--. 
1 jenkins jenkins 139269 Dec 1 01:07 karaf_2_60046_threads_after.log -rw-rw-r--. 1 jenkins jenkins 141204 Dec 1 01:07 karaf_1_31602_threads_after.log -rw-rw-r--. 1 jenkins jenkins 310412 Dec 1 01:07 report.html -rw-rw-r--. 1 jenkins jenkins 4058860 Dec 1 01:07 log.html -rw-rw-r--. 1 jenkins jenkins 65582534 Dec 1 01:06 output.xml -rw-rw-r--. 1 jenkins jenkins 4120 Nov 30 23:08 testplan.txt -rw-rw-r--. 1 jenkins jenkins 112862 Nov 30 23:08 karaf_3_2091_threads_before.log -rw-rw-r--. 1 jenkins jenkins 13924 Nov 30 23:08 ps_before.log -rw-rw-r--. 1 jenkins jenkins 114145 Nov 30 23:08 karaf_2_2082_threads_before.log -rw-rw-r--. 1 jenkins jenkins 113905 Nov 30 23:08 karaf_1_2080_threads_before.log -rw-rw-r--. 1 jenkins jenkins 3106 Nov 30 23:05 post-startup-script.sh -rw-rw-r--. 1 jenkins jenkins 252 Nov 30 23:05 startup-script.sh -rw-rw-r--. 1 jenkins jenkins 3473 Nov 30 23:05 configuration-script.sh -rw-rw-r--. 1 jenkins jenkins 695 Nov 30 23:05 custom_shard_config.txt -rw-rw-r--. 1 jenkins jenkins 135 Nov 30 23:05 scriptplan.txt -rw-rw-r--. 1 jenkins jenkins 337 Nov 30 23:05 detect_variables.env -rw-rw-r--. 1 jenkins jenkins 2619 Nov 30 23:05 pom.xml -rw-rw-r--. 1 jenkins jenkins 92 Nov 30 23:05 set_variables.env -rw-rw-r--. 1 jenkins jenkins 312 Nov 30 23:05 slave_addresses.txt -rw-rw-r--. 1 jenkins jenkins 570 Nov 30 23:04 requirements.txt -rw-rw-r--. 1 jenkins jenkins 26 Nov 30 23:04 env.properties -rw-rw-r--. 1 jenkins jenkins 333 Nov 30 23:02 stack-parameters.yaml drwxrwxr-x. 7 jenkins jenkins 4096 Nov 30 23:01 test drwxrwxr-x. 2 jenkins jenkins 6 Nov 30 23:01 test@tmp -rw-rw-r--. 1 jenkins jenkins 1410 Nov 30 17:54 maven-metadata.xml + true [controller-csit-3node-clustering-ask-all-titanium] $ /bin/sh /tmp/jenkins4515432512394079531.sh Cleaning up Robot installation... $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 5161 killed; [ssh-agent] Stopped. Recording plot data Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. Build step 'Publish Robot Framework test results' changed build result to UNSTABLE [PostBuildScript] - [INFO] Executing post build scripts. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins5813835439577340403.sh Archiving csit artifacts mv: cannot stat '*_1.png': No such file or directory mv: cannot stat '/tmp/odl1_*': No such file or directory mv: cannot stat '*_2.png': No such file or directory mv: cannot stat '/tmp/odl2_*': No such file or directory mv: cannot stat '*_3.png': No such file or directory mv: cannot stat '/tmp/odl3_*': No such file or directory % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 4008k 0 4008k 0 0 5467k 0 --:--:-- --:--:-- --:--:-- 5460k 100 6760k 0 6760k 0 0 5090k 0 --:--:-- 0:00:01 --:--:-- 5090k Archive: robot-plugin.zip inflating: ./archives/robot-plugin/log.html inflating: ./archives/robot-plugin/output.xml inflating: ./archives/robot-plugin/report.html mv: cannot stat '*.log.gz': No such file or directory mv: cannot stat '*.csv': No such file or directory mv: cannot stat '*.png': No such file or directory [PostBuildScript] - [INFO] Executing post build scripts. 
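The "mv: cannot stat" messages in the archiving step above are benign: the globs (*_1.png, /tmp/odl1_*, *.csv, ...) matched nothing because this run produced no screenshots or extra per-node artifacts. A minimal sketch of a glob-guarded move that keeps such runs quiet; the patterns are taken from the messages above and the archives/ target directory is an assumption:
# Move matching files only; with nullglob an unmatched pattern expands to nothing and the loop skips it.
shopt -s nullglob
for f in *_1.png /tmp/odl1_* *.log.gz *.csv *.png; do
    mv "$f" archives/
done
shopt -u nullglob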
[controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins4967956192953502721.sh [PostBuildScript] - [INFO] Executing post build scripts. [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content OS_CLOUD=vex OS_STACK_NAME=releng-controller-csit-3node-clustering-ask-all-titanium-63 [EnvInject] - Variables injected successfully. provisioning config files... copy managed file [clouds-yaml] to file:/home/jenkins/.config/openstack/clouds.yaml [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins277526183407361604.sh ---> openstack-stack-delete.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.16 requires urllib3<2.1.0, but you have urllib3 2.5.0 which is incompatible. lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools[openstack] kubernetes python-heatclient python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH INFO: Stack cost retrieval disabled, setting cost to 0 INFO: Deleting stack releng-controller-csit-3node-clustering-ask-all-titanium-63 Successfully deleted stack releng-controller-csit-3node-clustering-ask-all-titanium-63 [PostBuildScript] - [INFO] Executing post build scripts. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins11856693818970366237.sh ---> sysstat.sh [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins1448460289577939567.sh ---> package-listing.sh ++ tr '[:upper:]' '[:lower:]' ++ facter osfamily + OS_FAMILY=redhat + workspace=/w/workspace/controller-csit-3node-clustering-ask-all-titanium + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/controller-csit-3node-clustering-ask-all-titanium ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + rpm -qa + sort + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/controller-csit-3node-clustering-ask-all-titanium ']' + mkdir -p /w/workspace/controller-csit-3node-clustering-ask-all-titanium/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/controller-csit-3node-clustering-ask-all-titanium/archives/ [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins5968645027842455916.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... 
lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH INFO: Running in OpenStack, capturing instance metadata [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins15524210994473300651.sh provisioning config files... Could not find credentials [logs] for controller-csit-3node-clustering-ask-all-titanium #63 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/controller-csit-3node-clustering-ask-all-titanium@tmp/config8093903826644055773tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] Run condition [Regular expression match] enabling perform for step [Provide Configuration files] provisioning config files... copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins6835880117091614895.sh ---> create-netrc.sh WARN: Log server credential not found. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins4314352311343097282.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins6811878608309988869.sh ---> sudo-logs.sh Archiving 'sudo' log.. [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash /tmp/jenkins17538060324542627677.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH DEBUG: total: 0 INFO: Retrieving Stack Cost... INFO: Retrieving Pricing Info for: v3-standard-2 INFO: Archiving Costs [controller-csit-3node-clustering-ask-all-titanium] $ /bin/bash -l /tmp/jenkins16825143798150827718.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/controller-csit-3node-clustering-ask-all-titanium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-hM1z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) lf-activate-venv(): INFO: Attempting to install with network-safe options... 
lf-activate-venv(): INFO: Base packages installed successfully lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-hM1z/bin to PATH WARNING: Nexus logging server not set INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/controller-csit-3node-clustering-ask-all-titanium/63/ INFO: archiving logs to S3 ---> uname -a: Linux prd-centos8-robot-2c-8g-2670.novalocal 4.18.0-553.5.1.el8.x86_64 #1 SMP Tue May 21 05:46:01 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 2 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0,1 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 2 ---> df -h: Filesystem Size Used Avail Use% Mounted on devtmpfs 3.8G 0 3.8G 0% /dev tmpfs 3.8G 0 3.8G 0% /dev/shm tmpfs 3.8G 17M 3.8G 1% /run tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup /dev/vda1 40G 8.5G 32G 22% / tmpfs 770M 0 770M 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 7697 645 4706 19 2346 6753 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:db:8d:0b brd ff:ff:ff:ff:ff:ff altname enp0s3 altname ens3 inet 10.30.171.144/23 brd 10.30.171.255 scope global dynamic noprefixroute eth0 valid_lft 78732sec preferred_lft 78732sec inet6 fe80::f816:3eff:fedb:8d0b/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.18.0-553.5.1.el8.x86_64 (prd-centos8-robot-2c-8g-2670.novalocal) 12/01/2025 _x86_64_ (2 CPU) 12:00:01 AM tps rtps wtps bread/s bwrtn/s 12:01:01 AM 1.15 0.00 1.15 0.00 374.68 12:02:01 AM 0.70 0.08 0.62 1.87 111.04 12:03:01 AM 0.23 0.00 0.23 0.00 17.90 12:04:01 AM 0.47 0.00 0.47 0.00 16.64 12:05:01 AM 0.30 0.00 0.30 0.00 15.08 12:06:01 AM 0.30 0.00 0.30 0.00 29.64 12:07:01 AM 0.38 0.07 0.32 7.60 19.33 12:08:01 AM 0.45 0.00 0.45 0.00 67.21 12:09:01 AM 0.30 0.00 0.30 0.00 13.25 12:10:01 AM 0.13 0.00 0.13 0.00 19.60 12:11:01 AM 0.17 0.00 0.17 0.00 17.64 12:12:01 AM 0.40 0.00 0.40 0.00 16.36 12:13:01 AM 0.25 0.00 0.25 0.00 21.73 12:14:01 AM 0.22 0.00 0.22 0.00 50.42 12:15:01 AM 0.12 0.00 0.12 0.00 8.40 12:16:01 AM 0.12 0.00 0.12 0.00 16.20 12:17:01 AM 0.43 0.00 0.43 0.00 20.13 12:18:01 AM 0.55 0.00 0.55 0.00 15.71 12:19:01 AM 0.20 0.00 0.20 0.00 17.58 12:20:01 AM 0.30 0.00 0.30 0.00 22.81 
12:21:01 AM 0.33 0.00 0.33 0.00 12.76 12:22:01 AM 0.32 0.00 0.32 0.00 7.87 12:23:01 AM 0.30 0.00 0.30 0.00 6.20 12:24:01 AM 0.18 0.00 0.18 0.00 6.25 12:25:02 AM 0.13 0.00 0.13 0.00 3.37 12:26:01 AM 0.14 0.00 0.14 0.00 5.42 12:27:01 AM 0.38 0.00 0.38 0.00 20.41 12:28:01 AM 0.35 0.00 0.35 0.00 24.28 12:29:01 AM 0.42 0.00 0.42 0.00 26.51 12:30:01 AM 0.30 0.00 0.30 0.00 16.63 12:31:01 AM 0.20 0.00 0.20 0.00 13.00 12:32:01 AM 0.43 0.00 0.43 0.00 28.36 12:33:01 AM 0.12 0.00 0.12 0.00 12.01 12:34:01 AM 0.18 0.00 0.18 0.00 22.55 12:35:01 AM 1.20 0.32 0.88 6.93 58.03 12:36:01 AM 0.20 0.00 0.20 0.00 17.23 12:37:01 AM 0.35 0.00 0.35 0.00 21.58 12:38:01 AM 0.10 0.00 0.10 0.00 8.53 12:39:01 AM 0.17 0.00 0.17 0.00 28.18 12:40:01 AM 0.20 0.00 0.20 0.00 14.28 12:41:01 AM 0.25 0.00 0.25 0.00 13.38 12:42:01 AM 0.48 0.00 0.48 0.00 29.05 12:43:01 AM 0.12 0.00 0.12 0.00 11.33 12:44:01 AM 0.17 0.00 0.17 0.00 22.68 12:45:01 AM 0.15 0.00 0.15 0.00 12.28 12:46:01 AM 0.18 0.00 0.18 0.00 45.08 12:47:01 AM 0.30 0.00 0.30 0.00 12.90 12:48:01 AM 0.18 0.00 0.18 0.00 16.58 12:49:01 AM 0.15 0.00 0.15 0.00 16.65 12:50:01 AM 0.32 0.00 0.32 0.00 15.43 12:51:01 AM 0.17 0.00 0.17 0.00 14.43 12:52:01 AM 0.38 0.00 0.38 0.00 28.75 12:53:01 AM 0.13 0.00 0.13 0.00 3.07 12:54:01 AM 0.13 0.00 0.13 0.00 4.93 12:55:01 AM 0.37 0.00 0.37 0.00 7.87 12:56:01 AM 0.18 0.00 0.18 0.00 6.28 12:57:01 AM 0.30 0.00 0.30 0.00 7.03 12:58:01 AM 0.17 0.00 0.17 0.00 21.06 12:59:01 AM 0.13 0.00 0.13 0.00 14.00 01:00:01 AM 0.13 0.00 0.13 0.00 8.56 01:01:01 AM 0.17 0.00 0.17 0.00 27.39 01:02:01 AM 0.57 0.00 0.57 0.00 13.69 01:03:01 AM 0.30 0.00 0.30 0.00 24.16 01:04:01 AM 0.13 0.00 0.13 0.00 9.06 01:05:01 AM 0.13 0.00 0.13 0.00 22.83 01:06:01 AM 0.20 0.00 0.20 0.00 18.14 01:07:01 AM 0.38 0.00 0.38 0.00 127.28 01:08:01 AM 20.33 0.32 20.01 39.06 3476.45 Average: 0.59 0.01 0.57 0.82 77.59 12:00:01 AM kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 12:01:01 AM 4802584 6964472 3079840 39.07 2688 2328756 734456 8.22 270020 2421868 1412 12:02:01 AM 4798760 6962464 3083664 39.12 2688 2330564 753764 8.44 270188 2425676 116 12:03:01 AM 4798684 6962768 3083740 39.12 2688 2330944 753764 8.44 270188 2425868 36 12:04:01 AM 4798348 6962816 3084076 39.13 2688 2331328 753764 8.44 270188 2426280 56 12:05:01 AM 4798212 6963064 3084212 39.13 2688 2331712 753764 8.44 270188 2426344 84 12:06:01 AM 4797568 6963108 3084856 39.14 2688 2332400 736768 8.25 270188 2427112 16 12:07:01 AM 4795192 6962368 3087232 39.17 2688 2334152 748396 8.38 270216 2429212 1100 12:08:01 AM 4794576 6962620 3087848 39.17 2688 2334896 748396 8.38 270304 2429940 48 12:09:01 AM 4794312 6962832 3088112 39.18 2688 2335380 748396 8.38 270304 2430300 252 12:10:01 AM 4793688 6962612 3088736 39.19 2688 2335776 727732 8.15 270304 2430920 88 12:11:01 AM 4793392 6962804 3089032 39.19 2688 2336264 727732 8.15 270304 2431300 88 12:12:01 AM 4792872 6962772 3089552 39.20 2688 2336752 727732 8.15 270304 2431760 256 12:13:01 AM 4792584 6962964 3089840 39.20 2688 2337232 727732 8.15 270304 2432180 168 12:14:01 AM 4791096 6962788 3091328 39.22 2688 2338552 727732 8.15 270304 2433380 36 12:15:01 AM 4790896 6962968 3091528 39.22 2688 2338932 727732 8.15 270304 2433748 188 12:16:01 AM 4790500 6962960 3091924 39.23 2688 2339320 727732 8.15 270304 2434136 116 12:17:01 AM 4789864 6962708 3092560 39.23 2688 2339704 727732 8.15 270304 2434588 4 12:18:01 AM 4789668 6962892 3092756 39.24 2688 2340084 727732 8.15 270304 2435016 96 12:19:01 AM 4789184 6962804 3093240 39.24 2688 
2340472 727732 8.15 270304 2435284 24 12:20:01 AM 4788140 6962520 3094284 39.26 2688 2341248 749216 8.39 270304 2436280 208 12:21:01 AM 4788480 6962956 3093944 39.25 2688 2341336 726880 8.14 270308 2436292 64 12:22:01 AM 4788008 6962572 3094416 39.26 2688 2341416 726880 8.14 270304 2436408 24 12:23:01 AM 4788248 6962904 3094176 39.25 2688 2341508 726880 8.14 270304 2436308 56 12:24:01 AM 4788320 6963072 3094104 39.25 2688 2341604 726880 8.14 270304 2436404 20 12:25:02 AM 4787848 6962688 3094576 39.26 2688 2341692 726880 8.14 270304 2436696 64 12:26:01 AM 4787672 6962604 3094752 39.26 2688 2341784 726880 8.14 270304 2436856 24 12:27:01 AM 4787284 6962616 3095140 39.27 2688 2342164 726880 8.14 270304 2437308 4 12:28:01 AM 4786060 6962316 3096364 39.28 2688 2343028 743060 8.32 270304 2438404 324 12:29:01 AM 4785984 6962588 3096440 39.28 2688 2343376 743060 8.32 270304 2438456 4 12:30:01 AM 4785136 6962220 3097288 39.29 2688 2343860 738952 8.27 270304 2439164 84 12:31:01 AM 4784644 6962236 3097780 39.30 2688 2344348 738952 8.27 270304 2439684 252 12:32:01 AM 4784392 6962392 3098032 39.30 2688 2344744 738952 8.27 270304 2440032 4 12:33:01 AM 4783860 6962352 3098564 39.31 2688 2345236 738952 8.27 270304 2440400 168 12:34:01 AM 4783544 6962512 3098880 39.31 2688 2345712 738952 8.27 270304 2440924 4 12:35:01 AM 4781552 6961972 3100872 39.34 2688 2347172 738952 8.27 270428 2442652 160 12:36:01 AM 4781256 6962060 3101168 39.34 2688 2347556 738952 8.27 270428 2442864 84 12:37:01 AM 4778652 6959848 3103772 39.38 2688 2347940 738952 8.27 270428 2445404 4 12:38:01 AM 4778324 6959892 3104100 39.38 2688 2348320 738952 8.27 270428 2445784 168 12:39:01 AM 4777596 6959928 3104828 39.39 2688 2349080 738952 8.27 270428 2446548 124 12:40:01 AM 4777368 6959980 3105056 39.39 2688 2349352 736956 8.25 270428 2446768 4 12:41:01 AM 4777180 6960276 3105244 39.39 2688 2349832 736956 8.25 270428 2447028 168 12:42:01 AM 4776556 6960136 3105868 39.40 2688 2350316 736956 8.25 270428 2447452 4 12:43:01 AM 4776252 6960316 3106172 39.41 2688 2350800 736956 8.25 270428 2447936 176 12:44:01 AM 4775640 6960180 3106784 39.41 2688 2351280 736956 8.25 270428 2448340 4 12:45:01 AM 4774656 6959804 3107768 39.43 2688 2351888 736956 8.25 270428 2449272 300 12:46:01 AM 4773796 6959984 3108628 39.44 2688 2352928 754304 8.45 270428 2450296 36 12:47:01 AM 4773616 6960212 3108808 39.44 2688 2353328 754304 8.45 270428 2450572 196 12:48:01 AM 4773080 6960048 3109344 39.45 2688 2353708 754304 8.45 270428 2450992 112 12:49:01 AM 4772752 6960104 3109672 39.45 2688 2354092 754304 8.45 270428 2451260 40 12:50:01 AM 4772160 6959908 3110264 39.46 2688 2354476 745248 8.34 270428 2451884 100 12:51:01 AM 4771884 6960008 3110540 39.46 2688 2354856 745248 8.34 270428 2452024 84 12:52:01 AM 4771280 6960028 3111144 39.47 2688 2355480 745248 8.34 270428 2452700 24 12:53:01 AM 4771316 6960156 3111108 39.47 2688 2355572 745248 8.34 270428 2452724 52 12:54:01 AM 4771296 6960228 3111128 39.47 2688 2355664 745248 8.34 270428 2452816 24 12:55:01 AM 4771312 6960336 3111112 39.47 2688 2355748 745248 8.34 270456 2452880 40 12:56:01 AM 4771088 6960204 3111336 39.47 2688 2355840 745248 8.34 270456 2453184 4 12:57:01 AM 4770764 6959972 3111660 39.48 2688 2355932 728792 8.16 270456 2453348 40 12:58:01 AM 4770320 6960124 3112104 39.48 2688 2356536 728792 8.16 270456 2453688 48 12:59:01 AM 4769976 6960172 3112448 39.49 2688 2356920 728792 8.16 270460 2454160 40 01:00:01 AM 4769048 6959624 3113376 39.50 2688 2357300 745604 8.35 270456 2454700 196 01:01:01 AM 4768064 
6959268 3114360 39.51 2688 2357936 745604 8.35 270456 2455896 48 01:02:01 AM 4768548 6960044 3113876 39.50 2688 2358224 745604 8.35 270472 2455624 208 01:03:01 AM 4768348 6960260 3114076 39.51 2688 2358632 745604 8.35 270472 2455924 16 01:04:01 AM 4767760 6960148 3114664 39.51 2688 2359108 745604 8.35 270472 2456244 248 01:05:01 AM 4767084 6959956 3115340 39.52 2688 2359592 745604 8.35 270472 2457000 88 01:06:01 AM 4766904 6960252 3115520 39.52 2688 2360076 745604 8.35 270472 2457224 84 01:07:01 AM 4556848 6753812 3325576 42.19 2688 2363652 1094944 12.26 270472 2665372 4 01:08:01 AM 4877928 6969648 3004496 38.12 2688 2264576 682716 7.64 531036 2111540 24480 Average: 4780291 6958562 3102133 39.36 2688 2345147 743541 8.33 274191 2440539 484 12:00:01 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 12:01:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:01:01 AM eth0 18.47 19.17 2.56 2.26 0.00 0.00 0.00 0.00 12:02:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:02:01 AM eth0 31.45 29.06 6.05 3.81 0.00 0.00 0.00 0.00 12:03:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:03:01 AM eth0 12.25 9.47 3.39 2.19 0.00 0.00 0.00 0.00 12:04:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:04:01 AM eth0 12.43 9.31 3.11 1.99 0.00 0.00 0.00 0.00 12:05:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:05:01 AM eth0 11.56 9.07 2.75 1.79 0.00 0.00 0.00 0.00 12:06:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:06:01 AM eth0 19.93 16.03 5.09 3.21 0.00 0.00 0.00 0.00 12:07:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:07:01 AM eth0 72.40 71.17 14.72 6.41 0.00 0.00 0.00 0.00 12:08:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:08:01 AM eth0 16.48 13.61 4.17 2.54 0.00 0.00 0.00 0.00 12:09:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:09:01 AM eth0 1.55 1.23 0.50 0.23 0.00 0.00 0.00 0.00 12:10:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:10:01 AM eth0 1.50 0.90 0.38 0.18 0.00 0.00 0.00 0.00 12:11:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:11:01 AM eth0 1.45 1.00 0.52 0.22 0.00 0.00 0.00 0.00 12:12:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:12:01 AM eth0 1.70 1.13 0.57 0.28 0.00 0.00 0.00 0.00 12:13:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:13:01 AM eth0 1.12 1.05 0.42 0.22 0.00 0.00 0.00 0.00 12:14:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:14:01 AM eth0 69.57 67.02 11.39 5.98 0.00 0.00 0.00 0.00 12:15:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:15:01 AM eth0 10.98 9.10 2.76 1.79 0.00 0.00 0.00 0.00 12:16:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:16:01 AM eth0 11.00 9.12 2.73 1.79 0.00 0.00 0.00 0.00 12:17:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:17:01 AM eth0 10.68 8.91 2.85 1.84 0.00 0.00 0.00 0.00 12:18:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:18:01 AM eth0 11.36 9.02 2.87 1.79 0.00 0.00 0.00 0.00 12:19:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:19:01 AM eth0 10.40 8.98 2.67 1.78 0.00 0.00 0.00 0.00 12:20:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:20:01 AM eth0 21.94 18.41 5.59 3.51 0.00 0.00 0.00 0.00 12:21:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:21:01 AM eth0 0.37 0.27 0.08 0.04 0.00 0.00 0.00 0.00 12:22:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:22:01 AM eth0 0.83 0.33 0.24 0.11 0.00 0.00 0.00 0.00 12:23:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:23:01 AM eth0 0.27 0.23 0.09 0.05 0.00 0.00 0.00 0.00 12:24:01 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:24:01 AM eth0 0.60 0.17 0.10 0.04 0.00 0.00 0.00 0.00 12:25:02 AM 
lo        0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
[per-minute sar -n DEV samples for lo and eth0 (rxpck/s, txpck/s, rxkB/s, txkB/s, rxcmp/s, txcmp/s, rxmcst/s, %ifutil), 12:25:02 AM - 01:08:01 AM; lo carried no traffic, while eth0 activity peaked around 12:35 AM, 12:46 AM, and 01:07-01:08 AM]
Average:         lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:       eth0     16.87     14.05      7.32      7.13      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.18.0-553.5.1.el8.x86_64 (prd-centos8-robot-2c-8g-2670.novalocal)  12/01/2025  _x86_64_  (2 CPU)

12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
[per-minute per-CPU samples, 12:01:01 AM - 01:08:01 AM; the host stayed largely idle, with all-CPU %user rising to 15.86 at 01:07:01 AM and 25.27 at 01:08:01 AM]
Average:        all      3.73      0.00      0.34      0.01      0.07     95.85
Average:          0      4.56      0.00      0.36      0.00      0.07     95.00
Average:          1      2.91      0.00      0.31      0.01      0.07     96.70
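The Average rows are the arithmetic mean of the per-interval samples above. As a minimal illustrative sketch (not part of this job's scripts; the file name cpu.sar is an assumption for a saved copy of the sar -P ALL report), the all-CPU mean %idle could be recomputed like this:

    # Hypothetical: recompute the all-CPU mean %idle from a saved sar -P ALL report.
    # In the report layout above, field 3 is the CPU column and field 9 is %idle;
    # header and "Average:" lines never have "all" in field 3, so they are skipped.
    awk '$3 == "all" { sum += $9; n++ } END { if (n) printf "mean %%idle = %.2f\n", sum / n }' cpu.sar

For equal-length sampling intervals such as the one-minute intervals here, this plain mean should come out close to the 95.85 shown on the Average line.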