16:47:14 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/113906 16:47:14 Running as SYSTEM 16:47:14 [EnvInject] - Loading node environment variables. 16:47:14 Building remotely on prd-ubuntu2004-docker-4c-16g-1243 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master 16:47:14 [ssh-agent] Looking for ssh-agent implementation... 16:47:15 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 16:47:15 $ ssh-agent 16:47:15 SSH_AUTH_SOCK=/tmp/ssh-BquNArEtaDB9/agent.13152 16:47:15 SSH_AGENT_PID=13155 16:47:15 [ssh-agent] Started. 16:47:15 Running ssh-add (command line suppressed) 16:47:15 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_4355511651618854613.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_4355511651618854613.key) 16:47:15 [ssh-agent] Using credentials jenkins (jenkins-ssh) 16:47:15 The recommended git tool is: NONE 16:47:17 using credential jenkins-ssh 16:47:17 Wiping out workspace first. 16:47:17 Cloning the remote Git repository 16:47:17 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce 16:47:17 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10 16:47:17 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 16:47:17 > git --version # timeout=10 16:47:17 > git --version # 'git version 2.25.1' 16:47:17 using GIT_SSH to set credentials jenkins-ssh 16:47:17 Verifying host key using known hosts file 16:47:17 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 16:47:17 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 16:47:21 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 16:47:21 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 16:47:21 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 16:47:21 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 16:47:21 using GIT_SSH to set credentials jenkins-ssh 16:47:21 Verifying host key using known hosts file 16:47:21 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
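This run verifies Gerrit change 113906 (patchset 16, fetched as refs/changes/06/113906/16 just below). To reproduce the same checkout locally, a sketch assuming the project's public anonymous-HTTP Gerrit URL rather than the internal git:// mirror used by the job:

    git clone https://git.opendaylight.org/gerrit/transportpce
    cd transportpce
    # fetch the patchset ref the job fetches below, then check it out detached
    git fetch origin refs/changes/06/113906/16
    git checkout FETCH_HEAD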
16:47:21 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/06/113906/16 # timeout=10 16:47:21 > git rev-parse 2d4b1fd2f8ee9c658bbbbc0863daed27302c445f^{commit} # timeout=10 16:47:21 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script 16:47:21 Checking out Revision 2d4b1fd2f8ee9c658bbbbc0863daed27302c445f (refs/changes/06/113906/16) 16:47:21 > git config core.sparsecheckout # timeout=10 16:47:21 > git checkout -f 2d4b1fd2f8ee9c658bbbbc0863daed27302c445f # timeout=10 16:47:22 Commit message: "Add Tapi Abstracted Node to OR Topo" 16:47:22 > git rev-parse FETCH_HEAD^{commit} # timeout=10 16:47:22 > git rev-list --no-walk 9889c236444fdb4bb40abe9dc03a01d4dc69b802 # timeout=10 16:47:22 > git remote # timeout=10 16:47:22 > git submodule init # timeout=10 16:47:22 > git submodule sync # timeout=10 16:47:22 > git config --get remote.origin.url # timeout=10 16:47:22 > git submodule init # timeout=10 16:47:22 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 16:47:22 ERROR: No submodules found. 16:47:25 provisioning config files... 16:47:25 copy managed file [npmrc] to file:/home/jenkins/.npmrc 16:47:25 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 16:47:25 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins8437611792763878125.sh 16:47:25 ---> python-tools-install.sh 16:47:25 Setup pyenv: 16:47:25 * system (set by /opt/pyenv/version) 16:47:25 * 3.8.13 (set by /opt/pyenv/version) 16:47:25 * 3.9.13 (set by /opt/pyenv/version) 16:47:25 * 3.10.13 (set by /opt/pyenv/version) 16:47:25 * 3.11.7 (set by /opt/pyenv/version) 16:47:30 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-E7Mz 16:47:30 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 16:47:34 lf-activate-venv(): INFO: Installing: lftools 16:48:18 lf-activate-venv(): INFO: Adding /tmp/venv-E7Mz/bin to PATH 16:48:18 Generating Requirements File 16:48:41 Python 3.11.7 16:48:41 pip 24.2 from /tmp/venv-E7Mz/lib/python3.11/site-packages/pip (python 3.11) 16:48:42 appdirs==1.4.4 16:48:42 argcomplete==3.5.1 16:48:42 aspy.yaml==1.3.0 16:48:42 attrs==24.2.0 16:48:42 autopage==0.5.2 16:48:42 beautifulsoup4==4.12.3 16:48:42 boto3==1.35.48 16:48:42 botocore==1.35.48 16:48:42 bs4==0.0.2 16:48:42 cachetools==5.5.0 16:48:42 certifi==2024.8.30 16:48:42 cffi==1.17.1 16:48:42 cfgv==3.4.0 16:48:42 chardet==5.2.0 16:48:42 charset-normalizer==3.4.0 16:48:42 click==8.1.7 16:48:42 cliff==4.7.0 16:48:42 cmd2==2.5.0 16:48:42 cryptography==3.3.2 16:48:42 debtcollector==3.0.0 16:48:42 decorator==5.1.1 16:48:42 defusedxml==0.7.1 16:48:42 Deprecated==1.2.14 16:48:42 distlib==0.3.9 16:48:42 dnspython==2.7.0 16:48:42 docker==4.2.2 16:48:42 dogpile.cache==1.3.3 16:48:42 durationpy==0.9 16:48:42 email_validator==2.2.0 16:48:42 filelock==3.16.1 16:48:42 future==1.0.0 16:48:42 gitdb==4.0.11 16:48:42 GitPython==3.1.43 16:48:42 google-auth==2.35.0 16:48:42 httplib2==0.22.0 16:48:42 identify==2.6.1 16:48:42 idna==3.10 16:48:42 importlib-resources==1.5.0 16:48:42 iso8601==2.1.0 16:48:42 Jinja2==3.1.4 16:48:42 jmespath==1.0.1 16:48:42 jsonpatch==1.33 16:48:42 jsonpointer==3.0.0 16:48:42 jsonschema==4.23.0 16:48:42 jsonschema-specifications==2024.10.1 16:48:42 keystoneauth1==5.8.0 16:48:42 kubernetes==31.0.0 16:48:42 lftools==0.37.10 16:48:42 lxml==5.3.0 16:48:42 MarkupSafe==3.0.2 16:48:42 msgpack==1.1.0 16:48:42 multi_key_dict==2.0.3 
16:48:42 munch==4.0.0 16:48:42 netaddr==1.3.0 16:48:42 netifaces==0.11.0 16:48:42 niet==1.4.2 16:48:42 nodeenv==1.9.1 16:48:42 oauth2client==4.1.3 16:48:42 oauthlib==3.2.2 16:48:42 openstacksdk==4.1.0 16:48:42 os-client-config==2.1.0 16:48:42 os-service-types==1.7.0 16:48:42 osc-lib==3.1.0 16:48:42 oslo.config==9.6.0 16:48:42 oslo.context==5.6.0 16:48:42 oslo.i18n==6.4.0 16:48:42 oslo.log==6.1.2 16:48:42 oslo.serialization==5.5.0 16:48:42 oslo.utils==7.3.0 16:48:42 packaging==24.1 16:48:42 pbr==6.1.0 16:48:42 platformdirs==4.3.6 16:48:42 prettytable==3.11.0 16:48:42 pyasn1==0.6.1 16:48:42 pyasn1_modules==0.4.1 16:48:42 pycparser==2.22 16:48:42 pygerrit2==2.0.15 16:48:42 PyGithub==2.4.0 16:48:42 PyJWT==2.9.0 16:48:42 PyNaCl==1.5.0 16:48:42 pyparsing==2.4.7 16:48:42 pyperclip==1.9.0 16:48:42 pyrsistent==0.20.0 16:48:42 python-cinderclient==9.6.0 16:48:42 python-dateutil==2.9.0.post0 16:48:42 python-heatclient==4.0.0 16:48:42 python-jenkins==1.8.2 16:48:42 python-keystoneclient==5.5.0 16:48:42 python-magnumclient==4.7.0 16:48:42 python-openstackclient==7.2.0 16:48:42 python-swiftclient==4.6.0 16:48:42 PyYAML==6.0.2 16:48:42 referencing==0.35.1 16:48:42 requests==2.32.3 16:48:42 requests-oauthlib==2.0.0 16:48:42 requestsexceptions==1.4.0 16:48:42 rfc3986==2.0.0 16:48:42 rpds-py==0.20.0 16:48:42 rsa==4.9 16:48:42 ruamel.yaml==0.18.6 16:48:42 ruamel.yaml.clib==0.2.12 16:48:42 s3transfer==0.10.3 16:48:42 simplejson==3.19.3 16:48:42 six==1.16.0 16:48:42 smmap==5.0.1 16:48:42 soupsieve==2.6 16:48:42 stevedore==5.3.0 16:48:42 tabulate==0.9.0 16:48:42 toml==0.10.2 16:48:42 tomlkit==0.13.2 16:48:42 tqdm==4.66.5 16:48:42 typing_extensions==4.12.2 16:48:42 tzdata==2024.2 16:48:42 urllib3==1.26.20 16:48:42 virtualenv==20.27.0 16:48:42 wcwidth==0.2.13 16:48:42 websocket-client==1.8.0 16:48:42 wrapt==1.16.0 16:48:42 xdg==6.0.0 16:48:42 xmltodict==0.14.2 16:48:42 yq==3.4.3 16:48:42 [EnvInject] - Injecting environment variables from a build step. 16:48:42 [EnvInject] - Injecting as environment variables the properties content 16:48:42 PYTHON=python3 16:48:42 16:48:42 [EnvInject] - Variables injected successfully. 
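The "Generating Requirements File" step above simply records the interpreter version, pip version and a full pip freeze of the freshly built lftools venv so the exact package set is kept with the build. A minimal equivalent, assuming the venv path reported above and a hypothetical output file name:

    . /tmp/venv-E7Mz/bin/activate
    python --version
    pip --version
    pip freeze > requirements-snapshot.txt   # hypothetical file name; the console above only shows the list being printed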
16:48:42 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins17650042742224198657.sh 16:48:42 ---> tox-install.sh 16:48:42 + source /home/jenkins/lf-env.sh 16:48:42 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 16:48:42 ++ mktemp -d /tmp/venv-XXXX 16:48:42 + lf_venv=/tmp/venv-dXo8 16:48:42 + local venv_file=/tmp/.os_lf_venv 16:48:42 + local python=python3 16:48:42 + local options 16:48:42 + local set_path=true 16:48:42 + local install_args= 16:48:42 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 16:48:42 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 16:48:42 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 16:48:42 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 16:48:42 + true 16:48:42 + case $1 in 16:48:42 + venv_file=/tmp/.toxenv 16:48:42 + shift 2 16:48:42 + true 16:48:42 + case $1 in 16:48:42 + shift 16:48:42 + break 16:48:42 + case $python in 16:48:42 + local pkg_list= 16:48:42 + [[ -d /opt/pyenv ]] 16:48:42 + echo 'Setup pyenv:' 16:48:42 Setup pyenv: 16:48:42 + export PYENV_ROOT=/opt/pyenv 16:48:42 + PYENV_ROOT=/opt/pyenv 16:48:42 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:42 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:42 + pyenv versions 16:48:42 system 16:48:42 3.8.13 16:48:42 3.9.13 16:48:42 3.10.13 16:48:42 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 16:48:42 + command -v pyenv 16:48:42 ++ pyenv init - --no-rehash 16:48:42 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 16:48:42 for i in ${!paths[@]}; do 16:48:42 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 16:48:42 fi; done; 16:48:42 echo "${paths[*]}"'\'')" 16:48:42 export PATH="/opt/pyenv/shims:${PATH}" 16:48:42 export PYENV_SHELL=bash 16:48:42 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 16:48:42 pyenv() { 16:48:42 local command 16:48:42 command="${1:-}" 16:48:42 if [ "$#" -gt 0 ]; then 16:48:42 shift 16:48:42 fi 16:48:42 16:48:42 case "$command" in 16:48:42 rehash|shell) 16:48:42 eval "$(pyenv "sh-$command" "$@")" 16:48:42 ;; 16:48:42 *) 16:48:42 command pyenv "$command" "$@" 16:48:42 ;; 16:48:42 esac 16:48:42 }' 16:48:42 +++ bash --norc -ec 'IFS=:; paths=($PATH); 16:48:42 for i in ${!paths[@]}; do 16:48:42 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 16:48:42 fi; done; 16:48:42 echo "${paths[*]}"' 16:48:42 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:42 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:42 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:42 ++ export PYENV_SHELL=bash 16:48:42 ++ PYENV_SHELL=bash 16:48:42 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 16:48:42 +++ complete -F _pyenv pyenv 16:48:42 ++ 
lf-pyver python3 16:48:42 ++ local py_version_xy=python3 16:48:42 ++ local py_version_xyz= 16:48:42 ++ pyenv versions 16:48:42 ++ local command 16:48:42 ++ command=versions 16:48:42 ++ '[' 1 -gt 0 ']' 16:48:42 ++ shift 16:48:42 ++ case "$command" in 16:48:42 ++ command pyenv versions 16:48:42 ++ pyenv versions 16:48:42 ++ sed 's/^[ *]* //' 16:48:42 ++ awk '{ print $1 }' 16:48:42 ++ grep -E '^[0-9.]*[0-9]$' 16:48:42 ++ [[ ! -s /tmp/.pyenv_versions ]] 16:48:42 +++ grep '^3' /tmp/.pyenv_versions 16:48:42 +++ sort -V 16:48:42 +++ tail -n 1 16:48:42 ++ py_version_xyz=3.11.7 16:48:42 ++ [[ -z 3.11.7 ]] 16:48:42 ++ echo 3.11.7 16:48:42 ++ return 0 16:48:42 + pyenv local 3.11.7 16:48:42 + local command 16:48:42 + command=local 16:48:42 + '[' 2 -gt 0 ']' 16:48:42 + shift 16:48:42 + case "$command" in 16:48:42 + command pyenv local 3.11.7 16:48:42 + pyenv local 3.11.7 16:48:42 + for arg in "$@" 16:48:42 + case $arg in 16:48:42 + pkg_list+='tox ' 16:48:42 + for arg in "$@" 16:48:42 + case $arg in 16:48:42 + pkg_list+='virtualenv ' 16:48:42 + for arg in "$@" 16:48:42 + case $arg in 16:48:42 + pkg_list+='urllib3~=1.26.15 ' 16:48:42 + [[ -f /tmp/.toxenv ]] 16:48:42 + [[ ! -f /tmp/.toxenv ]] 16:48:42 + [[ -n '' ]] 16:48:42 + python3 -m venv /tmp/venv-dXo8 16:48:46 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-dXo8' 16:48:46 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-dXo8 16:48:46 + echo /tmp/venv-dXo8 16:48:46 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' 16:48:46 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv 16:48:46 + /tmp/venv-dXo8/bin/python3 -m pip install --upgrade --quiet pip virtualenv 16:48:49 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 16:48:49 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 16:48:49 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 16:48:49 + /tmp/venv-dXo8/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 16:48:52 + type python3 16:48:52 + true 16:48:52 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-dXo8/bin to PATH' 16:48:52 lf-activate-venv(): INFO: Adding /tmp/venv-dXo8/bin to PATH 16:48:52 + PATH=/tmp/venv-dXo8/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:52 + return 0 16:48:52 + python3 --version 16:48:52 Python 3.11.7 16:48:52 + python3 -m pip --version 16:48:52 pip 24.2 from /tmp/venv-dXo8/lib/python3.11/site-packages/pip (python 3.11) 16:48:52 + python3 -m pip freeze 16:48:52 cachetools==5.5.0 16:48:52 chardet==5.2.0 16:48:52 colorama==0.4.6 16:48:52 distlib==0.3.9 16:48:52 filelock==3.16.1 16:48:52 packaging==24.1 16:48:52 platformdirs==4.3.6 16:48:52 pluggy==1.5.0 16:48:52 pyproject-api==1.8.0 16:48:52 tox==4.23.2 16:48:52 urllib3==1.26.20 16:48:52 virtualenv==20.27.0 16:48:52 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins844206218064662707.sh 16:48:52 [EnvInject] - Injecting environment variables from a build step. 16:48:52 [EnvInject] - Injecting as environment variables the properties content 16:48:52 PARALLEL=True 16:48:52 16:48:52 [EnvInject] - Variables injected successfully. 
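tox-install.sh sources lf-env.sh and calls lf-activate-venv, which creates a scratch virtualenv, remembers its path in a marker file (/tmp/.toxenv here) and installs tox, virtualenv and urllib3~=1.26.15 into it. A simplified sketch of that caching pattern (not the real lf-env.sh implementation):

    venv_file=/tmp/.toxenv
    if [ -f "$venv_file" ]; then
        lf_venv=$(cat "$venv_file")              # later steps reuse the same venv
    else
        lf_venv=$(mktemp -d /tmp/venv-XXXX)      # e.g. /tmp/venv-dXo8 in this run
        python3 -m venv "$lf_venv"
        echo "$lf_venv" > "$venv_file"
    fi
    "$lf_venv/bin/python3" -m pip install --upgrade --quiet pip virtualenv
    "$lf_venv/bin/python3" -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv 'urllib3~=1.26.15'
    export PATH="$lf_venv/bin:$PATH"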
16:48:52 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins12708871466940222432.sh 16:48:52 ---> tox-run.sh 16:48:52 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:52 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 16:48:52 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 16:48:52 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 16:48:52 + cd /w/workspace/transportpce-tox-verify-transportpce-master/. 16:48:52 + source /home/jenkins/lf-env.sh 16:48:52 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 16:48:52 ++ mktemp -d /tmp/venv-XXXX 16:48:52 + lf_venv=/tmp/venv-Nj4q 16:48:52 + local venv_file=/tmp/.os_lf_venv 16:48:52 + local python=python3 16:48:52 + local options 16:48:52 + local set_path=true 16:48:52 + local install_args= 16:48:52 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 16:48:52 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 16:48:52 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 16:48:52 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 16:48:52 + true 16:48:52 + case $1 in 16:48:52 + venv_file=/tmp/.toxenv 16:48:52 + shift 2 16:48:52 + true 16:48:52 + case $1 in 16:48:52 + shift 16:48:52 + break 16:48:52 + case $python in 16:48:52 + local pkg_list= 16:48:52 + [[ -d /opt/pyenv ]] 16:48:52 + echo 'Setup pyenv:' 16:48:52 Setup pyenv: 16:48:52 + export PYENV_ROOT=/opt/pyenv 16:48:52 + PYENV_ROOT=/opt/pyenv 16:48:52 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:52 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:52 + pyenv versions 16:48:52 system 16:48:52 3.8.13 16:48:52 3.9.13 16:48:52 3.10.13 16:48:52 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 16:48:52 + command -v pyenv 16:48:52 ++ pyenv init - --no-rehash 16:48:52 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 16:48:52 for i in ${!paths[@]}; do 16:48:52 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 16:48:52 fi; done; 16:48:52 echo "${paths[*]}"'\'')" 16:48:52 export PATH="/opt/pyenv/shims:${PATH}" 16:48:52 export PYENV_SHELL=bash 16:48:52 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 16:48:52 pyenv() { 16:48:52 local command 16:48:52 command="${1:-}" 16:48:52 if [ "$#" -gt 0 ]; then 16:48:52 shift 16:48:52 fi 16:48:52 16:48:52 case "$command" in 16:48:52 rehash|shell) 16:48:52 eval "$(pyenv "sh-$command" "$@")" 16:48:52 ;; 16:48:52 *) 16:48:52 command pyenv "$command" "$@" 16:48:52 ;; 16:48:52 esac 16:48:52 }' 16:48:52 +++ bash --norc -ec 'IFS=:; paths=($PATH); 16:48:52 for i in ${!paths[@]}; do 16:48:52 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 16:48:52 fi; done; 16:48:52 echo "${paths[*]}"' 16:48:52 ++ 
PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:52 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:52 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:52 ++ export PYENV_SHELL=bash 16:48:52 ++ PYENV_SHELL=bash 16:48:52 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 16:48:52 +++ complete -F _pyenv pyenv 16:48:52 ++ lf-pyver python3 16:48:52 ++ local py_version_xy=python3 16:48:52 ++ local py_version_xyz= 16:48:52 ++ pyenv versions 16:48:52 ++ local command 16:48:52 ++ command=versions 16:48:52 ++ '[' 1 -gt 0 ']' 16:48:52 ++ shift 16:48:52 ++ case "$command" in 16:48:52 ++ command pyenv versions 16:48:52 ++ pyenv versions 16:48:52 ++ sed 's/^[ *]* //' 16:48:52 ++ awk '{ print $1 }' 16:48:52 ++ grep -E '^[0-9.]*[0-9]$' 16:48:52 ++ [[ ! -s /tmp/.pyenv_versions ]] 16:48:52 +++ grep '^3' /tmp/.pyenv_versions 16:48:52 +++ sort -V 16:48:52 +++ tail -n 1 16:48:52 ++ py_version_xyz=3.11.7 16:48:52 ++ [[ -z 3.11.7 ]] 16:48:52 ++ echo 3.11.7 16:48:52 ++ return 0 16:48:52 + pyenv local 3.11.7 16:48:52 + local command 16:48:52 + command=local 16:48:52 + '[' 2 -gt 0 ']' 16:48:52 + shift 16:48:52 + case "$command" in 16:48:52 + command pyenv local 3.11.7 16:48:52 + pyenv local 3.11.7 16:48:52 + for arg in "$@" 16:48:52 + case $arg in 16:48:52 + pkg_list+='tox ' 16:48:52 + for arg in "$@" 16:48:52 + case $arg in 16:48:52 + pkg_list+='virtualenv ' 16:48:52 + for arg in "$@" 16:48:52 + case $arg in 16:48:52 + pkg_list+='urllib3~=1.26.15 ' 16:48:52 + [[ -f /tmp/.toxenv ]] 16:48:52 ++ cat /tmp/.toxenv 16:48:52 + lf_venv=/tmp/venv-dXo8 16:48:52 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dXo8 from' file:/tmp/.toxenv 16:48:52 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dXo8 from file:/tmp/.toxenv 16:48:52 + /tmp/venv-dXo8/bin/python3 -m pip install --upgrade --quiet pip virtualenv 16:48:53 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 16:48:53 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 16:48:53 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 16:48:53 + /tmp/venv-dXo8/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 16:48:54 + type python3 16:48:54 + true 16:48:54 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-dXo8/bin to PATH' 16:48:54 lf-activate-venv(): INFO: Adding /tmp/venv-dXo8/bin to PATH 16:48:54 + PATH=/tmp/venv-dXo8/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:54 + return 0 16:48:54 + [[ -d /opt/pyenv ]] 16:48:54 + echo '---> Setting up pyenv' 16:48:54 ---> Setting up pyenv 16:48:54 + export PYENV_ROOT=/opt/pyenv 16:48:54 + PYENV_ROOT=/opt/pyenv 16:48:54 + export PATH=/opt/pyenv/bin:/tmp/venv-dXo8/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:54 + 
PATH=/opt/pyenv/bin:/tmp/venv-dXo8/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 16:48:54 ++ pwd 16:48:54 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master 16:48:54 + export PYTHONPATH 16:48:54 + export TOX_TESTENV_PASSENV=PYTHONPATH 16:48:54 + TOX_TESTENV_PASSENV=PYTHONPATH 16:48:54 + tox --version 16:48:55 4.23.2 from /tmp/venv-dXo8/lib/python3.11/site-packages/tox/__init__.py 16:48:55 + PARALLEL=True 16:48:55 + TOX_OPTIONS_LIST= 16:48:55 + [[ -n '' ]] 16:48:55 + case ${PARALLEL,,} in 16:48:55 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' 16:48:55 + tox --parallel auto --parallel-live 16:48:55 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log 16:48:56 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt 16:48:56 docs: install_deps> python -I -m pip install -r docs/requirements.txt 16:48:56 checkbashisms: freeze> python -m pip freeze --all 16:48:56 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:48:57 checkbashisms: pip==24.2,setuptools==75.2.0,wheel==0.44.0 16:48:57 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 16:48:57 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' 16:48:57 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + 16:48:58 script ./reflectwarn.sh does not appear to have a #! interpreter line; 16:48:58 you may get strange results 16:48:58 checkbashisms: OK ✔ in 2.96 seconds 16:48:58 pre-commit: install_deps> python -I -m pip install pre-commit 16:49:01 pre-commit: freeze> python -m pip freeze --all 16:49:01 pre-commit: cfgv==3.4.0,distlib==0.3.9,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.2,platformdirs==4.3.6,pre_commit==4.0.1,PyYAML==6.0.2,setuptools==75.2.0,virtualenv==20.27.0,wheel==0.44.0 16:49:01 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 16:49:01 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' 16:49:01 /usr/bin/cpan 16:49:01 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure 16:49:01 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 
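tox-run.sh then launches every tox environment in parallel and mirrors the console into the archived tox.log, exactly as traced above; the manual equivalent from the workspace root is:

    cd /w/workspace/transportpce-tox-verify-transportpce-master
    mkdir -p archives/tox
    export PYTHONPATH=$PWD TOX_TESTENV_PASSENV=PYTHONPATH
    tox --parallel auto --parallel-live | tee -a archives/tox/tox.log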
16:49:01 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 16:49:01 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. 16:49:02 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo. 16:49:02 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. 16:49:02 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. 16:49:03 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. 16:49:03 buildcontroller: freeze> python -m pip freeze --all 16:49:03 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. 16:49:03 buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 16:49:03 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh 16:49:03 + update-java-alternatives -l 16:49:03 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 16:49:03 java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64 16:49:03 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 16:49:03 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 16:49:03 java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64 16:49:03 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 16:49:04 [INFO] Initializing environment for https://github.com/perltidy/perltidy. 16:49:04 + java -version 16:49:04 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; 16:49:04 21 16:49:04 + JAVA_VER=21 16:49:04 + echo 21 16:49:04 + javac -version 16:49:04 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; 16:49:04 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 16:49:04 [INFO] Once installed this environment will be reused. 16:49:04 [INFO] This may take a few minutes... 
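build_controller.sh starts by switching the node to OpenJDK 21 and checking that both java and javac report major version 21 or newer. A sketch of that check (the sed quoting is reconstructed here, since set -x strips the quotes in the trace above):

    sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
    JAVA_VER=$(java -version 2>&1 | sed -n 's/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p')
    JAVAC_VER=$(javac -version 2>&1 | sed -n 's/javac \(.*\)\.\(.*\)\..*.*$/\1/p')
    [ "$JAVA_VER" -ge 21 ] && [ "$JAVAC_VER" -ge 21 ] && echo "ok, java is $JAVA_VER or newer"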
16:49:05 + JAVAC_VER=21 16:49:05 + echo 21 16:49:05 + [ 21 -ge 21 ] 16:49:05 + [ 21 -ge 21 ] 16:49:05 + echo ok, java is 21 or newer 16:49:05 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp 16:49:05 21 16:49:05 ok, java is 21 or newer 16:49:05 2024-10-25 16:49:05 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1] 16:49:05 + sudo mkdir -p /opt 16:49:05 + sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt 16:49:05 + sudo ln -s /opt/apache-maven-3.9.8 /opt/maven 16:49:05 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn 16:49:05 + mvn --version 16:49:06 Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256) 16:49:06 Maven home: /opt/maven 16:49:06 Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 16:49:06 Default locale: en, platform encoding: UTF-8 16:49:06 OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix" 16:49:06 NOTE: Picked up JDK_JAVA_OPTIONS: 16:49:06 --add-opens=java.base/java.io=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.lang=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.net=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.nio=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.nio.file=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.util=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.util.jar=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.util.stream=ALL-UNNAMED 16:49:06 --add-opens=java.base/java.util.zip=ALL-UNNAMED 16:49:06 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 16:49:06 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 16:49:06 -Xlog:disable 16:49:08 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. 16:49:08 [INFO] Once installed this environment will be reused. 16:49:08 [INFO] This may take a few minutes... 16:49:14 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. 16:49:14 [INFO] Once installed this environment will be reused. 16:49:14 [INFO] This may take a few minutes... 16:49:18 [INFO] Installing environment for https://github.com/perltidy/perltidy. 16:49:18 [INFO] Once installed this environment will be reused. 16:49:18 [INFO] This may take a few minutes... 
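The script then installs Maven 3.9.8 under /opt and links it onto the PATH; the traced steps boil down to:

    wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp
    sudo mkdir -p /opt
    sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt
    sudo ln -s /opt/apache-maven-3.9.8 /opt/maven
    sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
    mvn --version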
16:49:25 docs-linkcheck: freeze> python -m pip freeze --all 16:49:25 docs-linkcheck: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.2,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.2.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 16:49:25 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck 16:49:26 docs: freeze> python -m pip freeze --all 16:49:26 docs: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.2,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.2.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 16:49:26 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html 16:49:28 trim trailing whitespace.................................................Passed 16:49:29 Tabs remover.............................................................Passed 16:49:29 autopep8.................................................................docs: OK ✔ in 37.88 seconds 16:49:33 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' 16:49:33 Passed 16:49:33 perltidy.................................................................Passed 16:49:34 
pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual 16:49:34 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 16:49:34 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 16:49:34 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 16:49:34 [INFO] Once installed this environment will be reused. 16:49:34 [INFO] This may take a few minutes... 16:49:38 docs-linkcheck: OK ✔ in 39.61 seconds 16:49:38 pylint: freeze> python -m pip freeze --all 16:49:38 pylint: astroid==3.3.5,dill==0.3.9,isort==5.13.2,mccabe==0.7.0,pip==24.2,platformdirs==4.3.6,pylint==3.3.1,setuptools==75.2.0,tomlkit==0.13.2,wheel==0.44.0 16:49:38 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 16:49:42 gitlint..................................................................Passed 16:49:57 16:49:57 ------------------------------------ 16:49:57 Your code has been rated at 10.00/10 16:49:57 16:50:43 pre-commit: OK ✔ in 43.88 seconds 16:50:43 pylint: OK ✔ in 25.46 seconds 16:50:43 buildcontroller: OK ✔ in 1 minute 47.19 seconds 16:50:43 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:50:43 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:50:43 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:50:43 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:50:50 sims: freeze> python -m pip freeze --all 16:50:50 build_karaf_tests221: freeze> python -m pip freeze --all 16:50:50 build_karaf_tests121: freeze> python -m pip freeze --all 16:50:50 build_karaf_tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 16:50:50 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 16:50:50 sims: 
bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 16:50:50 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh 16:50:50 NOTE: Picked up JDK_JAVA_OPTIONS: 16:50:50 --add-opens=java.base/java.io=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.lang=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.net=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.nio=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.nio.file=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util.jar=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util.stream=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util.zip=ALL-UNNAMED 16:50:50 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 16:50:50 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 16:50:50 -Xlog:disable 16:50:50 Using lighynode version 20.1.0.2 16:50:50 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory 16:50:50 build_karaf_tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 16:50:50 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 16:50:50 NOTE: Picked up JDK_JAVA_OPTIONS: 16:50:50 --add-opens=java.base/java.io=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.lang=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.net=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.nio=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.nio.file=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util.jar=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util.stream=ALL-UNNAMED 16:50:50 --add-opens=java.base/java.util.zip=ALL-UNNAMED 16:50:50 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 16:50:50 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 16:50:50 -Xlog:disable 16:50:53 sims: OK ✔ in 11.14 seconds 16:50:53 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:51:06 build_karaf_tests71: freeze> python -m pip freeze --all 16:51:07 build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 
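Each of the environments above can also be run in isolation instead of through the full parallel verify job; assuming the env names from this tox.ini, for example:

    cd /w/workspace/transportpce-tox-verify-transportpce-master
    tox -e sims                   # install the lightynode simulator only
    tox -e build_karaf_tests121   # build the karaf distribution used by the 1.2.1 suites
    tox -e tests_tapi             # run the TAPI functional tests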
16:51:07 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 16:51:07 NOTE: Picked up JDK_JAVA_OPTIONS: 16:51:07 --add-opens=java.base/java.io=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.lang=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.net=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.nio=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.nio.file=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.util=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.util.jar=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.util.stream=ALL-UNNAMED 16:51:07 --add-opens=java.base/java.util.zip=ALL-UNNAMED 16:51:07 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 16:51:07 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 16:51:07 -Xlog:disable 16:51:35 build_karaf_tests221: OK ✔ in 52.23 seconds 16:51:35 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:51:36 build_karaf_tests121: OK ✔ in 53.38 seconds 16:51:36 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:51:41 build_karaf_tests_hybrid: freeze> python -m pip freeze --all 16:51:41 build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 16:51:41 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 16:51:41 NOTE: Picked up JDK_JAVA_OPTIONS: 16:51:41 --add-opens=java.base/java.io=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.lang=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.net=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.nio=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.nio.file=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.util=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.util.jar=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.util.stream=ALL-UNNAMED 16:51:41 --add-opens=java.base/java.util.zip=ALL-UNNAMED 16:51:41 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 16:51:41 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 16:51:41 -Xlog:disable 16:51:41 tests_tapi: freeze> python -m pip freeze --all 16:51:42 tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 16:51:42 tests_tapi: commands[0] 
/w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi 16:51:42 using environment variables from ./karaf221.env 16:51:42 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py 16:52:01 build_karaf_tests71: OK ✔ in 1 minute 3.08 seconds 16:52:01 testsPCE: freeze> python -m pip freeze --all 16:52:01 testsPCE: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.54.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==3.0.2,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==11.0.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.2.0,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0 16:52:01 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce 16:52:01 pytest -q transportpce_tests/pce/test01_pce.py 16:52:59 ....................................... [100%] 16:54:02 20 passed in 120.84s (0:02:00) 16:54:02 pytest -q transportpce_tests/pce/test02_pce_400G.py 16:54:03 ...................... [100%] 16:54:45 9 passed in 42.55s 16:54:45 pytest -q transportpce_tests/pce/test03_gnpy.py 16:54:53 .............. [100%] 16:55:24 8 passed in 38.32s 16:55:24 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py 16:55:38 ............ [100%] 16:55:43 50 passed in 240.47s (0:04:00) 16:55:43 pytest -q transportpce_tests/tapi/test02_full_topology.py 16:55:59 ... [100%] 16:56:04 3 passed in 40.00s 16:56:04 build_karaf_tests_hybrid: OK ✔ in 1 minute 0.72 seconds 16:56:04 testsPCE: OK ✔ in 5 minutes 22.16 seconds 16:56:04 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 16:56:11 tests121: freeze> python -m pip freeze --all 16:56:11 tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 16:56:11 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 16:56:11 using environment variables from ./karaf121.env 16:56:11 pytest -q transportpce_tests/1.2.1/test01_portmapping.py 16:57:08 ...........FF....................... 
[100%]
17:00:36 =================================== FAILURES ===================================
17:00:36 _____________ TransportPCEtesting.test_12_check_openroadm_topology _____________
17:00:36
17:00:36 self =
17:00:36
17:00:36 def test_12_check_openroadm_topology(self):
17:00:36 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
17:00:36 self.assertEqual(response['status_code'], requests.codes.ok)
17:00:36 > self.assertEqual(len(response['network'][0]['node']), 14, 'There should be 14 openroadm nodes')
17:00:36 E AssertionError: 18 != 14 : There should be 14 openroadm nodes
17:00:36
17:00:36 transportpce_tests/tapi/test02_full_topology.py:272: AssertionError
17:00:36 ____________ TransportPCEtesting.test_13_get_tapi_topology_details _____________
17:00:36
17:00:36 self =
17:00:36
17:00:36 def test_13_get_tapi_topology_details(self):
17:00:36 self.tapi_topo["topology-id"] = test_utils.T0_FULL_MULTILAYER_TOPO_UUID
17:00:36 response = test_utils.transportpce_api_rpc_request(
17:00:36 'tapi-topology', 'get-topology-details', self.tapi_topo)
17:00:36 time.sleep(2)
17:00:36 self.assertEqual(response['status_code'], requests.codes.ok)
17:00:36 > self.assertEqual(len(response['output']['topology']['node']), 8, 'There should be 8 TAPI nodes')
17:00:36 E AssertionError: 9 != 8 : There should be 8 TAPI nodes
17:00:36
17:00:36 transportpce_tests/tapi/test02_full_topology.py:282: AssertionError
17:00:36 =========================== short test summary info ============================
17:00:36 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_12_check_openroadm_topology
17:00:36 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_13_get_tapi_topology_details
17:00:36 2 failed, 28 passed in 293.02s (0:04:53)
17:00:36 tests_tapi: exit 1 (534.10 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi pid=30678
17:00:36 tests_tapi: FAIL ✖ in 9 minutes 0.5 seconds
17:00:36 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:00:53 tests71: freeze> python -m pip freeze --all
17:00:53 tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0
17:00:53 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1
17:00:53 using environment variables from ./karaf71.env
17:00:53 pytest -q transportpce_tests/7.1/test01_portmapping.py
17:01:24 ............ [100%]
17:01:37 12 passed in 43.53s
17:01:37 pytest -q transportpce_tests/7.1/test02_otn_renderer.py
17:02:02 .....................................FF.FF.FF.FF.......................
[100%] 17:04:13 62 passed in 155.23s (0:02:35) 17:04:13 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 17:04:44 ................................................FF [100%] 17:06:27 48 passed in 134.57s (0:02:14) 17:06:27 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 17:06:28 FFFF [100%] 17:06:34 =================================== FAILURES =================================== 17:06:34 _________ TransportPCEPortMappingTesting.test_08_xpdr_device_connected _________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_08_xpdr_device_connected(self): 17:06:34 response = test_utils.check_device_connection("XPDRA01") 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:104: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_08_xpdr_device_connected 17:06:34 _________ TransportPCEPortMappingTesting.test_09_xpdr_portmapping_info _________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_09_xpdr_portmapping_info(self): 17:06:34 response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:110: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_09_xpdr_portmapping_info 17:06:34 _______ TransportPCEPortMappingTesting.test_10_xpdr_portmapping_NETWORK1 _______ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_10_xpdr_portmapping_NETWORK1(self): 17:06:34 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1") 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:123: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_10_xpdr_portmapping_NETWORK1 17:06:34 _______ TransportPCEPortMappingTesting.test_11_xpdr_portmapping_NETWORK2 _______ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_11_xpdr_portmapping_NETWORK2(self): 17:06:34 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:134: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_11_xpdr_portmapping_NETWORK2 17:06:34 _______ TransportPCEPortMappingTesting.test_12_xpdr_portmapping_CLIENT1 ________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_12_xpdr_portmapping_CLIENT1(self): 17:06:34 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1") 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:145: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_12_xpdr_portmapping_CLIENT1 17:06:34 _______ TransportPCEPortMappingTesting.test_13_xpdr_portmapping_CLIENT2 ________ 17:06:34 17:06:34 self = 17:06:34 
17:06:34 def test_13_xpdr_portmapping_CLIENT2(self): 17:06:34 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2") 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:157: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_13_xpdr_portmapping_CLIENT2 17:06:34 _______ TransportPCEPortMappingTesting.test_14_xpdr_portmapping_CLIENT3 ________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_14_xpdr_portmapping_CLIENT3(self): 17:06:34 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3") 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:169: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_14_xpdr_portmapping_CLIENT3 17:06:34 _______ TransportPCEPortMappingTesting.test_15_xpdr_portmapping_CLIENT4 ________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_15_xpdr_portmapping_CLIENT4(self): 17:06:34 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4") 17:06:34 > self.assertEqual(response['status_code'], requests.codes.ok) 17:06:34 E AssertionError: 409 != 200 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:181: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_15_xpdr_portmapping_CLIENT4 17:06:34 _______ TransportPCEPortMappingTesting.test_16_xpdr_device_disconnection _______ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_16_xpdr_device_disconnection(self): 17:06:34 response = test_utils.unmount_device("XPDRA01") 17:06:34 > self.assertIn(response.status_code, (requests.codes.ok, requests.codes.no_content)) 17:06:34 E AssertionError: 409 not found in (200, 204) 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:192: AssertionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_16_xpdr_device_disconnection 17:06:34 Searching for pattern 'onDeviceDisConnected:\ XPDRA01' in karaf.log... Pattern not found after 180 seconds! Node XPDRA01 still not deleted from tpce topology... 17:06:34 _______ TransportPCEPortMappingTesting.test_17_xpdr_device_disconnected ________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 
17:06:34 """ 17:06:34 try: 17:06:34 > sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:06:34 raise err 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 address = ('localhost', 8182), timeout = 10, source_address = None 17:06:34 socket_options = [(6, 1, 1)] 17:06:34 17:06:34 def create_connection( 17:06:34 address: tuple[str, int], 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 source_address: tuple[str, int] | None = None, 17:06:34 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:06:34 ) -> socket.socket: 17:06:34 """Connect to *address* and return the socket object. 17:06:34 17:06:34 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:06:34 port)``) and return the socket object. Passing the optional 17:06:34 *timeout* parameter will set the timeout on the socket instance 17:06:34 before attempting to connect. If no *timeout* is supplied, the 17:06:34 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:06:34 is used. If *source_address* is set it must be a tuple of (host, port) 17:06:34 for the socket to bind as a source address before making the connection. 17:06:34 An host of '' or port 0 tells the OS to use the default. 17:06:34 """ 17:06:34 17:06:34 host, port = address 17:06:34 if host.startswith("["): 17:06:34 host = host.strip("[]") 17:06:34 err = None 17:06:34 17:06:34 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:06:34 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:06:34 # The original create_connection function always returns all records. 17:06:34 family = allowed_gai_family() 17:06:34 17:06:34 try: 17:06:34 host.encode("idna") 17:06:34 except UnicodeError: 17:06:34 raise LocationParseError(f"'{host}', label empty or too long") from None 17:06:34 17:06:34 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:06:34 af, socktype, proto, canonname, sa = res 17:06:34 sock = None 17:06:34 try: 17:06:34 sock = socket.socket(af, socktype, proto) 17:06:34 17:06:34 # If provided, set socket level options before connecting. 
17:06:34 _set_socket_options(sock, socket_options) 17:06:34 17:06:34 if timeout is not _DEFAULT_TIMEOUT: 17:06:34 sock.settimeout(timeout) 17:06:34 if source_address: 17:06:34 sock.bind(source_address) 17:06:34 > sock.connect(sa) 17:06:34 E ConnectionRefusedError: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 17:06:34 body = None 17:06:34 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:06:34 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 redirect = False, assert_same_host = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:06:34 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:06:34 decode_content = False, response_kw = {} 17:06:34 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None) 17:06:34 destination_scheme = None, conn = None, release_this_conn = True 17:06:34 http_tunnel_required = False, err = None, clean_exit = False 17:06:34 17:06:34 def urlopen( # type: ignore[override] 17:06:34 self, 17:06:34 method: str, 17:06:34 url: str, 17:06:34 body: _TYPE_BODY | None = None, 17:06:34 headers: typing.Mapping[str, str] | None = None, 17:06:34 retries: Retry | bool | int | None = None, 17:06:34 redirect: bool = True, 17:06:34 assert_same_host: bool = True, 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 pool_timeout: int | None = None, 17:06:34 release_conn: bool | None = None, 17:06:34 chunked: bool = False, 17:06:34 body_pos: _TYPE_BODY_POSITION | None = None, 17:06:34 preload_content: bool = True, 17:06:34 decode_content: bool = True, 17:06:34 **response_kw: typing.Any, 17:06:34 ) -> BaseHTTPResponse: 17:06:34 """ 17:06:34 Get a connection from the pool and perform an HTTP request. This is the 17:06:34 lowest level call for making a request, so you'll need to specify all 17:06:34 the raw details. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 More commonly, it's appropriate to use a convenience method 17:06:34 such as :meth:`request`. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 `release_conn` will only behave as expected if 17:06:34 `preload_content=False` because we want to make 17:06:34 `preload_content=False` the default behaviour someday soon without 17:06:34 breaking backwards compatibility. 17:06:34 17:06:34 :param method: 17:06:34 HTTP request method (such as GET, POST, PUT, etc.) 17:06:34 17:06:34 :param url: 17:06:34 The URL to perform the request on. 17:06:34 17:06:34 :param body: 17:06:34 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:06:34 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:06:34 17:06:34 :param headers: 17:06:34 Dictionary of custom headers to send, such as User-Agent, 17:06:34 If-None-Match, etc. If None, pool headers are used. If provided, 17:06:34 these headers completely replace any pool-specific headers. 
17:06:34 17:06:34 :param retries: 17:06:34 Configure the number of retries to allow before raising a 17:06:34 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:06:34 17:06:34 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:06:34 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:06:34 over different types of retries. 17:06:34 Pass an integer number to retry connection errors that many times, 17:06:34 but no other types of errors. Pass zero to never retry. 17:06:34 17:06:34 If ``False``, then retries are disabled and any exception is raised 17:06:34 immediately. Also, instead of raising a MaxRetryError on redirects, 17:06:34 the redirect response will be returned. 17:06:34 17:06:34 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:06:34 17:06:34 :param redirect: 17:06:34 If True, automatically handle redirects (status codes 301, 302, 17:06:34 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:06:34 will disable redirect, too. 17:06:34 17:06:34 :param assert_same_host: 17:06:34 If ``True``, will make sure that the host of the pool requests is 17:06:34 consistent else will raise HostChangedError. When ``False``, you can 17:06:34 use the pool on an HTTP proxy and request foreign hosts. 17:06:34 17:06:34 :param timeout: 17:06:34 If specified, overrides the default timeout for this one 17:06:34 request. It may be a float (in seconds) or an instance of 17:06:34 :class:`urllib3.util.Timeout`. 17:06:34 17:06:34 :param pool_timeout: 17:06:34 If set and the pool is set to block=True, then this method will 17:06:34 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:06:34 connection is available within the time period. 17:06:34 17:06:34 :param bool preload_content: 17:06:34 If True, the response's body will be preloaded into memory. 17:06:34 17:06:34 :param bool decode_content: 17:06:34 If True, will attempt to decode the body based on the 17:06:34 'content-encoding' header. 17:06:34 17:06:34 :param release_conn: 17:06:34 If False, then the urlopen call will not release the connection 17:06:34 back into the pool once a response is received (but will release if 17:06:34 you read the entire contents of the response such as when 17:06:34 `preload_content=True`). This is useful if you're not preloading 17:06:34 the response's content immediately. You will need to call 17:06:34 ``r.release_conn()`` on the response ``r`` to return the connection 17:06:34 back into the pool. If None, it takes the value of ``preload_content`` 17:06:34 which defaults to ``True``. 17:06:34 17:06:34 :param bool chunked: 17:06:34 If True, urllib3 will send the body using chunked transfer 17:06:34 encoding. Otherwise, urllib3 will send the body using the standard 17:06:34 content-length form. Defaults to False. 17:06:34 17:06:34 :param int body_pos: 17:06:34 Position to seek to in file-like body in the event of a retry or 17:06:34 redirect. Typically this won't need to be set because urllib3 will 17:06:34 auto-populate the value when needed. 
17:06:34 """ 17:06:34 parsed_url = parse_url(url) 17:06:34 destination_scheme = parsed_url.scheme 17:06:34 17:06:34 if headers is None: 17:06:34 headers = self.headers 17:06:34 17:06:34 if not isinstance(retries, Retry): 17:06:34 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:06:34 17:06:34 if release_conn is None: 17:06:34 release_conn = preload_content 17:06:34 17:06:34 # Check host 17:06:34 if assert_same_host and not self.is_same_host(url): 17:06:34 raise HostChangedError(self, url, retries) 17:06:34 17:06:34 # Ensure that the URL we're connecting to is properly encoded 17:06:34 if url.startswith("/"): 17:06:34 url = to_str(_encode_target(url)) 17:06:34 else: 17:06:34 url = to_str(parsed_url.url) 17:06:34 17:06:34 conn = None 17:06:34 17:06:34 # Track whether `conn` needs to be released before 17:06:34 # returning/raising/recursing. Update this variable if necessary, and 17:06:34 # leave `release_conn` constant throughout the function. That way, if 17:06:34 # the function recurses, the original value of `release_conn` will be 17:06:34 # passed down into the recursive call, and its value will be respected. 17:06:34 # 17:06:34 # See issue #651 [1] for details. 17:06:34 # 17:06:34 # [1] 17:06:34 release_this_conn = release_conn 17:06:34 17:06:34 http_tunnel_required = connection_requires_http_tunnel( 17:06:34 self.proxy, self.proxy_config, destination_scheme 17:06:34 ) 17:06:34 17:06:34 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:06:34 # have to copy the headers dict so we can safely change it without those 17:06:34 # changes being reflected in anyone else's copy. 17:06:34 if not http_tunnel_required: 17:06:34 headers = headers.copy() # type: ignore[attr-defined] 17:06:34 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:06:34 17:06:34 # Must keep the exception bound to a separate variable or else Python 3 17:06:34 # complains about UnboundLocalError. 17:06:34 err = None 17:06:34 17:06:34 # Keep track of whether we cleanly exited the except block. This 17:06:34 # ensures we do proper cleanup in finally. 17:06:34 clean_exit = False 17:06:34 17:06:34 # Rewind body position, if needed. Record current position 17:06:34 # for future rewinds in the event of a redirect/retry. 17:06:34 body_pos = set_file_position(body, body_pos) 17:06:34 17:06:34 try: 17:06:34 # Request a connection from the queue. 17:06:34 timeout_obj = self._get_timeout(timeout) 17:06:34 conn = self._get_conn(timeout=pool_timeout) 17:06:34 17:06:34 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:06:34 17:06:34 # Is this a closed/new connection that requires CONNECT tunnelling? 17:06:34 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:06:34 try: 17:06:34 self._prepare_proxy(conn) 17:06:34 except (BaseSSLError, OSError, SocketTimeout) as e: 17:06:34 self._raise_timeout( 17:06:34 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:06:34 ) 17:06:34 raise 17:06:34 17:06:34 # If we're going to release the connection in ``finally:``, then 17:06:34 # the response doesn't need to know about the connection. Otherwise 17:06:34 # it will also try to release it and we'll have a double-release 17:06:34 # mess. 
17:06:34 response_conn = conn if not release_conn else None 17:06:34 17:06:34 # Make the request on the HTTPConnection object 17:06:34 > response = self._make_request( 17:06:34 conn, 17:06:34 method, 17:06:34 url, 17:06:34 timeout=timeout_obj, 17:06:34 body=body, 17:06:34 headers=headers, 17:06:34 chunked=chunked, 17:06:34 retries=retries, 17:06:34 response_conn=response_conn, 17:06:34 preload_content=preload_content, 17:06:34 decode_content=decode_content, 17:06:34 **response_kw, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:06:34 conn.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:06:34 self.endheaders() 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:06:34 self._send_output(message_body, encode_chunked=encode_chunked) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:06:34 self.send(msg) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:06:34 self.connect() 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:06:34 self.sock = self._new_conn() 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 except socket.gaierror as e: 17:06:34 raise NameResolutionError(self.host, self, e) from e 17:06:34 except SocketTimeout as e: 17:06:34 raise ConnectTimeoutError( 17:06:34 self, 17:06:34 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:06:34 ) from e 17:06:34 17:06:34 except OSError as e: 17:06:34 > raise NewConnectionError( 17:06:34 self, f"Failed to establish a new connection: {e}" 17:06:34 ) from e 17:06:34 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 
17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 > resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:06:34 retries = retries.increment( 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 17:06:34 response = None 17:06:34 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:06:34 _pool = 17:06:34 _stacktrace = 17:06:34 17:06:34 def increment( 17:06:34 self, 17:06:34 method: str | None = None, 17:06:34 url: str | None = None, 17:06:34 response: BaseHTTPResponse | None = None, 17:06:34 error: Exception | None = None, 17:06:34 _pool: ConnectionPool | None = None, 17:06:34 _stacktrace: TracebackType | None = None, 17:06:34 ) -> Self: 17:06:34 """Return a new Retry object with incremented retry counters. 17:06:34 17:06:34 :param response: A response object, or None, if the server did not 17:06:34 return a response. 17:06:34 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:06:34 :param Exception error: An error encountered during the request, or 17:06:34 None if the response was received successfully. 
17:06:34 17:06:34 :return: A new ``Retry`` object. 17:06:34 """ 17:06:34 if self.total is False and error: 17:06:34 # Disabled, indicate to re-raise the error. 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 17:06:34 total = self.total 17:06:34 if total is not None: 17:06:34 total -= 1 17:06:34 17:06:34 connect = self.connect 17:06:34 read = self.read 17:06:34 redirect = self.redirect 17:06:34 status_count = self.status 17:06:34 other = self.other 17:06:34 cause = "unknown" 17:06:34 status = None 17:06:34 redirect_location = None 17:06:34 17:06:34 if error and self._is_connection_error(error): 17:06:34 # Connect retry? 17:06:34 if connect is False: 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif connect is not None: 17:06:34 connect -= 1 17:06:34 17:06:34 elif error and self._is_read_error(error): 17:06:34 # Read retry? 17:06:34 if read is False or method is None or not self._is_method_retryable(method): 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif read is not None: 17:06:34 read -= 1 17:06:34 17:06:34 elif error: 17:06:34 # Other retry? 17:06:34 if other is not None: 17:06:34 other -= 1 17:06:34 17:06:34 elif response and response.get_redirect_location(): 17:06:34 # Redirect retry? 17:06:34 if redirect is not None: 17:06:34 redirect -= 1 17:06:34 cause = "too many redirects" 17:06:34 response_redirect_location = response.get_redirect_location() 17:06:34 if response_redirect_location: 17:06:34 redirect_location = response_redirect_location 17:06:34 status = response.status 17:06:34 17:06:34 else: 17:06:34 # Incrementing because of a server error like a 500 in 17:06:34 # status_forcelist and the given method is in the allowed_methods 17:06:34 cause = ResponseError.GENERIC_ERROR 17:06:34 if response and response.status: 17:06:34 if status_count is not None: 17:06:34 status_count -= 1 17:06:34 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:06:34 status = response.status 17:06:34 17:06:34 history = self.history + ( 17:06:34 RequestHistory(method, url, error, status, redirect_location), 17:06:34 ) 17:06:34 17:06:34 new_retry = self.new( 17:06:34 total=total, 17:06:34 connect=connect, 17:06:34 read=read, 17:06:34 redirect=redirect, 17:06:34 status=status_count, 17:06:34 other=other, 17:06:34 history=history, 17:06:34 ) 17:06:34 17:06:34 if new_retry.is_exhausted(): 17:06:34 reason = error or ResponseError(cause) 17:06:34 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:06:34 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:06:34 17:06:34 During handling of the above exception, another exception occurred: 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_17_xpdr_device_disconnected(self): 17:06:34 > response = test_utils.check_device_connection("XPDRA01") 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:195: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 transportpce_tests/common/test_utils.py:369: in check_device_connection 17:06:34 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 17:06:34 
transportpce_tests/common/test_utils.py:116: in get_request 17:06:34 return requests.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:06:34 return session.request(method=method, url=url, **kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:06:34 resp = self.send(prep, **send_kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:06:34 r = adapter.send(request, **kwargs) 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 
17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 except (ProtocolError, OSError) as err: 17:06:34 raise ConnectionError(err, request=request) 17:06:34 17:06:34 except MaxRetryError as e: 17:06:34 if isinstance(e.reason, ConnectTimeoutError): 17:06:34 # TODO: Remove this in 3.0.0: see #2811 17:06:34 if not isinstance(e.reason, NewConnectionError): 17:06:34 raise ConnectTimeout(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, ResponseError): 17:06:34 raise RetryError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _ProxyError): 17:06:34 raise ProxyError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _SSLError): 17:06:34 # This branch is for urllib3 v1.22 and later. 17:06:34 raise SSLError(e, request=request) 17:06:34 17:06:34 > raise ConnectionError(e, request=request) 17:06:34 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_17_xpdr_device_disconnected 17:06:34 _______ TransportPCEPortMappingTesting.test_18_xpdr_device_not_connected _______ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 > sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:06:34 raise err 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 address = ('localhost', 8182), timeout = 10, source_address = None 17:06:34 socket_options = [(6, 1, 1)] 17:06:34 17:06:34 def create_connection( 17:06:34 address: tuple[str, int], 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 source_address: tuple[str, int] | None = None, 17:06:34 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:06:34 ) -> socket.socket: 17:06:34 """Connect to *address* and return the socket object. 17:06:34 17:06:34 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:06:34 port)``) and return the socket object. 
Passing the optional 17:06:34 *timeout* parameter will set the timeout on the socket instance 17:06:34 before attempting to connect. If no *timeout* is supplied, the 17:06:34 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:06:34 is used. If *source_address* is set it must be a tuple of (host, port) 17:06:34 for the socket to bind as a source address before making the connection. 17:06:34 An host of '' or port 0 tells the OS to use the default. 17:06:34 """ 17:06:34 17:06:34 host, port = address 17:06:34 if host.startswith("["): 17:06:34 host = host.strip("[]") 17:06:34 err = None 17:06:34 17:06:34 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:06:34 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:06:34 # The original create_connection function always returns all records. 17:06:34 family = allowed_gai_family() 17:06:34 17:06:34 try: 17:06:34 host.encode("idna") 17:06:34 except UnicodeError: 17:06:34 raise LocationParseError(f"'{host}', label empty or too long") from None 17:06:34 17:06:34 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:06:34 af, socktype, proto, canonname, sa = res 17:06:34 sock = None 17:06:34 try: 17:06:34 sock = socket.socket(af, socktype, proto) 17:06:34 17:06:34 # If provided, set socket level options before connecting. 17:06:34 _set_socket_options(sock, socket_options) 17:06:34 17:06:34 if timeout is not _DEFAULT_TIMEOUT: 17:06:34 sock.settimeout(timeout) 17:06:34 if source_address: 17:06:34 sock.bind(source_address) 17:06:34 > sock.connect(sa) 17:06:34 E ConnectionRefusedError: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 17:06:34 body = None 17:06:34 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:06:34 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 redirect = False, assert_same_host = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:06:34 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:06:34 decode_content = False, response_kw = {} 17:06:34 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None) 17:06:34 destination_scheme = None, conn = None, release_this_conn = True 17:06:34 http_tunnel_required = False, err = None, clean_exit = False 17:06:34 17:06:34 def urlopen( # type: ignore[override] 17:06:34 self, 17:06:34 method: str, 17:06:34 url: str, 17:06:34 body: _TYPE_BODY | None = None, 17:06:34 headers: typing.Mapping[str, str] | None = None, 17:06:34 retries: Retry | bool | int | None = None, 17:06:34 redirect: bool = True, 17:06:34 assert_same_host: bool = True, 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 pool_timeout: int | None = None, 17:06:34 release_conn: bool | None = None, 17:06:34 chunked: bool = False, 17:06:34 body_pos: _TYPE_BODY_POSITION | None = None, 17:06:34 preload_content: bool = True, 
17:06:34 decode_content: bool = True, 17:06:34 **response_kw: typing.Any, 17:06:34 ) -> BaseHTTPResponse: 17:06:34 """ 17:06:34 Get a connection from the pool and perform an HTTP request. This is the 17:06:34 lowest level call for making a request, so you'll need to specify all 17:06:34 the raw details. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 More commonly, it's appropriate to use a convenience method 17:06:34 such as :meth:`request`. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 `release_conn` will only behave as expected if 17:06:34 `preload_content=False` because we want to make 17:06:34 `preload_content=False` the default behaviour someday soon without 17:06:34 breaking backwards compatibility. 17:06:34 17:06:34 :param method: 17:06:34 HTTP request method (such as GET, POST, PUT, etc.) 17:06:34 17:06:34 :param url: 17:06:34 The URL to perform the request on. 17:06:34 17:06:34 :param body: 17:06:34 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:06:34 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:06:34 17:06:34 :param headers: 17:06:34 Dictionary of custom headers to send, such as User-Agent, 17:06:34 If-None-Match, etc. If None, pool headers are used. If provided, 17:06:34 these headers completely replace any pool-specific headers. 17:06:34 17:06:34 :param retries: 17:06:34 Configure the number of retries to allow before raising a 17:06:34 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:06:34 17:06:34 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:06:34 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:06:34 over different types of retries. 17:06:34 Pass an integer number to retry connection errors that many times, 17:06:34 but no other types of errors. Pass zero to never retry. 17:06:34 17:06:34 If ``False``, then retries are disabled and any exception is raised 17:06:34 immediately. Also, instead of raising a MaxRetryError on redirects, 17:06:34 the redirect response will be returned. 17:06:34 17:06:34 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:06:34 17:06:34 :param redirect: 17:06:34 If True, automatically handle redirects (status codes 301, 302, 17:06:34 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:06:34 will disable redirect, too. 17:06:34 17:06:34 :param assert_same_host: 17:06:34 If ``True``, will make sure that the host of the pool requests is 17:06:34 consistent else will raise HostChangedError. When ``False``, you can 17:06:34 use the pool on an HTTP proxy and request foreign hosts. 17:06:34 17:06:34 :param timeout: 17:06:34 If specified, overrides the default timeout for this one 17:06:34 request. It may be a float (in seconds) or an instance of 17:06:34 :class:`urllib3.util.Timeout`. 17:06:34 17:06:34 :param pool_timeout: 17:06:34 If set and the pool is set to block=True, then this method will 17:06:34 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:06:34 connection is available within the time period. 17:06:34 17:06:34 :param bool preload_content: 17:06:34 If True, the response's body will be preloaded into memory. 17:06:34 17:06:34 :param bool decode_content: 17:06:34 If True, will attempt to decode the body based on the 17:06:34 'content-encoding' header. 
17:06:34 17:06:34 :param release_conn: 17:06:34 If False, then the urlopen call will not release the connection 17:06:34 back into the pool once a response is received (but will release if 17:06:34 you read the entire contents of the response such as when 17:06:34 `preload_content=True`). This is useful if you're not preloading 17:06:34 the response's content immediately. You will need to call 17:06:34 ``r.release_conn()`` on the response ``r`` to return the connection 17:06:34 back into the pool. If None, it takes the value of ``preload_content`` 17:06:34 which defaults to ``True``. 17:06:34 17:06:34 :param bool chunked: 17:06:34 If True, urllib3 will send the body using chunked transfer 17:06:34 encoding. Otherwise, urllib3 will send the body using the standard 17:06:34 content-length form. Defaults to False. 17:06:34 17:06:34 :param int body_pos: 17:06:34 Position to seek to in file-like body in the event of a retry or 17:06:34 redirect. Typically this won't need to be set because urllib3 will 17:06:34 auto-populate the value when needed. 17:06:34 """ 17:06:34 parsed_url = parse_url(url) 17:06:34 destination_scheme = parsed_url.scheme 17:06:34 17:06:34 if headers is None: 17:06:34 headers = self.headers 17:06:34 17:06:34 if not isinstance(retries, Retry): 17:06:34 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:06:34 17:06:34 if release_conn is None: 17:06:34 release_conn = preload_content 17:06:34 17:06:34 # Check host 17:06:34 if assert_same_host and not self.is_same_host(url): 17:06:34 raise HostChangedError(self, url, retries) 17:06:34 17:06:34 # Ensure that the URL we're connecting to is properly encoded 17:06:34 if url.startswith("/"): 17:06:34 url = to_str(_encode_target(url)) 17:06:34 else: 17:06:34 url = to_str(parsed_url.url) 17:06:34 17:06:34 conn = None 17:06:34 17:06:34 # Track whether `conn` needs to be released before 17:06:34 # returning/raising/recursing. Update this variable if necessary, and 17:06:34 # leave `release_conn` constant throughout the function. That way, if 17:06:34 # the function recurses, the original value of `release_conn` will be 17:06:34 # passed down into the recursive call, and its value will be respected. 17:06:34 # 17:06:34 # See issue #651 [1] for details. 17:06:34 # 17:06:34 # [1] 17:06:34 release_this_conn = release_conn 17:06:34 17:06:34 http_tunnel_required = connection_requires_http_tunnel( 17:06:34 self.proxy, self.proxy_config, destination_scheme 17:06:34 ) 17:06:34 17:06:34 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:06:34 # have to copy the headers dict so we can safely change it without those 17:06:34 # changes being reflected in anyone else's copy. 17:06:34 if not http_tunnel_required: 17:06:34 headers = headers.copy() # type: ignore[attr-defined] 17:06:34 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:06:34 17:06:34 # Must keep the exception bound to a separate variable or else Python 3 17:06:34 # complains about UnboundLocalError. 17:06:34 err = None 17:06:34 17:06:34 # Keep track of whether we cleanly exited the except block. This 17:06:34 # ensures we do proper cleanup in finally. 17:06:34 clean_exit = False 17:06:34 17:06:34 # Rewind body position, if needed. Record current position 17:06:34 # for future rewinds in the event of a redirect/retry. 17:06:34 body_pos = set_file_position(body, body_pos) 17:06:34 17:06:34 try: 17:06:34 # Request a connection from the queue. 
17:06:34 timeout_obj = self._get_timeout(timeout) 17:06:34 conn = self._get_conn(timeout=pool_timeout) 17:06:34 17:06:34 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:06:34 17:06:34 # Is this a closed/new connection that requires CONNECT tunnelling? 17:06:34 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:06:34 try: 17:06:34 self._prepare_proxy(conn) 17:06:34 except (BaseSSLError, OSError, SocketTimeout) as e: 17:06:34 self._raise_timeout( 17:06:34 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:06:34 ) 17:06:34 raise 17:06:34 17:06:34 # If we're going to release the connection in ``finally:``, then 17:06:34 # the response doesn't need to know about the connection. Otherwise 17:06:34 # it will also try to release it and we'll have a double-release 17:06:34 # mess. 17:06:34 response_conn = conn if not release_conn else None 17:06:34 17:06:34 # Make the request on the HTTPConnection object 17:06:34 > response = self._make_request( 17:06:34 conn, 17:06:34 method, 17:06:34 url, 17:06:34 timeout=timeout_obj, 17:06:34 body=body, 17:06:34 headers=headers, 17:06:34 chunked=chunked, 17:06:34 retries=retries, 17:06:34 response_conn=response_conn, 17:06:34 preload_content=preload_content, 17:06:34 decode_content=decode_content, 17:06:34 **response_kw, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:06:34 conn.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:06:34 self.endheaders() 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:06:34 self._send_output(message_body, encode_chunked=encode_chunked) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:06:34 self.send(msg) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:06:34 self.connect() 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:06:34 self.sock = self._new_conn() 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 except socket.gaierror as e: 17:06:34 raise NameResolutionError(self.host, self, e) from e 17:06:34 except SocketTimeout as e: 17:06:34 raise ConnectTimeoutError( 17:06:34 self, 17:06:34 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:06:34 ) from e 17:06:34 17:06:34 except OSError as e: 17:06:34 > raise NewConnectionError( 17:06:34 self, f"Failed to establish a new connection: {e}" 17:06:34 ) from e 17:06:34 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 
17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 > resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:06:34 retries = retries.increment( 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 17:06:34 response = None 17:06:34 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:06:34 _pool = 17:06:34 _stacktrace = 17:06:34 17:06:34 def increment( 17:06:34 self, 17:06:34 method: str | None = None, 17:06:34 url: str | None = None, 17:06:34 response: BaseHTTPResponse | None = None, 17:06:34 error: Exception | None = None, 17:06:34 _pool: ConnectionPool | None = None, 17:06:34 _stacktrace: TracebackType | None = None, 17:06:34 ) -> Self: 17:06:34 """Return a new Retry object with incremented retry counters. 17:06:34 17:06:34 :param response: A response object, or None, if the server did not 17:06:34 return a response. 17:06:34 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:06:34 :param Exception error: An error encountered during the request, or 17:06:34 None if the response was received successfully. 17:06:34 17:06:34 :return: A new ``Retry`` object. 17:06:34 """ 17:06:34 if self.total is False and error: 17:06:34 # Disabled, indicate to re-raise the error. 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 17:06:34 total = self.total 17:06:34 if total is not None: 17:06:34 total -= 1 17:06:34 17:06:34 connect = self.connect 17:06:34 read = self.read 17:06:34 redirect = self.redirect 17:06:34 status_count = self.status 17:06:34 other = self.other 17:06:34 cause = "unknown" 17:06:34 status = None 17:06:34 redirect_location = None 17:06:34 17:06:34 if error and self._is_connection_error(error): 17:06:34 # Connect retry? 17:06:34 if connect is False: 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif connect is not None: 17:06:34 connect -= 1 17:06:34 17:06:34 elif error and self._is_read_error(error): 17:06:34 # Read retry? 17:06:34 if read is False or method is None or not self._is_method_retryable(method): 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif read is not None: 17:06:34 read -= 1 17:06:34 17:06:34 elif error: 17:06:34 # Other retry? 17:06:34 if other is not None: 17:06:34 other -= 1 17:06:34 17:06:34 elif response and response.get_redirect_location(): 17:06:34 # Redirect retry? 
17:06:34 if redirect is not None: 17:06:34 redirect -= 1 17:06:34 cause = "too many redirects" 17:06:34 response_redirect_location = response.get_redirect_location() 17:06:34 if response_redirect_location: 17:06:34 redirect_location = response_redirect_location 17:06:34 status = response.status 17:06:34 17:06:34 else: 17:06:34 # Incrementing because of a server error like a 500 in 17:06:34 # status_forcelist and the given method is in the allowed_methods 17:06:34 cause = ResponseError.GENERIC_ERROR 17:06:34 if response and response.status: 17:06:34 if status_count is not None: 17:06:34 status_count -= 1 17:06:34 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:06:34 status = response.status 17:06:34 17:06:34 history = self.history + ( 17:06:34 RequestHistory(method, url, error, status, redirect_location), 17:06:34 ) 17:06:34 17:06:34 new_retry = self.new( 17:06:34 total=total, 17:06:34 connect=connect, 17:06:34 read=read, 17:06:34 redirect=redirect, 17:06:34 status=status_count, 17:06:34 other=other, 17:06:34 history=history, 17:06:34 ) 17:06:34 17:06:34 if new_retry.is_exhausted(): 17:06:34 reason = error or ResponseError(cause) 17:06:34 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:06:34 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:06:34 17:06:34 During handling of the above exception, another exception occurred: 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_18_xpdr_device_not_connected(self): 17:06:34 > response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:203: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr 17:06:34 response = get_request(target_url) 17:06:34 transportpce_tests/common/test_utils.py:116: in get_request 17:06:34 return requests.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:06:34 return session.request(method=method, url=url, **kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:06:34 resp = self.send(prep, **send_kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:06:34 r = adapter.send(request, **kwargs) 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 
17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 except (ProtocolError, OSError) as err: 17:06:34 raise ConnectionError(err, request=request) 17:06:34 17:06:34 except MaxRetryError as e: 17:06:34 if isinstance(e.reason, ConnectTimeoutError): 17:06:34 # TODO: Remove this in 3.0.0: see #2811 17:06:34 if not isinstance(e.reason, NewConnectionError): 17:06:34 raise ConnectTimeout(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, ResponseError): 17:06:34 raise RetryError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _ProxyError): 17:06:34 raise ProxyError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _SSLError): 17:06:34 # This branch is for urllib3 v1.22 and later. 
17:06:34 raise SSLError(e, request=request) 17:06:34 17:06:34 > raise ConnectionError(e, request=request) 17:06:34 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_18_xpdr_device_not_connected 17:06:34 _______ TransportPCEPortMappingTesting.test_19_rdm_device_disconnection ________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 > sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:06:34 raise err 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 address = ('localhost', 8182), timeout = 10, source_address = None 17:06:34 socket_options = [(6, 1, 1)] 17:06:34 17:06:34 def create_connection( 17:06:34 address: tuple[str, int], 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 source_address: tuple[str, int] | None = None, 17:06:34 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:06:34 ) -> socket.socket: 17:06:34 """Connect to *address* and return the socket object. 17:06:34 17:06:34 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:06:34 port)``) and return the socket object. Passing the optional 17:06:34 *timeout* parameter will set the timeout on the socket instance 17:06:34 before attempting to connect. If no *timeout* is supplied, the 17:06:34 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:06:34 is used. If *source_address* is set it must be a tuple of (host, port) 17:06:34 for the socket to bind as a source address before making the connection. 17:06:34 An host of '' or port 0 tells the OS to use the default. 17:06:34 """ 17:06:34 17:06:34 host, port = address 17:06:34 if host.startswith("["): 17:06:34 host = host.strip("[]") 17:06:34 err = None 17:06:34 17:06:34 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:06:34 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:06:34 # The original create_connection function always returns all records. 
17:06:34 family = allowed_gai_family() 17:06:34 17:06:34 try: 17:06:34 host.encode("idna") 17:06:34 except UnicodeError: 17:06:34 raise LocationParseError(f"'{host}', label empty or too long") from None 17:06:34 17:06:34 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:06:34 af, socktype, proto, canonname, sa = res 17:06:34 sock = None 17:06:34 try: 17:06:34 sock = socket.socket(af, socktype, proto) 17:06:34 17:06:34 # If provided, set socket level options before connecting. 17:06:34 _set_socket_options(sock, socket_options) 17:06:34 17:06:34 if timeout is not _DEFAULT_TIMEOUT: 17:06:34 sock.settimeout(timeout) 17:06:34 if source_address: 17:06:34 sock.bind(source_address) 17:06:34 > sock.connect(sa) 17:06:34 E ConnectionRefusedError: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 method = 'DELETE' 17:06:34 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 17:06:34 body = None 17:06:34 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:06:34 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 redirect = False, assert_same_host = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:06:34 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:06:34 decode_content = False, response_kw = {} 17:06:34 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) 17:06:34 destination_scheme = None, conn = None, release_this_conn = True 17:06:34 http_tunnel_required = False, err = None, clean_exit = False 17:06:34 17:06:34 def urlopen( # type: ignore[override] 17:06:34 self, 17:06:34 method: str, 17:06:34 url: str, 17:06:34 body: _TYPE_BODY | None = None, 17:06:34 headers: typing.Mapping[str, str] | None = None, 17:06:34 retries: Retry | bool | int | None = None, 17:06:34 redirect: bool = True, 17:06:34 assert_same_host: bool = True, 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 pool_timeout: int | None = None, 17:06:34 release_conn: bool | None = None, 17:06:34 chunked: bool = False, 17:06:34 body_pos: _TYPE_BODY_POSITION | None = None, 17:06:34 preload_content: bool = True, 17:06:34 decode_content: bool = True, 17:06:34 **response_kw: typing.Any, 17:06:34 ) -> BaseHTTPResponse: 17:06:34 """ 17:06:34 Get a connection from the pool and perform an HTTP request. This is the 17:06:34 lowest level call for making a request, so you'll need to specify all 17:06:34 the raw details. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 More commonly, it's appropriate to use a convenience method 17:06:34 such as :meth:`request`. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 `release_conn` will only behave as expected if 17:06:34 `preload_content=False` because we want to make 17:06:34 `preload_content=False` the default behaviour someday soon without 17:06:34 breaking backwards compatibility. 17:06:34 17:06:34 :param method: 17:06:34 HTTP request method (such as GET, POST, PUT, etc.) 
17:06:34 17:06:34 :param url: 17:06:34 The URL to perform the request on. 17:06:34 17:06:34 :param body: 17:06:34 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:06:34 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:06:34 17:06:34 :param headers: 17:06:34 Dictionary of custom headers to send, such as User-Agent, 17:06:34 If-None-Match, etc. If None, pool headers are used. If provided, 17:06:34 these headers completely replace any pool-specific headers. 17:06:34 17:06:34 :param retries: 17:06:34 Configure the number of retries to allow before raising a 17:06:34 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:06:34 17:06:34 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:06:34 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:06:34 over different types of retries. 17:06:34 Pass an integer number to retry connection errors that many times, 17:06:34 but no other types of errors. Pass zero to never retry. 17:06:34 17:06:34 If ``False``, then retries are disabled and any exception is raised 17:06:34 immediately. Also, instead of raising a MaxRetryError on redirects, 17:06:34 the redirect response will be returned. 17:06:34 17:06:34 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:06:34 17:06:34 :param redirect: 17:06:34 If True, automatically handle redirects (status codes 301, 302, 17:06:34 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:06:34 will disable redirect, too. 17:06:34 17:06:34 :param assert_same_host: 17:06:34 If ``True``, will make sure that the host of the pool requests is 17:06:34 consistent else will raise HostChangedError. When ``False``, you can 17:06:34 use the pool on an HTTP proxy and request foreign hosts. 17:06:34 17:06:34 :param timeout: 17:06:34 If specified, overrides the default timeout for this one 17:06:34 request. It may be a float (in seconds) or an instance of 17:06:34 :class:`urllib3.util.Timeout`. 17:06:34 17:06:34 :param pool_timeout: 17:06:34 If set and the pool is set to block=True, then this method will 17:06:34 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:06:34 connection is available within the time period. 17:06:34 17:06:34 :param bool preload_content: 17:06:34 If True, the response's body will be preloaded into memory. 17:06:34 17:06:34 :param bool decode_content: 17:06:34 If True, will attempt to decode the body based on the 17:06:34 'content-encoding' header. 17:06:34 17:06:34 :param release_conn: 17:06:34 If False, then the urlopen call will not release the connection 17:06:34 back into the pool once a response is received (but will release if 17:06:34 you read the entire contents of the response such as when 17:06:34 `preload_content=True`). This is useful if you're not preloading 17:06:34 the response's content immediately. You will need to call 17:06:34 ``r.release_conn()`` on the response ``r`` to return the connection 17:06:34 back into the pool. If None, it takes the value of ``preload_content`` 17:06:34 which defaults to ``True``. 17:06:34 17:06:34 :param bool chunked: 17:06:34 If True, urllib3 will send the body using chunked transfer 17:06:34 encoding. Otherwise, urllib3 will send the body using the standard 17:06:34 content-length form. Defaults to False. 17:06:34 17:06:34 :param int body_pos: 17:06:34 Position to seek to in file-like body in the event of a retry or 17:06:34 redirect. 
Typically this won't need to be set because urllib3 will 17:06:34 auto-populate the value when needed. 17:06:34 """ 17:06:34 parsed_url = parse_url(url) 17:06:34 destination_scheme = parsed_url.scheme 17:06:34 17:06:34 if headers is None: 17:06:34 headers = self.headers 17:06:34 17:06:34 if not isinstance(retries, Retry): 17:06:34 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:06:34 17:06:34 if release_conn is None: 17:06:34 release_conn = preload_content 17:06:34 17:06:34 # Check host 17:06:34 if assert_same_host and not self.is_same_host(url): 17:06:34 raise HostChangedError(self, url, retries) 17:06:34 17:06:34 # Ensure that the URL we're connecting to is properly encoded 17:06:34 if url.startswith("/"): 17:06:34 url = to_str(_encode_target(url)) 17:06:34 else: 17:06:34 url = to_str(parsed_url.url) 17:06:34 17:06:34 conn = None 17:06:34 17:06:34 # Track whether `conn` needs to be released before 17:06:34 # returning/raising/recursing. Update this variable if necessary, and 17:06:34 # leave `release_conn` constant throughout the function. That way, if 17:06:34 # the function recurses, the original value of `release_conn` will be 17:06:34 # passed down into the recursive call, and its value will be respected. 17:06:34 # 17:06:34 # See issue #651 [1] for details. 17:06:34 # 17:06:34 # [1] 17:06:34 release_this_conn = release_conn 17:06:34 17:06:34 http_tunnel_required = connection_requires_http_tunnel( 17:06:34 self.proxy, self.proxy_config, destination_scheme 17:06:34 ) 17:06:34 17:06:34 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:06:34 # have to copy the headers dict so we can safely change it without those 17:06:34 # changes being reflected in anyone else's copy. 17:06:34 if not http_tunnel_required: 17:06:34 headers = headers.copy() # type: ignore[attr-defined] 17:06:34 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:06:34 17:06:34 # Must keep the exception bound to a separate variable or else Python 3 17:06:34 # complains about UnboundLocalError. 17:06:34 err = None 17:06:34 17:06:34 # Keep track of whether we cleanly exited the except block. This 17:06:34 # ensures we do proper cleanup in finally. 17:06:34 clean_exit = False 17:06:34 17:06:34 # Rewind body position, if needed. Record current position 17:06:34 # for future rewinds in the event of a redirect/retry. 17:06:34 body_pos = set_file_position(body, body_pos) 17:06:34 17:06:34 try: 17:06:34 # Request a connection from the queue. 17:06:34 timeout_obj = self._get_timeout(timeout) 17:06:34 conn = self._get_conn(timeout=pool_timeout) 17:06:34 17:06:34 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:06:34 17:06:34 # Is this a closed/new connection that requires CONNECT tunnelling? 17:06:34 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:06:34 try: 17:06:34 self._prepare_proxy(conn) 17:06:34 except (BaseSSLError, OSError, SocketTimeout) as e: 17:06:34 self._raise_timeout( 17:06:34 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:06:34 ) 17:06:34 raise 17:06:34 17:06:34 # If we're going to release the connection in ``finally:``, then 17:06:34 # the response doesn't need to know about the connection. Otherwise 17:06:34 # it will also try to release it and we'll have a double-release 17:06:34 # mess. 
17:06:34 response_conn = conn if not release_conn else None 17:06:34 17:06:34 # Make the request on the HTTPConnection object 17:06:34 > response = self._make_request( 17:06:34 conn, 17:06:34 method, 17:06:34 url, 17:06:34 timeout=timeout_obj, 17:06:34 body=body, 17:06:34 headers=headers, 17:06:34 chunked=chunked, 17:06:34 retries=retries, 17:06:34 response_conn=response_conn, 17:06:34 preload_content=preload_content, 17:06:34 decode_content=decode_content, 17:06:34 **response_kw, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:06:34 conn.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:06:34 self.endheaders() 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:06:34 self._send_output(message_body, encode_chunked=encode_chunked) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:06:34 self.send(msg) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:06:34 self.connect() 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:06:34 self.sock = self._new_conn() 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 except socket.gaierror as e: 17:06:34 raise NameResolutionError(self.host, self, e) from e 17:06:34 except SocketTimeout as e: 17:06:34 raise ConnectTimeoutError( 17:06:34 self, 17:06:34 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:06:34 ) from e 17:06:34 17:06:34 except OSError as e: 17:06:34 > raise NewConnectionError( 17:06:34 self, f"Failed to establish a new connection: {e}" 17:06:34 ) from e 17:06:34 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 
17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 > resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:06:34 retries = retries.increment( 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 method = 'DELETE' 17:06:34 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 17:06:34 response = None 17:06:34 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:06:34 _pool = 17:06:34 _stacktrace = 17:06:34 17:06:34 def increment( 17:06:34 self, 17:06:34 method: str | None = None, 17:06:34 url: str | None = None, 17:06:34 response: BaseHTTPResponse | None = None, 17:06:34 error: Exception | None = None, 17:06:34 _pool: ConnectionPool | None = None, 17:06:34 _stacktrace: TracebackType | None = None, 17:06:34 ) -> Self: 17:06:34 """Return a new Retry object with incremented retry counters. 17:06:34 17:06:34 :param response: A response object, or None, if the server did not 17:06:34 return a response. 17:06:34 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:06:34 :param Exception error: An error encountered during the request, or 17:06:34 None if the response was received successfully. 
17:06:34 17:06:34 :return: A new ``Retry`` object. 17:06:34 """ 17:06:34 if self.total is False and error: 17:06:34 # Disabled, indicate to re-raise the error. 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 17:06:34 total = self.total 17:06:34 if total is not None: 17:06:34 total -= 1 17:06:34 17:06:34 connect = self.connect 17:06:34 read = self.read 17:06:34 redirect = self.redirect 17:06:34 status_count = self.status 17:06:34 other = self.other 17:06:34 cause = "unknown" 17:06:34 status = None 17:06:34 redirect_location = None 17:06:34 17:06:34 if error and self._is_connection_error(error): 17:06:34 # Connect retry? 17:06:34 if connect is False: 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif connect is not None: 17:06:34 connect -= 1 17:06:34 17:06:34 elif error and self._is_read_error(error): 17:06:34 # Read retry? 17:06:34 if read is False or method is None or not self._is_method_retryable(method): 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif read is not None: 17:06:34 read -= 1 17:06:34 17:06:34 elif error: 17:06:34 # Other retry? 17:06:34 if other is not None: 17:06:34 other -= 1 17:06:34 17:06:34 elif response and response.get_redirect_location(): 17:06:34 # Redirect retry? 17:06:34 if redirect is not None: 17:06:34 redirect -= 1 17:06:34 cause = "too many redirects" 17:06:34 response_redirect_location = response.get_redirect_location() 17:06:34 if response_redirect_location: 17:06:34 redirect_location = response_redirect_location 17:06:34 status = response.status 17:06:34 17:06:34 else: 17:06:34 # Incrementing because of a server error like a 500 in 17:06:34 # status_forcelist and the given method is in the allowed_methods 17:06:34 cause = ResponseError.GENERIC_ERROR 17:06:34 if response and response.status: 17:06:34 if status_count is not None: 17:06:34 status_count -= 1 17:06:34 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:06:34 status = response.status 17:06:34 17:06:34 history = self.history + ( 17:06:34 RequestHistory(method, url, error, status, redirect_location), 17:06:34 ) 17:06:34 17:06:34 new_retry = self.new( 17:06:34 total=total, 17:06:34 connect=connect, 17:06:34 read=read, 17:06:34 redirect=redirect, 17:06:34 status=status_count, 17:06:34 other=other, 17:06:34 history=history, 17:06:34 ) 17:06:34 17:06:34 if new_retry.is_exhausted(): 17:06:34 reason = error or ResponseError(cause) 17:06:34 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:06:34 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:06:34 17:06:34 During handling of the above exception, another exception occurred: 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_19_rdm_device_disconnection(self): 17:06:34 > response = test_utils.unmount_device("ROADMA01") 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:211: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 transportpce_tests/common/test_utils.py:358: in unmount_device 17:06:34 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 17:06:34 transportpce_tests/common/test_utils.py:133: in 
delete_request 17:06:34 return requests.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:06:34 return session.request(method=method, url=url, **kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:06:34 resp = self.send(prep, **send_kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:06:34 r = adapter.send(request, **kwargs) 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 
17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 except (ProtocolError, OSError) as err: 17:06:34 raise ConnectionError(err, request=request) 17:06:34 17:06:34 except MaxRetryError as e: 17:06:34 if isinstance(e.reason, ConnectTimeoutError): 17:06:34 # TODO: Remove this in 3.0.0: see #2811 17:06:34 if not isinstance(e.reason, NewConnectionError): 17:06:34 raise ConnectTimeout(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, ResponseError): 17:06:34 raise RetryError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _ProxyError): 17:06:34 raise ProxyError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _SSLError): 17:06:34 # This branch is for urllib3 v1.22 and later. 17:06:34 raise SSLError(e, request=request) 17:06:34 17:06:34 > raise ConnectionError(e, request=request) 17:06:34 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_19_rdm_device_disconnection 17:06:34 ________ TransportPCEPortMappingTesting.test_20_rdm_device_disconnected ________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 > sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:06:34 raise err 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 address = ('localhost', 8182), timeout = 10, source_address = None 17:06:34 socket_options = [(6, 1, 1)] 17:06:34 17:06:34 def create_connection( 17:06:34 address: tuple[str, int], 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 source_address: tuple[str, int] | None = None, 17:06:34 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:06:34 ) -> socket.socket: 17:06:34 """Connect to *address* and return the socket object. 17:06:34 17:06:34 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:06:34 port)``) and return the socket object. Passing the optional 17:06:34 *timeout* parameter will set the timeout on the socket instance 17:06:34 before attempting to connect. 
If no *timeout* is supplied, the 17:06:34 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:06:34 is used. If *source_address* is set it must be a tuple of (host, port) 17:06:34 for the socket to bind as a source address before making the connection. 17:06:34 An host of '' or port 0 tells the OS to use the default. 17:06:34 """ 17:06:34 17:06:34 host, port = address 17:06:34 if host.startswith("["): 17:06:34 host = host.strip("[]") 17:06:34 err = None 17:06:34 17:06:34 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:06:34 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:06:34 # The original create_connection function always returns all records. 17:06:34 family = allowed_gai_family() 17:06:34 17:06:34 try: 17:06:34 host.encode("idna") 17:06:34 except UnicodeError: 17:06:34 raise LocationParseError(f"'{host}', label empty or too long") from None 17:06:34 17:06:34 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:06:34 af, socktype, proto, canonname, sa = res 17:06:34 sock = None 17:06:34 try: 17:06:34 sock = socket.socket(af, socktype, proto) 17:06:34 17:06:34 # If provided, set socket level options before connecting. 17:06:34 _set_socket_options(sock, socket_options) 17:06:34 17:06:34 if timeout is not _DEFAULT_TIMEOUT: 17:06:34 sock.settimeout(timeout) 17:06:34 if source_address: 17:06:34 sock.bind(source_address) 17:06:34 > sock.connect(sa) 17:06:34 E ConnectionRefusedError: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 17:06:34 body = None 17:06:34 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:06:34 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 redirect = False, assert_same_host = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:06:34 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:06:34 decode_content = False, response_kw = {} 17:06:34 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) 17:06:34 destination_scheme = None, conn = None, release_this_conn = True 17:06:34 http_tunnel_required = False, err = None, clean_exit = False 17:06:34 17:06:34 def urlopen( # type: ignore[override] 17:06:34 self, 17:06:34 method: str, 17:06:34 url: str, 17:06:34 body: _TYPE_BODY | None = None, 17:06:34 headers: typing.Mapping[str, str] | None = None, 17:06:34 retries: Retry | bool | int | None = None, 17:06:34 redirect: bool = True, 17:06:34 assert_same_host: bool = True, 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 pool_timeout: int | None = None, 17:06:34 release_conn: bool | None = None, 17:06:34 chunked: bool = False, 17:06:34 body_pos: _TYPE_BODY_POSITION | None = None, 17:06:34 preload_content: bool = True, 17:06:34 decode_content: bool = True, 17:06:34 **response_kw: 
typing.Any, 17:06:34 ) -> BaseHTTPResponse: 17:06:34 """ 17:06:34 Get a connection from the pool and perform an HTTP request. This is the 17:06:34 lowest level call for making a request, so you'll need to specify all 17:06:34 the raw details. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 More commonly, it's appropriate to use a convenience method 17:06:34 such as :meth:`request`. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 `release_conn` will only behave as expected if 17:06:34 `preload_content=False` because we want to make 17:06:34 `preload_content=False` the default behaviour someday soon without 17:06:34 breaking backwards compatibility. 17:06:34 17:06:34 :param method: 17:06:34 HTTP request method (such as GET, POST, PUT, etc.) 17:06:34 17:06:34 :param url: 17:06:34 The URL to perform the request on. 17:06:34 17:06:34 :param body: 17:06:34 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:06:34 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:06:34 17:06:34 :param headers: 17:06:34 Dictionary of custom headers to send, such as User-Agent, 17:06:34 If-None-Match, etc. If None, pool headers are used. If provided, 17:06:34 these headers completely replace any pool-specific headers. 17:06:34 17:06:34 :param retries: 17:06:34 Configure the number of retries to allow before raising a 17:06:34 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:06:34 17:06:34 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:06:34 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:06:34 over different types of retries. 17:06:34 Pass an integer number to retry connection errors that many times, 17:06:34 but no other types of errors. Pass zero to never retry. 17:06:34 17:06:34 If ``False``, then retries are disabled and any exception is raised 17:06:34 immediately. Also, instead of raising a MaxRetryError on redirects, 17:06:34 the redirect response will be returned. 17:06:34 17:06:34 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:06:34 17:06:34 :param redirect: 17:06:34 If True, automatically handle redirects (status codes 301, 302, 17:06:34 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:06:34 will disable redirect, too. 17:06:34 17:06:34 :param assert_same_host: 17:06:34 If ``True``, will make sure that the host of the pool requests is 17:06:34 consistent else will raise HostChangedError. When ``False``, you can 17:06:34 use the pool on an HTTP proxy and request foreign hosts. 17:06:34 17:06:34 :param timeout: 17:06:34 If specified, overrides the default timeout for this one 17:06:34 request. It may be a float (in seconds) or an instance of 17:06:34 :class:`urllib3.util.Timeout`. 17:06:34 17:06:34 :param pool_timeout: 17:06:34 If set and the pool is set to block=True, then this method will 17:06:34 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:06:34 connection is available within the time period. 17:06:34 17:06:34 :param bool preload_content: 17:06:34 If True, the response's body will be preloaded into memory. 17:06:34 17:06:34 :param bool decode_content: 17:06:34 If True, will attempt to decode the body based on the 17:06:34 'content-encoding' header. 17:06:34 17:06:34 :param release_conn: 17:06:34 If False, then the urlopen call will not release the connection 17:06:34 back into the pool once a response is received (but will release if 17:06:34 you read the entire contents of the response such as when 17:06:34 `preload_content=True`). 
This is useful if you're not preloading 17:06:34 the response's content immediately. You will need to call 17:06:34 ``r.release_conn()`` on the response ``r`` to return the connection 17:06:34 back into the pool. If None, it takes the value of ``preload_content`` 17:06:34 which defaults to ``True``. 17:06:34 17:06:34 :param bool chunked: 17:06:34 If True, urllib3 will send the body using chunked transfer 17:06:34 encoding. Otherwise, urllib3 will send the body using the standard 17:06:34 content-length form. Defaults to False. 17:06:34 17:06:34 :param int body_pos: 17:06:34 Position to seek to in file-like body in the event of a retry or 17:06:34 redirect. Typically this won't need to be set because urllib3 will 17:06:34 auto-populate the value when needed. 17:06:34 """ 17:06:34 parsed_url = parse_url(url) 17:06:34 destination_scheme = parsed_url.scheme 17:06:34 17:06:34 if headers is None: 17:06:34 headers = self.headers 17:06:34 17:06:34 if not isinstance(retries, Retry): 17:06:34 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:06:34 17:06:34 if release_conn is None: 17:06:34 release_conn = preload_content 17:06:34 17:06:34 # Check host 17:06:34 if assert_same_host and not self.is_same_host(url): 17:06:34 raise HostChangedError(self, url, retries) 17:06:34 17:06:34 # Ensure that the URL we're connecting to is properly encoded 17:06:34 if url.startswith("/"): 17:06:34 url = to_str(_encode_target(url)) 17:06:34 else: 17:06:34 url = to_str(parsed_url.url) 17:06:34 17:06:34 conn = None 17:06:34 17:06:34 # Track whether `conn` needs to be released before 17:06:34 # returning/raising/recursing. Update this variable if necessary, and 17:06:34 # leave `release_conn` constant throughout the function. That way, if 17:06:34 # the function recurses, the original value of `release_conn` will be 17:06:34 # passed down into the recursive call, and its value will be respected. 17:06:34 # 17:06:34 # See issue #651 [1] for details. 17:06:34 # 17:06:34 # [1] 17:06:34 release_this_conn = release_conn 17:06:34 17:06:34 http_tunnel_required = connection_requires_http_tunnel( 17:06:34 self.proxy, self.proxy_config, destination_scheme 17:06:34 ) 17:06:34 17:06:34 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:06:34 # have to copy the headers dict so we can safely change it without those 17:06:34 # changes being reflected in anyone else's copy. 17:06:34 if not http_tunnel_required: 17:06:34 headers = headers.copy() # type: ignore[attr-defined] 17:06:34 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:06:34 17:06:34 # Must keep the exception bound to a separate variable or else Python 3 17:06:34 # complains about UnboundLocalError. 17:06:34 err = None 17:06:34 17:06:34 # Keep track of whether we cleanly exited the except block. This 17:06:34 # ensures we do proper cleanup in finally. 17:06:34 clean_exit = False 17:06:34 17:06:34 # Rewind body position, if needed. Record current position 17:06:34 # for future rewinds in the event of a redirect/retry. 17:06:34 body_pos = set_file_position(body, body_pos) 17:06:34 17:06:34 try: 17:06:34 # Request a connection from the queue. 17:06:34 timeout_obj = self._get_timeout(timeout) 17:06:34 conn = self._get_conn(timeout=pool_timeout) 17:06:34 17:06:34 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:06:34 17:06:34 # Is this a closed/new connection that requires CONNECT tunnelling? 
17:06:34 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:06:34 try: 17:06:34 self._prepare_proxy(conn) 17:06:34 except (BaseSSLError, OSError, SocketTimeout) as e: 17:06:34 self._raise_timeout( 17:06:34 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:06:34 ) 17:06:34 raise 17:06:34 17:06:34 # If we're going to release the connection in ``finally:``, then 17:06:34 # the response doesn't need to know about the connection. Otherwise 17:06:34 # it will also try to release it and we'll have a double-release 17:06:34 # mess. 17:06:34 response_conn = conn if not release_conn else None 17:06:34 17:06:34 # Make the request on the HTTPConnection object 17:06:34 > response = self._make_request( 17:06:34 conn, 17:06:34 method, 17:06:34 url, 17:06:34 timeout=timeout_obj, 17:06:34 body=body, 17:06:34 headers=headers, 17:06:34 chunked=chunked, 17:06:34 retries=retries, 17:06:34 response_conn=response_conn, 17:06:34 preload_content=preload_content, 17:06:34 decode_content=decode_content, 17:06:34 **response_kw, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:06:34 conn.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:06:34 self.endheaders() 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:06:34 self._send_output(message_body, encode_chunked=encode_chunked) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:06:34 self.send(msg) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:06:34 self.connect() 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:06:34 self.sock = self._new_conn() 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 except socket.gaierror as e: 17:06:34 raise NameResolutionError(self.host, self, e) from e 17:06:34 except SocketTimeout as e: 17:06:34 raise ConnectTimeoutError( 17:06:34 self, 17:06:34 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:06:34 ) from e 17:06:34 17:06:34 except OSError as e: 17:06:34 > raise NewConnectionError( 17:06:34 self, f"Failed to establish a new connection: {e}" 17:06:34 ) from e 17:06:34 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 
17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 > resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:06:34 retries = retries.increment( 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 17:06:34 response = None 17:06:34 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:06:34 _pool = 17:06:34 _stacktrace = 17:06:34 17:06:34 def increment( 17:06:34 self, 17:06:34 method: str | None = None, 17:06:34 url: str | None = None, 17:06:34 response: BaseHTTPResponse | None = None, 17:06:34 error: Exception | None = None, 17:06:34 _pool: ConnectionPool | None = None, 17:06:34 _stacktrace: TracebackType | None = None, 17:06:34 ) -> Self: 17:06:34 """Return a new Retry object with incremented retry counters. 17:06:34 17:06:34 :param response: A response object, or None, if the server did not 17:06:34 return a response. 17:06:34 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:06:34 :param Exception error: An error encountered during the request, or 17:06:34 None if the response was received successfully. 17:06:34 17:06:34 :return: A new ``Retry`` object. 17:06:34 """ 17:06:34 if self.total is False and error: 17:06:34 # Disabled, indicate to re-raise the error. 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 17:06:34 total = self.total 17:06:34 if total is not None: 17:06:34 total -= 1 17:06:34 17:06:34 connect = self.connect 17:06:34 read = self.read 17:06:34 redirect = self.redirect 17:06:34 status_count = self.status 17:06:34 other = self.other 17:06:34 cause = "unknown" 17:06:34 status = None 17:06:34 redirect_location = None 17:06:34 17:06:34 if error and self._is_connection_error(error): 17:06:34 # Connect retry? 17:06:34 if connect is False: 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif connect is not None: 17:06:34 connect -= 1 17:06:34 17:06:34 elif error and self._is_read_error(error): 17:06:34 # Read retry? 17:06:34 if read is False or method is None or not self._is_method_retryable(method): 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif read is not None: 17:06:34 read -= 1 17:06:34 17:06:34 elif error: 17:06:34 # Other retry? 17:06:34 if other is not None: 17:06:34 other -= 1 17:06:34 17:06:34 elif response and response.get_redirect_location(): 17:06:34 # Redirect retry? 
17:06:34 if redirect is not None: 17:06:34 redirect -= 1 17:06:34 cause = "too many redirects" 17:06:34 response_redirect_location = response.get_redirect_location() 17:06:34 if response_redirect_location: 17:06:34 redirect_location = response_redirect_location 17:06:34 status = response.status 17:06:34 17:06:34 else: 17:06:34 # Incrementing because of a server error like a 500 in 17:06:34 # status_forcelist and the given method is in the allowed_methods 17:06:34 cause = ResponseError.GENERIC_ERROR 17:06:34 if response and response.status: 17:06:34 if status_count is not None: 17:06:34 status_count -= 1 17:06:34 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:06:34 status = response.status 17:06:34 17:06:34 history = self.history + ( 17:06:34 RequestHistory(method, url, error, status, redirect_location), 17:06:34 ) 17:06:34 17:06:34 new_retry = self.new( 17:06:34 total=total, 17:06:34 connect=connect, 17:06:34 read=read, 17:06:34 redirect=redirect, 17:06:34 status=status_count, 17:06:34 other=other, 17:06:34 history=history, 17:06:34 ) 17:06:34 17:06:34 if new_retry.is_exhausted(): 17:06:34 reason = error or ResponseError(cause) 17:06:34 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:06:34 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:06:34 17:06:34 During handling of the above exception, another exception occurred: 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_20_rdm_device_disconnected(self): 17:06:34 > response = test_utils.check_device_connection("ROADMA01") 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:215: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 transportpce_tests/common/test_utils.py:369: in check_device_connection 17:06:34 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 17:06:34 transportpce_tests/common/test_utils.py:116: in get_request 17:06:34 return requests.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:06:34 return session.request(method=method, url=url, **kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:06:34 resp = self.send(prep, **send_kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:06:34 r = adapter.send(request, **kwargs) 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 
17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 except (ProtocolError, OSError) as err: 17:06:34 raise ConnectionError(err, request=request) 17:06:34 17:06:34 except MaxRetryError as e: 17:06:34 if isinstance(e.reason, ConnectTimeoutError): 17:06:34 # TODO: Remove this in 3.0.0: see #2811 17:06:34 if not isinstance(e.reason, NewConnectionError): 17:06:34 raise ConnectTimeout(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, ResponseError): 17:06:34 raise RetryError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _ProxyError): 17:06:34 raise ProxyError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _SSLError): 17:06:34 # This branch is for urllib3 v1.22 and later. 
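Note: self.max_retries, passed to conn.urlopen() above as retries, is configured on the HTTPAdapter. A sketch of mounting an adapter with an explicit urllib3 Retry policy on a requests Session; the retry values and URL are examples only, not what the test harness configures:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Retry connection errors a few times with back-off, instead of the
    # Retry(total=0, ...) policy visible in the traceback above.
    retry_policy = Retry(total=3, connect=3, read=0, backoff_factor=0.5)
    session.mount("http://", HTTPAdapter(max_retries=retry_policy))

    try:
        session.get("http://localhost:8182/rests/data/transportpce-portmapping:network", timeout=10)
    except requests.exceptions.ConnectionError as exc:
        print("still unreachable after retries:", exc)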
17:06:34 raise SSLError(e, request=request) 17:06:34 17:06:34 > raise ConnectionError(e, request=request) 17:06:34 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_20_rdm_device_disconnected 17:06:34 _______ TransportPCEPortMappingTesting.test_21_rdm_device_not_connected ________ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 > sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:06:34 raise err 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 address = ('localhost', 8182), timeout = 10, source_address = None 17:06:34 socket_options = [(6, 1, 1)] 17:06:34 17:06:34 def create_connection( 17:06:34 address: tuple[str, int], 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 source_address: tuple[str, int] | None = None, 17:06:34 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:06:34 ) -> socket.socket: 17:06:34 """Connect to *address* and return the socket object. 17:06:34 17:06:34 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:06:34 port)``) and return the socket object. Passing the optional 17:06:34 *timeout* parameter will set the timeout on the socket instance 17:06:34 before attempting to connect. If no *timeout* is supplied, the 17:06:34 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:06:34 is used. If *source_address* is set it must be a tuple of (host, port) 17:06:34 for the socket to bind as a source address before making the connection. 17:06:34 An host of '' or port 0 tells the OS to use the default. 17:06:34 """ 17:06:34 17:06:34 host, port = address 17:06:34 if host.startswith("["): 17:06:34 host = host.strip("[]") 17:06:34 err = None 17:06:34 17:06:34 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:06:34 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:06:34 # The original create_connection function always returns all records. 
17:06:34 family = allowed_gai_family() 17:06:34 17:06:34 try: 17:06:34 host.encode("idna") 17:06:34 except UnicodeError: 17:06:34 raise LocationParseError(f"'{host}', label empty or too long") from None 17:06:34 17:06:34 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:06:34 af, socktype, proto, canonname, sa = res 17:06:34 sock = None 17:06:34 try: 17:06:34 sock = socket.socket(af, socktype, proto) 17:06:34 17:06:34 # If provided, set socket level options before connecting. 17:06:34 _set_socket_options(sock, socket_options) 17:06:34 17:06:34 if timeout is not _DEFAULT_TIMEOUT: 17:06:34 sock.settimeout(timeout) 17:06:34 if source_address: 17:06:34 sock.bind(source_address) 17:06:34 > sock.connect(sa) 17:06:34 E ConnectionRefusedError: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 17:06:34 body = None 17:06:34 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:06:34 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 redirect = False, assert_same_host = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:06:34 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:06:34 decode_content = False, response_kw = {} 17:06:34 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None) 17:06:34 destination_scheme = None, conn = None, release_this_conn = True 17:06:34 http_tunnel_required = False, err = None, clean_exit = False 17:06:34 17:06:34 def urlopen( # type: ignore[override] 17:06:34 self, 17:06:34 method: str, 17:06:34 url: str, 17:06:34 body: _TYPE_BODY | None = None, 17:06:34 headers: typing.Mapping[str, str] | None = None, 17:06:34 retries: Retry | bool | int | None = None, 17:06:34 redirect: bool = True, 17:06:34 assert_same_host: bool = True, 17:06:34 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:06:34 pool_timeout: int | None = None, 17:06:34 release_conn: bool | None = None, 17:06:34 chunked: bool = False, 17:06:34 body_pos: _TYPE_BODY_POSITION | None = None, 17:06:34 preload_content: bool = True, 17:06:34 decode_content: bool = True, 17:06:34 **response_kw: typing.Any, 17:06:34 ) -> BaseHTTPResponse: 17:06:34 """ 17:06:34 Get a connection from the pool and perform an HTTP request. This is the 17:06:34 lowest level call for making a request, so you'll need to specify all 17:06:34 the raw details. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 More commonly, it's appropriate to use a convenience method 17:06:34 such as :meth:`request`. 17:06:34 17:06:34 .. note:: 17:06:34 17:06:34 `release_conn` will only behave as expected if 17:06:34 `preload_content=False` because we want to make 17:06:34 `preload_content=False` the default behaviour someday soon without 17:06:34 breaking backwards compatibility. 17:06:34 17:06:34 :param method: 17:06:34 HTTP request method (such as GET, POST, PUT, etc.) 17:06:34 17:06:34 :param url: 17:06:34 The URL to perform the request on. 
17:06:34 17:06:34 :param body: 17:06:34 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:06:34 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:06:34 17:06:34 :param headers: 17:06:34 Dictionary of custom headers to send, such as User-Agent, 17:06:34 If-None-Match, etc. If None, pool headers are used. If provided, 17:06:34 these headers completely replace any pool-specific headers. 17:06:34 17:06:34 :param retries: 17:06:34 Configure the number of retries to allow before raising a 17:06:34 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:06:34 17:06:34 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:06:34 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:06:34 over different types of retries. 17:06:34 Pass an integer number to retry connection errors that many times, 17:06:34 but no other types of errors. Pass zero to never retry. 17:06:34 17:06:34 If ``False``, then retries are disabled and any exception is raised 17:06:34 immediately. Also, instead of raising a MaxRetryError on redirects, 17:06:34 the redirect response will be returned. 17:06:34 17:06:34 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:06:34 17:06:34 :param redirect: 17:06:34 If True, automatically handle redirects (status codes 301, 302, 17:06:34 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:06:34 will disable redirect, too. 17:06:34 17:06:34 :param assert_same_host: 17:06:34 If ``True``, will make sure that the host of the pool requests is 17:06:34 consistent else will raise HostChangedError. When ``False``, you can 17:06:34 use the pool on an HTTP proxy and request foreign hosts. 17:06:34 17:06:34 :param timeout: 17:06:34 If specified, overrides the default timeout for this one 17:06:34 request. It may be a float (in seconds) or an instance of 17:06:34 :class:`urllib3.util.Timeout`. 17:06:34 17:06:34 :param pool_timeout: 17:06:34 If set and the pool is set to block=True, then this method will 17:06:34 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:06:34 connection is available within the time period. 17:06:34 17:06:34 :param bool preload_content: 17:06:34 If True, the response's body will be preloaded into memory. 17:06:34 17:06:34 :param bool decode_content: 17:06:34 If True, will attempt to decode the body based on the 17:06:34 'content-encoding' header. 17:06:34 17:06:34 :param release_conn: 17:06:34 If False, then the urlopen call will not release the connection 17:06:34 back into the pool once a response is received (but will release if 17:06:34 you read the entire contents of the response such as when 17:06:34 `preload_content=True`). This is useful if you're not preloading 17:06:34 the response's content immediately. You will need to call 17:06:34 ``r.release_conn()`` on the response ``r`` to return the connection 17:06:34 back into the pool. If None, it takes the value of ``preload_content`` 17:06:34 which defaults to ``True``. 17:06:34 17:06:34 :param bool chunked: 17:06:34 If True, urllib3 will send the body using chunked transfer 17:06:34 encoding. Otherwise, urllib3 will send the body using the standard 17:06:34 content-length form. Defaults to False. 17:06:34 17:06:34 :param int body_pos: 17:06:34 Position to seek to in file-like body in the event of a retry or 17:06:34 redirect. Typically this won't need to be set because urllib3 will 17:06:34 auto-populate the value when needed. 
17:06:34 """ 17:06:34 parsed_url = parse_url(url) 17:06:34 destination_scheme = parsed_url.scheme 17:06:34 17:06:34 if headers is None: 17:06:34 headers = self.headers 17:06:34 17:06:34 if not isinstance(retries, Retry): 17:06:34 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:06:34 17:06:34 if release_conn is None: 17:06:34 release_conn = preload_content 17:06:34 17:06:34 # Check host 17:06:34 if assert_same_host and not self.is_same_host(url): 17:06:34 raise HostChangedError(self, url, retries) 17:06:34 17:06:34 # Ensure that the URL we're connecting to is properly encoded 17:06:34 if url.startswith("/"): 17:06:34 url = to_str(_encode_target(url)) 17:06:34 else: 17:06:34 url = to_str(parsed_url.url) 17:06:34 17:06:34 conn = None 17:06:34 17:06:34 # Track whether `conn` needs to be released before 17:06:34 # returning/raising/recursing. Update this variable if necessary, and 17:06:34 # leave `release_conn` constant throughout the function. That way, if 17:06:34 # the function recurses, the original value of `release_conn` will be 17:06:34 # passed down into the recursive call, and its value will be respected. 17:06:34 # 17:06:34 # See issue #651 [1] for details. 17:06:34 # 17:06:34 # [1] 17:06:34 release_this_conn = release_conn 17:06:34 17:06:34 http_tunnel_required = connection_requires_http_tunnel( 17:06:34 self.proxy, self.proxy_config, destination_scheme 17:06:34 ) 17:06:34 17:06:34 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:06:34 # have to copy the headers dict so we can safely change it without those 17:06:34 # changes being reflected in anyone else's copy. 17:06:34 if not http_tunnel_required: 17:06:34 headers = headers.copy() # type: ignore[attr-defined] 17:06:34 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:06:34 17:06:34 # Must keep the exception bound to a separate variable or else Python 3 17:06:34 # complains about UnboundLocalError. 17:06:34 err = None 17:06:34 17:06:34 # Keep track of whether we cleanly exited the except block. This 17:06:34 # ensures we do proper cleanup in finally. 17:06:34 clean_exit = False 17:06:34 17:06:34 # Rewind body position, if needed. Record current position 17:06:34 # for future rewinds in the event of a redirect/retry. 17:06:34 body_pos = set_file_position(body, body_pos) 17:06:34 17:06:34 try: 17:06:34 # Request a connection from the queue. 17:06:34 timeout_obj = self._get_timeout(timeout) 17:06:34 conn = self._get_conn(timeout=pool_timeout) 17:06:34 17:06:34 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:06:34 17:06:34 # Is this a closed/new connection that requires CONNECT tunnelling? 17:06:34 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:06:34 try: 17:06:34 self._prepare_proxy(conn) 17:06:34 except (BaseSSLError, OSError, SocketTimeout) as e: 17:06:34 self._raise_timeout( 17:06:34 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:06:34 ) 17:06:34 raise 17:06:34 17:06:34 # If we're going to release the connection in ``finally:``, then 17:06:34 # the response doesn't need to know about the connection. Otherwise 17:06:34 # it will also try to release it and we'll have a double-release 17:06:34 # mess. 
17:06:34 response_conn = conn if not release_conn else None 17:06:34 17:06:34 # Make the request on the HTTPConnection object 17:06:34 > response = self._make_request( 17:06:34 conn, 17:06:34 method, 17:06:34 url, 17:06:34 timeout=timeout_obj, 17:06:34 body=body, 17:06:34 headers=headers, 17:06:34 chunked=chunked, 17:06:34 retries=retries, 17:06:34 response_conn=response_conn, 17:06:34 preload_content=preload_content, 17:06:34 decode_content=decode_content, 17:06:34 **response_kw, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:06:34 conn.request( 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:06:34 self.endheaders() 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:06:34 self._send_output(message_body, encode_chunked=encode_chunked) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:06:34 self.send(msg) 17:06:34 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:06:34 self.connect() 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:06:34 self.sock = self._new_conn() 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 17:06:34 def _new_conn(self) -> socket.socket: 17:06:34 """Establish a socket connection and set nodelay settings on it. 17:06:34 17:06:34 :return: New socket connection. 17:06:34 """ 17:06:34 try: 17:06:34 sock = connection.create_connection( 17:06:34 (self._dns_host, self.port), 17:06:34 self.timeout, 17:06:34 source_address=self.source_address, 17:06:34 socket_options=self.socket_options, 17:06:34 ) 17:06:34 except socket.gaierror as e: 17:06:34 raise NameResolutionError(self.host, self, e) from e 17:06:34 except SocketTimeout as e: 17:06:34 raise ConnectTimeoutError( 17:06:34 self, 17:06:34 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:06:34 ) from e 17:06:34 17:06:34 except OSError as e: 17:06:34 > raise NewConnectionError( 17:06:34 self, f"Failed to establish a new connection: {e}" 17:06:34 ) from e 17:06:34 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:06:34 17:06:34 The above exception was the direct cause of the following exception: 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 
17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 > resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:06:34 retries = retries.increment( 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:06:34 method = 'GET' 17:06:34 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 17:06:34 response = None 17:06:34 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:06:34 _pool = 17:06:34 _stacktrace = 17:06:34 17:06:34 def increment( 17:06:34 self, 17:06:34 method: str | None = None, 17:06:34 url: str | None = None, 17:06:34 response: BaseHTTPResponse | None = None, 17:06:34 error: Exception | None = None, 17:06:34 _pool: ConnectionPool | None = None, 17:06:34 _stacktrace: TracebackType | None = None, 17:06:34 ) -> Self: 17:06:34 """Return a new Retry object with incremented retry counters. 17:06:34 17:06:34 :param response: A response object, or None, if the server did not 17:06:34 return a response. 17:06:34 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:06:34 :param Exception error: An error encountered during the request, or 17:06:34 None if the response was received successfully. 17:06:34 17:06:34 :return: A new ``Retry`` object. 
17:06:34 """ 17:06:34 if self.total is False and error: 17:06:34 # Disabled, indicate to re-raise the error. 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 17:06:34 total = self.total 17:06:34 if total is not None: 17:06:34 total -= 1 17:06:34 17:06:34 connect = self.connect 17:06:34 read = self.read 17:06:34 redirect = self.redirect 17:06:34 status_count = self.status 17:06:34 other = self.other 17:06:34 cause = "unknown" 17:06:34 status = None 17:06:34 redirect_location = None 17:06:34 17:06:34 if error and self._is_connection_error(error): 17:06:34 # Connect retry? 17:06:34 if connect is False: 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif connect is not None: 17:06:34 connect -= 1 17:06:34 17:06:34 elif error and self._is_read_error(error): 17:06:34 # Read retry? 17:06:34 if read is False or method is None or not self._is_method_retryable(method): 17:06:34 raise reraise(type(error), error, _stacktrace) 17:06:34 elif read is not None: 17:06:34 read -= 1 17:06:34 17:06:34 elif error: 17:06:34 # Other retry? 17:06:34 if other is not None: 17:06:34 other -= 1 17:06:34 17:06:34 elif response and response.get_redirect_location(): 17:06:34 # Redirect retry? 17:06:34 if redirect is not None: 17:06:34 redirect -= 1 17:06:34 cause = "too many redirects" 17:06:34 response_redirect_location = response.get_redirect_location() 17:06:34 if response_redirect_location: 17:06:34 redirect_location = response_redirect_location 17:06:34 status = response.status 17:06:34 17:06:34 else: 17:06:34 # Incrementing because of a server error like a 500 in 17:06:34 # status_forcelist and the given method is in the allowed_methods 17:06:34 cause = ResponseError.GENERIC_ERROR 17:06:34 if response and response.status: 17:06:34 if status_count is not None: 17:06:34 status_count -= 1 17:06:34 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:06:34 status = response.status 17:06:34 17:06:34 history = self.history + ( 17:06:34 RequestHistory(method, url, error, status, redirect_location), 17:06:34 ) 17:06:34 17:06:34 new_retry = self.new( 17:06:34 total=total, 17:06:34 connect=connect, 17:06:34 read=read, 17:06:34 redirect=redirect, 17:06:34 status=status_count, 17:06:34 other=other, 17:06:34 history=history, 17:06:34 ) 17:06:34 17:06:34 if new_retry.is_exhausted(): 17:06:34 reason = error or ResponseError(cause) 17:06:34 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:06:34 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:06:34 17:06:34 During handling of the above exception, another exception occurred: 17:06:34 17:06:34 self = 17:06:34 17:06:34 def test_21_rdm_device_not_connected(self): 17:06:34 > response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None) 17:06:34 17:06:34 transportpce_tests/1.2.1/test01_portmapping.py:223: 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr 17:06:34 response = get_request(target_url) 17:06:34 transportpce_tests/common/test_utils.py:116: in get_request 17:06:34 return requests.request( 17:06:34 
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:06:34 return session.request(method=method, url=url, **kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:06:34 resp = self.send(prep, **send_kwargs) 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:06:34 r = adapter.send(request, **kwargs) 17:06:34 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:06:34 17:06:34 self = 17:06:34 request = , stream = False 17:06:34 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:06:34 proxies = OrderedDict() 17:06:34 17:06:34 def send( 17:06:34 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:06:34 ): 17:06:34 """Sends PreparedRequest object. Returns Response object. 17:06:34 17:06:34 :param request: The :class:`PreparedRequest ` being sent. 17:06:34 :param stream: (optional) Whether to stream the request content. 17:06:34 :param timeout: (optional) How long to wait for the server to send 17:06:34 data before giving up, as a float, or a :ref:`(connect timeout, 17:06:34 read timeout) ` tuple. 17:06:34 :type timeout: float or tuple or urllib3 Timeout object 17:06:34 :param verify: (optional) Either a boolean, in which case it controls whether 17:06:34 we verify the server's TLS certificate, or a string, in which case it 17:06:34 must be a path to a CA bundle to use 17:06:34 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:06:34 :param proxies: (optional) The proxies dictionary to apply to the request. 17:06:34 :rtype: requests.Response 17:06:34 """ 17:06:34 17:06:34 try: 17:06:34 conn = self.get_connection_with_tls_context( 17:06:34 request, verify, proxies=proxies, cert=cert 17:06:34 ) 17:06:34 except LocationValueError as e: 17:06:34 raise InvalidURL(e, request=request) 17:06:34 17:06:34 self.cert_verify(conn, request.url, verify, cert) 17:06:34 url = self.request_url(request, proxies) 17:06:34 self.add_headers( 17:06:34 request, 17:06:34 stream=stream, 17:06:34 timeout=timeout, 17:06:34 verify=verify, 17:06:34 cert=cert, 17:06:34 proxies=proxies, 17:06:34 ) 17:06:34 17:06:34 chunked = not (request.body is None or "Content-Length" in request.headers) 17:06:34 17:06:34 if isinstance(timeout, tuple): 17:06:34 try: 17:06:34 connect, read = timeout 17:06:34 timeout = TimeoutSauce(connect=connect, read=read) 17:06:34 except ValueError: 17:06:34 raise ValueError( 17:06:34 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:06:34 f"or a single float to set both timeouts to the same value." 
17:06:34 ) 17:06:34 elif isinstance(timeout, TimeoutSauce): 17:06:34 pass 17:06:34 else: 17:06:34 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:06:34 17:06:34 try: 17:06:34 resp = conn.urlopen( 17:06:34 method=request.method, 17:06:34 url=url, 17:06:34 body=request.body, 17:06:34 headers=request.headers, 17:06:34 redirect=False, 17:06:34 assert_same_host=False, 17:06:34 preload_content=False, 17:06:34 decode_content=False, 17:06:34 retries=self.max_retries, 17:06:34 timeout=timeout, 17:06:34 chunked=chunked, 17:06:34 ) 17:06:34 17:06:34 except (ProtocolError, OSError) as err: 17:06:34 raise ConnectionError(err, request=request) 17:06:34 17:06:34 except MaxRetryError as e: 17:06:34 if isinstance(e.reason, ConnectTimeoutError): 17:06:34 # TODO: Remove this in 3.0.0: see #2811 17:06:34 if not isinstance(e.reason, NewConnectionError): 17:06:34 raise ConnectTimeout(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, ResponseError): 17:06:34 raise RetryError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _ProxyError): 17:06:34 raise ProxyError(e, request=request) 17:06:34 17:06:34 if isinstance(e.reason, _SSLError): 17:06:34 # This branch is for urllib3 v1.22 and later. 17:06:34 raise SSLError(e, request=request) 17:06:34 17:06:34 > raise ConnectionError(e, request=request) 17:06:34 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:06:34 17:06:34 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:06:34 ----------------------------- Captured stdout call ----------------------------- 17:06:34 execution of test_21_rdm_device_not_connected 17:06:34 --------------------------- Captured stdout teardown --------------------------- 17:06:34 all processes killed 17:06:34 =========================== short test summary info ============================ 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_08_xpdr_device_connected 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_09_xpdr_portmapping_info 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_10_xpdr_portmapping_NETWORK1 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_11_xpdr_portmapping_NETWORK2 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_12_xpdr_portmapping_CLIENT1 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_13_xpdr_portmapping_CLIENT2 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_14_xpdr_portmapping_CLIENT3 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_15_xpdr_portmapping_CLIENT4 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_16_xpdr_device_disconnection 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_17_xpdr_device_disconnected 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_18_xpdr_device_not_connected 17:06:34 FAILED 
transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_19_rdm_device_disconnection 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_20_rdm_device_disconnected 17:06:34 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_21_rdm_device_not_connected 17:06:34 14 failed, 7 passed in 622.68s (0:10:22) 17:06:34 tests121: exit 1 (623.05 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 pid=36181 17:06:53 ...................... [100%] 17:07:40 22 passed in 72.59s (0:01:12) 17:07:41 tests121: FAIL ✖ in 10 minutes 30.22 seconds 17:07:41 tests71: OK ✔ in 7 minutes 4.43 seconds 17:07:41 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 17:07:46 tests221: freeze> python -m pip freeze --all 17:07:47 tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:07:47 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 17:07:47 using environment variables from ./karaf221.env 17:07:47 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 17:08:22 ................................... [100%] 17:09:02 35 passed in 75.01s (0:01:15) 17:09:02 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 17:09:36 .F..F. 
[100%] 17:09:50 =================================== FAILURES =================================== 17:09:50 _____ TransportPCEtesting.test_02_compareOpenroadmTopologyPortMapping_rdm ______ 17:09:50 17:09:50 self = 17:09:50 17:09:50 def test_02_compareOpenroadmTopologyPortMapping_rdm(self): 17:09:50 resTopo = test_utils.get_ietf_network_request('openroadm-topology', 'config') 17:09:50 self.assertEqual(resTopo['status_code'], requests.codes.ok) 17:09:50 nbMapCumul = 0 17:09:50 nbMappings = 0 17:09:50 for node in resTopo['network'][0]['node']: 17:09:50 nodeId = node['node-id'] 17:09:50 # pylint: disable=consider-using-f-string 17:09:50 print("nodeId={}".format(nodeId)) 17:09:50 nodeMapId = nodeId.split("-")[0] + "-" + nodeId.split("-")[1] 17:09:50 print("nodeMapId={}".format(nodeMapId)) 17:09:50 response = test_utils.get_portmapping_node_attr(nodeMapId, "node-info", None) 17:09:50 > self.assertEqual(response['status_code'], requests.codes.ok) 17:09:50 E AssertionError: 409 != 200 17:09:50 17:09:50 transportpce_tests/2.2.1/test02_topo_portmapping.py:64: AssertionError 17:09:50 ----------------------------- Captured stdout call ----------------------------- 17:09:50 nodeId=ROADM-A1-SRG3 17:09:50 nodeMapId=ROADM-A1 17:09:50 nodeId=ROADM-A1-DEG1 17:09:50 nodeMapId=ROADM-A1 17:09:50 nodeId=TAPI-SBI-ABS-NODE 17:09:50 nodeMapId=TAPI-SBI 17:09:50 _____ TransportPCEtesting.test_05_compareOpenroadmTopologyPortMapping_xpdr _____ 17:09:50 17:09:50 self = 17:09:50 17:09:50 def test_05_compareOpenroadmTopologyPortMapping_xpdr(self): 17:09:50 > self.test_02_compareOpenroadmTopologyPortMapping_rdm() 17:09:50 17:09:50 transportpce_tests/2.2.1/test02_topo_portmapping.py:91: 17:09:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:09:50 transportpce_tests/2.2.1/test02_topo_portmapping.py:64: in test_02_compareOpenroadmTopologyPortMapping_rdm 17:09:50 self.assertEqual(response['status_code'], requests.codes.ok) 17:09:50 E AssertionError: 409 != 200 17:09:50 ----------------------------- Captured stdout call ----------------------------- 17:09:50 nodeId=TAPI-SBI-ABS-NODE 17:09:50 nodeMapId=TAPI-SBI 17:09:50 =========================== short test summary info ============================ 17:09:50 FAILED transportpce_tests/2.2.1/test02_topo_portmapping.py::TransportPCEtesting::test_02_compareOpenroadmTopologyPortMapping_rdm 17:09:50 FAILED transportpce_tests/2.2.1/test02_topo_portmapping.py::TransportPCEtesting::test_05_compareOpenroadmTopologyPortMapping_xpdr 17:09:50 2 failed, 4 passed in 48.07s 17:09:50 tests221: exit 1 (123.56 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 pid=40894 17:09:50 tests221: FAIL ✖ in 2 minutes 9.83 seconds 17:09:50 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 17:09:56 tests_hybrid: freeze> python -m pip freeze --all 17:09:57 tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:09:57 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid 
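Context note on the tests121 failures earlier in this run: every request in that suite targets the controller RESTCONF endpoint on localhost:8182, and the repeated "[Errno 111] Connection refused" chain shows nothing was listening there, so urllib3 raises NewConnectionError, the adapter's Retry(total=0) immediately escalates it to MaxRetryError, and requests surfaces it as ConnectionError for each test. A minimal reproduction sketch under that assumption (nothing listening on port 8182); the URL, credentials and timeouts are taken from the log above, everything else is illustrative rather than the project's test code:

import requests

URL = ("http://localhost:8182/rests/data/"
       "transportpce-portmapping:network/nodes=ROADMA01/node-info")

try:
    # roughly the request the test issues via test_utils.get_request()
    response = requests.get(URL, auth=("admin", "admin"), timeout=(10, 10))
    print(response.status_code)
except requests.exceptions.ConnectionError as exc:
    # raised when the controller is not running / not listening on 8182
    print(f"controller unreachable: {exc}")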
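Note on the two tests221 failures above: test_02 derives a portmapping node id by keeping the first two dash-separated tokens of each openroadm-topology node-id (ROADM-A1-SRG3 maps to ROADM-A1), but the newly added abstracted node TAPI-SBI-ABS-NODE maps to TAPI-SBI, which has no portmapping entry, so the node-info request returns 409 instead of 200. A minimal sketch of that derivation with a hypothetical guard (map_topology_node_to_portmapping_id is an illustrative helper, not the project's actual fix):

def map_topology_node_to_portmapping_id(node_id):
    # Same derivation as the failing test: keep the first two dash-separated tokens.
    # Hypothetical guard: the abstracted TAPI node has no portmapping entry,
    # so return None instead of the bogus "TAPI-SBI" id.
    if node_id == "TAPI-SBI-ABS-NODE":
        return None
    return node_id.split("-")[0] + "-" + node_id.split("-")[1]

assert map_topology_node_to_portmapping_id("ROADM-A1-SRG3") == "ROADM-A1"
assert map_topology_node_to_portmapping_id("TAPI-SBI-ABS-NODE") is None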
17:09:57 using environment variables from ./karaf121.env 17:09:57 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 17:10:46 ................................................... [100%] 17:12:34 51 passed in 157.03s (0:02:37) 17:12:34 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 17:13:17 ........................................................................ [ 66%] 17:17:38 ..................................... [100%] 17:19:44 109 passed in 429.60s (0:07:09) 17:19:44 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 17:20:31 ..................................................... [100%] 17:24:03 53 passed in 259.31s (0:04:19) 17:24:04 tests_hybrid: OK ✔ in 14 minutes 13.18 seconds 17:24:04 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 17:24:10 buildlighty: freeze> python -m pip freeze --all 17:24:10 buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:24:10 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh 17:24:10 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED 17:25:04 [ERROR] COMPILATION ERROR : 17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 17:25:04 symbol: class YangModuleInfo 17:25:04 location: package org.opendaylight.yangtools.binding 17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 17:25:04 symbol: class YangModuleInfo 17:25:04 location: class io.lighty.controllers.tpce.utils.TPCEUtils 17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 17:25:04 symbol: class YangModuleInfo 17:25:04 location: class io.lighty.controllers.tpce.utils.TPCEUtils 17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 17:25:04 symbol: class YangModuleInfo 17:25:04 location: class io.lighty.controllers.tpce.utils.TPCEUtils 17:25:04 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure: 17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 17:25:04 [ERROR] symbol: class YangModuleInfo 17:25:04 [ERROR] location: package org.opendaylight.yangtools.binding 17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 17:25:04 [ERROR] symbol: class YangModuleInfo 17:25:04 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 
17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 17:25:04 [ERROR] symbol: class YangModuleInfo 17:25:04 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 17:25:04 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 17:25:04 [ERROR] symbol: class YangModuleInfo 17:25:04 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 17:25:04 [ERROR] -> [Help 1] 17:25:04 [ERROR] 17:25:04 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. 17:25:04 [ERROR] Re-run Maven using the -X switch to enable full debug logging. 17:25:04 [ERROR] 17:25:04 [ERROR] For more information about the errors and possible solutions, please read the following articles: 17:25:04 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException 17:25:04 unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP. 17:25:04 buildlighty: exit 9 (54.03 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh pid=47151 17:25:04 buildlighty: command failed but is marked ignore outcome so handling it as success 17:25:04 buildcontroller: OK (107.19=setup[8.52]+cmd[98.68] seconds) 17:25:04 testsPCE: OK (322.16=setup[79.15]+cmd[243.01] seconds) 17:25:04 sims: OK (11.14=setup[7.82]+cmd[3.31] seconds) 17:25:04 build_karaf_tests121: OK (53.38=setup[8.00]+cmd[45.38] seconds) 17:25:04 tests121: FAIL code 1 (630.22=setup[7.17]+cmd[623.05] seconds) 17:25:04 build_karaf_tests221: OK (52.23=setup[7.80]+cmd[44.43] seconds) 17:25:04 tests_tapi: FAIL code 1 (540.50=setup[6.40]+cmd[534.10] seconds) 17:25:04 tests221: FAIL code 1 (129.83=setup[6.27]+cmd[123.56] seconds) 17:25:04 build_karaf_tests71: OK (63.08=setup[14.09]+cmd[48.99] seconds) 17:25:04 tests71: OK (424.43=setup[17.57]+cmd[406.87] seconds) 17:25:04 build_karaf_tests_hybrid: OK (60.72=setup[7.13]+cmd[53.59] seconds) 17:25:04 tests_hybrid: OK (853.18=setup[6.55]+cmd[846.63] seconds) 17:25:04 buildlighty: OK (60.49=setup[6.46]+cmd[54.03] seconds) 17:25:04 docs: OK (37.88=setup[31.39]+cmd[6.48] seconds) 17:25:04 docs-linkcheck: OK (39.61=setup[30.24]+cmd[9.38] seconds) 17:25:04 checkbashisms: OK (2.96=setup[1.99]+cmd[0.03,0.09,0.85] seconds) 17:25:04 pre-commit: OK (43.88=setup[3.31]+cmd[0.01,0.01,32.69,7.86] seconds) 17:25:04 pylint: OK (25.45=setup[5.67]+cmd[19.79] seconds) 17:25:04 evaluation failed :( (2169.10 seconds) 17:25:04 + tox_status=255 17:25:04 + echo '---> Completed tox runs' 17:25:04 ---> Completed tox runs 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/build_karaf_tests121/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=build_karaf_tests121 17:25:04 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/build_karaf_tests221/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=build_karaf_tests221 17:25:04 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/build_karaf_tests71/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=build_karaf_tests71 17:25:04 + cp -r .tox/build_karaf_tests71/log 
/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/build_karaf_tests_hybrid/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=build_karaf_tests_hybrid 17:25:04 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests_hybrid 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/buildcontroller/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=buildcontroller 17:25:04 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/buildlighty/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=buildlighty 17:25:04 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/checkbashisms/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=checkbashisms 17:25:04 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/docs-linkcheck/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=docs-linkcheck 17:25:04 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/docs/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=docs 17:25:04 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/pre-commit/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=pre-commit 17:25:04 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/pylint/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=pylint 17:25:04 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/sims/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=sims 17:25:04 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/tests121/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=tests121 17:25:04 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/tests221/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=tests221 17:25:04 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/tests71/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=tests71 17:25:04 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/testsPCE/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=testsPCE 17:25:04 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/tests_hybrid/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=tests_hybrid 17:25:04 + cp -r .tox/tests_hybrid/log 
/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid 17:25:04 + for i in .tox/*/log 17:25:04 ++ echo .tox/tests_tapi/log 17:25:04 ++ awk -F/ '{print $2}' 17:25:04 + tox_env=tests_tapi 17:25:04 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi 17:25:04 + DOC_DIR=docs/_build/html 17:25:04 + [[ -d docs/_build/html ]] 17:25:04 + echo '---> Archiving generated docs' 17:25:04 ---> Archiving generated docs 17:25:04 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 17:25:04 + echo '---> tox-run.sh ends' 17:25:04 ---> tox-run.sh ends 17:25:04 + test 255 -eq 0 17:25:04 + exit 255 17:25:04 ++ '[' 1 = 1 ']' 17:25:04 ++ '[' -x /usr/bin/clear_console ']' 17:25:04 ++ /usr/bin/clear_console -q 17:25:04 Build step 'Execute shell' marked build as failure 17:25:04 $ ssh-agent -k 17:25:04 unset SSH_AUTH_SOCK; 17:25:04 unset SSH_AGENT_PID; 17:25:04 echo Agent pid 13155 killed; 17:25:04 [ssh-agent] Stopped. 17:25:04 [PostBuildScript] - [INFO] Executing post build scripts. 17:25:04 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins16683514460229013105.sh 17:25:04 ---> sysstat.sh 17:25:05 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12053656969915364429.sh 17:25:05 ---> package-listing.sh 17:25:05 ++ facter osfamily 17:25:05 ++ tr '[:upper:]' '[:lower:]' 17:25:05 + OS_FAMILY=debian 17:25:05 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master 17:25:05 + START_PACKAGES=/tmp/packages_start.txt 17:25:05 + END_PACKAGES=/tmp/packages_end.txt 17:25:05 + DIFF_PACKAGES=/tmp/packages_diff.txt 17:25:05 + PACKAGES=/tmp/packages_start.txt 17:25:05 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 17:25:05 + PACKAGES=/tmp/packages_end.txt 17:25:05 + case "${OS_FAMILY}" in 17:25:05 + dpkg -l 17:25:05 + grep '^ii' 17:25:05 + '[' -f /tmp/packages_start.txt ']' 17:25:05 + '[' -f /tmp/packages_end.txt ']' 17:25:05 + diff /tmp/packages_start.txt /tmp/packages_end.txt 17:25:05 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 17:25:05 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 17:25:05 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 17:25:05 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins7676820551323403499.sh 17:25:05 ---> capture-instance-metadata.sh 17:25:05 Setup pyenv: 17:25:05 system 17:25:05 3.8.13 17:25:05 3.9.13 17:25:05 3.10.13 17:25:05 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 17:25:05 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-E7Mz from file:/tmp/.os_lf_venv 17:25:06 lf-activate-venv(): INFO: Installing: lftools 17:25:19 lf-activate-venv(): INFO: Adding /tmp/venv-E7Mz/bin to PATH 17:25:19 INFO: Running in OpenStack, capturing instance metadata 17:25:21 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins16828793546693166229.sh 17:25:21 provisioning config files... 
17:25:21 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #2113 17:25:21 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config7462885109414406671tmp 17:25:21 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] 17:25:21 Run condition [Regular expression match] enabling perform for step [Provide Configuration files] 17:25:21 provisioning config files... 17:25:21 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials 17:25:21 [EnvInject] - Injecting environment variables from a build step. 17:25:21 [EnvInject] - Injecting as environment variables the properties content 17:25:21 SERVER_ID=logs 17:25:21 17:25:21 [EnvInject] - Variables injected successfully. 17:25:21 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins15412634342217495542.sh 17:25:21 ---> create-netrc.sh 17:25:21 WARN: Log server credential not found. 17:25:21 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins9702415780449328295.sh 17:25:21 ---> python-tools-install.sh 17:25:21 Setup pyenv: 17:25:21 system 17:25:21 3.8.13 17:25:21 3.9.13 17:25:21 3.10.13 17:25:21 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 17:25:22 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-E7Mz from file:/tmp/.os_lf_venv 17:25:23 lf-activate-venv(): INFO: Installing: lftools 17:25:32 lf-activate-venv(): INFO: Adding /tmp/venv-E7Mz/bin to PATH 17:25:32 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins11898550982930002105.sh 17:25:32 ---> sudo-logs.sh 17:25:32 Archiving 'sudo' log.. 17:25:32 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins15489448618646167370.sh 17:25:32 ---> job-cost.sh 17:25:32 Setup pyenv: 17:25:32 system 17:25:32 3.8.13 17:25:32 3.9.13 17:25:32 3.10.13 17:25:32 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 17:25:32 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-E7Mz from file:/tmp/.os_lf_venv 17:25:33 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 17:25:38 lf-activate-venv(): INFO: Adding /tmp/venv-E7Mz/bin to PATH 17:25:38 INFO: No Stack... 
17:25:38 INFO: Retrieving Pricing Info for: v3-standard-4 17:25:38 INFO: Archiving Costs 17:25:38 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins4048838696669413763.sh 17:25:38 ---> logs-deploy.sh 17:25:38 Setup pyenv: 17:25:39 system 17:25:39 3.8.13 17:25:39 3.9.13 17:25:39 3.10.13 17:25:39 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 17:25:39 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-E7Mz from file:/tmp/.os_lf_venv 17:25:40 lf-activate-venv(): INFO: Installing: lftools 17:25:48 lf-activate-venv(): INFO: Adding /tmp/venv-E7Mz/bin to PATH 17:25:48 WARNING: Nexus logging server not set 17:25:48 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/2113/ 17:25:48 INFO: archiving logs to S3 17:25:50 ---> uname -a: 17:25:50 Linux prd-ubuntu2004-docker-4c-16g-1243 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 17:25:50 17:25:50 17:25:50 ---> lscpu: 17:25:50 Architecture: x86_64 17:25:50 CPU op-mode(s): 32-bit, 64-bit 17:25:50 Byte Order: Little Endian 17:25:50 Address sizes: 40 bits physical, 48 bits virtual 17:25:50 CPU(s): 4 17:25:50 On-line CPU(s) list: 0-3 17:25:50 Thread(s) per core: 1 17:25:50 Core(s) per socket: 1 17:25:50 Socket(s): 4 17:25:50 NUMA node(s): 1 17:25:50 Vendor ID: AuthenticAMD 17:25:50 CPU family: 23 17:25:50 Model: 49 17:25:50 Model name: AMD EPYC-Rome Processor 17:25:50 Stepping: 0 17:25:50 CPU MHz: 2800.000 17:25:50 BogoMIPS: 5600.00 17:25:50 Virtualization: AMD-V 17:25:50 Hypervisor vendor: KVM 17:25:50 Virtualization type: full 17:25:50 L1d cache: 128 KiB 17:25:50 L1i cache: 128 KiB 17:25:50 L2 cache: 2 MiB 17:25:50 L3 cache: 64 MiB 17:25:50 NUMA node0 CPU(s): 0-3 17:25:50 Vulnerability Gather data sampling: Not affected 17:25:50 Vulnerability Itlb multihit: Not affected 17:25:50 Vulnerability L1tf: Not affected 17:25:50 Vulnerability Mds: Not affected 17:25:50 Vulnerability Meltdown: Not affected 17:25:50 Vulnerability Mmio stale data: Not affected 17:25:50 Vulnerability Retbleed: Vulnerable 17:25:50 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp 17:25:50 Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 17:25:50 Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected 17:25:50 Vulnerability Srbds: Not affected 17:25:50 Vulnerability Tsx async abort: Not affected 17:25:50 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities 17:25:50 17:25:50 17:25:50 ---> nproc: 17:25:50 4 17:25:50 17:25:50 17:25:50 ---> df -h: 17:25:50 Filesystem Size Used Avail Use% Mounted on 17:25:50 udev 7.8G 0 7.8G 0% /dev 17:25:50 tmpfs 1.6G 1.1M 1.6G 1% /run 17:25:50 /dev/vda1 78G 17G 62G 21% / 17:25:50 tmpfs 7.9G 0 7.9G 0% /dev/shm 17:25:50 tmpfs 5.0M 0 5.0M 0% /run/lock 17:25:50 tmpfs 7.9G 
0 7.9G 0% /sys/fs/cgroup 17:25:50 /dev/loop0 62M 62M 0 100% /snap/core20/1405 17:25:50 /dev/loop1 68M 68M 0 100% /snap/lxd/22753 17:25:50 /dev/vda15 105M 6.1M 99M 6% /boot/efi 17:25:50 tmpfs 1.6G 0 1.6G 0% /run/user/1001 17:25:50 /dev/loop3 39M 39M 0 100% /snap/snapd/21759 17:25:50 /dev/loop4 64M 64M 0 100% /snap/core20/2434 17:25:50 /dev/loop5 92M 92M 0 100% /snap/lxd/29619 17:25:50 17:25:50 17:25:50 ---> free -m: 17:25:50 total used free shared buff/cache available 17:25:50 Mem: 15997 668 5678 0 9651 14989 17:25:50 Swap: 1023 0 1023 17:25:50 17:25:50 17:25:50 ---> ip addr: 17:25:50 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 17:25:50 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 17:25:50 inet 127.0.0.1/8 scope host lo 17:25:50 valid_lft forever preferred_lft forever 17:25:50 inet6 ::1/128 scope host 17:25:50 valid_lft forever preferred_lft forever 17:25:50 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 17:25:50 link/ether fa:16:3e:2f:c7:b0 brd ff:ff:ff:ff:ff:ff 17:25:50 inet 10.30.171.246/23 brd 10.30.171.255 scope global dynamic ens3 17:25:50 valid_lft 84046sec preferred_lft 84046sec 17:25:50 inet6 fe80::f816:3eff:fe2f:c7b0/64 scope link 17:25:50 valid_lft forever preferred_lft forever 17:25:50 3: docker0: mtu 1458 qdisc noqueue state DOWN group default 17:25:50 link/ether 02:42:5e:f7:58:b3 brd ff:ff:ff:ff:ff:ff 17:25:50 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 17:25:50 valid_lft forever preferred_lft forever 17:25:50 17:25:50 17:25:50 ---> sar -b -r -n DEV: 17:25:50 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-1243) 10/25/24 _x86_64_ (4 CPU) 17:25:50 17:25:50 16:46:39 LINUX RESTART (4 CPU) 17:25:50 17:25:50 16:47:01 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 17:25:50 16:48:01 326.66 143.89 182.77 0.00 10522.65 45267.66 0.00 17:25:50 16:49:01 84.67 12.20 72.47 0.00 1029.30 11858.84 0.00 17:25:50 16:50:01 245.34 40.96 204.38 0.00 2730.10 52309.05 0.00 17:25:50 16:51:01 69.24 1.53 67.71 0.00 83.72 30945.91 0.00 17:25:50 16:52:01 226.49 8.96 217.53 0.00 354.82 163630.12 0.00 17:25:50 16:53:01 208.78 7.65 201.13 0.00 4594.43 53119.15 0.00 17:25:50 16:54:01 55.57 2.02 53.56 0.00 76.10 1076.93 0.00 17:25:50 16:55:01 142.54 1.90 140.64 0.00 200.50 2472.12 0.00 17:25:50 16:56:01 90.87 0.23 90.64 0.00 26.79 1516.56 0.00 17:25:50 16:57:01 131.21 0.35 130.86 0.00 18.93 10575.17 0.00 17:25:50 16:58:01 3.65 0.00 3.65 0.00 0.00 137.71 0.00 17:25:50 16:59:01 3.45 0.00 3.45 0.00 0.00 64.26 0.00 17:25:50 17:00:01 2.02 0.00 2.02 0.00 0.00 37.86 0.00 17:25:50 17:01:01 13.85 0.02 13.83 0.00 0.13 1347.78 0.00 17:25:50 17:02:01 173.76 0.90 172.86 0.00 17.19 10298.83 0.00 17:25:50 17:03:01 2.43 0.02 2.42 0.00 0.13 53.06 0.00 17:25:50 17:04:01 1.73 0.00 1.73 0.00 0.00 37.73 0.00 17:25:50 17:05:01 75.42 0.00 75.42 0.00 0.00 1103.82 0.00 17:25:50 17:06:01 2.48 0.00 2.48 0.00 0.00 47.86 0.00 17:25:50 17:07:01 74.04 0.05 73.99 0.00 1.87 1096.48 0.00 17:25:50 17:08:01 30.78 0.00 30.78 0.00 0.00 1354.31 0.00 17:25:50 17:09:01 57.17 0.00 57.17 0.00 0.00 2322.28 0.00 17:25:50 17:10:01 35.19 0.02 35.17 0.00 0.13 851.98 0.00 17:25:50 17:11:01 104.28 0.00 104.28 0.00 0.00 9727.45 0.00 17:25:50 17:12:01 3.45 0.00 3.45 0.00 0.00 141.69 0.00 17:25:50 17:13:01 25.30 0.00 25.30 0.00 0.00 712.55 0.00 17:25:50 17:14:01 50.92 0.00 50.92 0.00 0.00 744.41 0.00 17:25:50 17:15:01 2.88 0.00 2.88 0.00 0.00 51.59 0.00 17:25:50 17:16:01 1.63 0.00 1.63 0.00 0.00 31.46 0.00 17:25:50 17:17:01 2.08 0.05 2.03 0.00 1.07 38.26 0.00 17:25:50 17:18:01 2.13 0.00 2.13 0.00 
0.00 38.13 0.00
17:25:50 17:19:01 1.77 0.00 1.77 0.00 0.00 44.26 0.00
17:25:50 17:20:01 23.93 0.00 23.93 0.00 0.00 384.81 0.00
17:25:50 17:21:01 50.09 0.00 50.09 0.00 0.00 726.55 0.00
17:25:50 17:22:02 3.70 0.00 3.70 0.00 0.00 192.10 0.00
17:25:50 17:23:01 1.61 0.00 1.61 0.00 0.00 51.38 0.00
17:25:50 17:24:01 1.82 0.00 1.82 0.00 0.00 44.39 0.00
17:25:50 17:25:01 18.38 0.10 18.28 0.00 0.80 2737.01 0.00
17:25:50 Average: 61.91 5.81 56.09 0.00 517.56 10721.02 0.00
17:25:50
17:25:50 16:47:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
17:25:50 16:48:01 13519176 15402972 586036 3.58 56228 2033200 1319024 7.57 809996 1794384 72900
17:25:50 16:49:01 12881760 15265560 695688 4.25 87596 2469692 1557704 8.94 1004144 2176692 302200
17:25:50 16:50:01 10453496 14290332 1668020 10.18 138256 3764048 2374096 13.62 2154264 3342620 208412
17:25:50 16:51:01 8558256 14094548 1862528 11.37 164416 5359004 3102772 17.80 2698084 4613380 975476
17:25:50 16:52:01 5986904 14376520 1572716 9.60 209172 8042008 2412408 13.84 3284552 6463624 170088
17:25:50 16:53:01 4157012 13621448 2322080 14.18 230460 9061496 3616276 20.75 4566912 6963376 4720
17:25:50 16:54:01 152792 8911184 7030080 42.92 220744 8378780 8046544 46.17 9133584 6398160 380
17:25:50 16:55:01 173088 8666436 7274112 44.40 225412 8111768 8546128 49.03 9380884 6132976 224
17:25:50 16:56:01 5218776 13673636 2269536 13.85 228432 8070432 3387444 19.43 4398756 6084300 864
17:25:50 16:57:01 1141612 9839440 6101796 37.25 237852 8296980 7402472 42.47 8285276 6258384 1812
17:25:50 16:58:01 480920 9179516 6761256 41.27 237904 8297668 7671372 44.01 8945420 6257056 68
17:25:50 16:59:01 456500 9155436 6785324 41.42 237940 8297972 7671372 44.01 8969224 6256948 44
17:25:50 17:00:01 432128 9131292 6809508 41.57 237956 8298176 7671372 44.01 8993172 6257116 208
17:25:50 17:01:01 3374880 12314968 3627644 22.14 244020 8520992 4505600 25.85 5857748 6450028 206132
17:25:50 17:02:01 2498864 11447968 4494184 27.43 248528 8525096 5351368 30.70 6781596 6397112 436
17:25:50 17:03:01 2431872 11381148 4560852 27.84 248544 8525244 5383860 30.89 6848424 6396408 300
17:25:50 17:04:01 2384164 11333600 4608388 28.13 248580 8525368 5456052 31.30 6895936 6395932 184
17:25:50 17:05:01 1451440 10402348 5539152 33.81 249816 8525548 6663352 38.23 7840724 6384692 340
17:25:50 17:06:01 1381672 10332788 5608700 34.24 249852 8525724 6695364 38.41 7909544 6384860 68
17:25:50 17:07:01 4903756 13856888 2086576 12.74 251472 8526080 2866924 16.45 4400368 6384088 544
17:25:50 17:08:01 5208620 14230648 1712940 10.46 253896 8586372 2870900 16.47 4031788 6442984 47648
17:25:50 17:09:01 5267816 14290908 1652432 10.09 254980 8586332 2802820 16.08 3968912 6442912 328
17:25:50 17:10:01 6019596 15129600 814316 4.97 258212 8662976 1661340 9.53 3161656 6499524 77932
17:25:50 17:11:01 2283940 11556160 4385540 26.77 262768 8813892 5213924 29.91 6776432 6599084 3088
17:25:50 17:12:01 2208960 11481232 4460484 27.23 262800 8813996 5315100 30.49 6852264 6598864 164
17:25:50 17:13:01 3756936 13029852 2912772 17.78 262980 8814256 4157076 23.85 5327492 6580664 876
17:25:50 17:14:01 2287328 11561440 4380232 26.74 263900 8814560 5135260 29.46 6798024 6577476 228
17:25:50 17:15:01 2270412 11544708 4396964 26.84 263928 8814716 5151284 29.55 6815076 6577484 128
17:25:50 17:16:01 2246464 11521036 4420708 26.99 263932 8814992 5167280 29.65 6838012 6577748 172
17:25:50 17:17:01 2239392 11514160 4427576 27.03 263956 8815164 5183268 29.74 6842264 6577928 584
17:25:50 17:18:01 2212468 11487576 4454044 27.19 263960 8815472 5183268 29.74 6869824 6578220 172
17:25:50 17:19:01 2197340 11472864 4468724 27.28 263968 8815872 5199256 29.83 6884428 6578600 276
17:25:50 17:20:01 4715700 13991524 1951760 11.91 264036 8816060 2813876 16.14 4386180 6567640 556
17:25:50 17:21:01 1554988 10831580 5109668 31.19 264672 8816168 5889328 33.79 7536988 6567676 684
17:25:50 17:22:02 1439208 10716440 5224704 31.89 264704 8816720 5955728 34.17 7651616 6568072 44
17:25:50 17:23:01 1383996 10662004 5279056 32.23 264728 8817460 5987748 34.35 7705492 6568804 272
17:25:50 17:24:01 1329328 10608032 5333088 32.56 264752 8818128 5987748 34.35 7759324 6569460 364
17:25:50 17:25:01 5582032 15058800 884568 5.40 269324 8998084 1656284 9.50 3333312 6742576 119992
17:25:50 Average: 3585358 12035963 3908783 23.86 236439 8063329 4816658 27.63 6018360 6105470 57866
17:25:50
17:25:50 16:47:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
17:25:50 16:48:01 ens3 388.55 266.49 1324.57 71.04 0.00 0.00 0.00 0.00
17:25:50 16:48:01 lo 0.67 0.67 0.07 0.07 0.00 0.00 0.00 0.00
17:25:50 16:48:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:49:01 ens3 71.20 62.39 826.96 10.42 0.00 0.00 0.00 0.00
17:25:50 16:49:01 lo 1.27 1.27 0.12 0.12 0.00 0.00 0.00 0.00
17:25:50 16:49:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:50:01 ens3 504.48 413.23 7373.41 41.08 0.00 0.00 0.00 0.00
17:25:50 16:50:01 lo 5.53 5.53 0.57 0.57 0.00 0.00 0.00 0.00
17:25:50 16:50:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:51:01 ens3 355.14 272.99 5648.25 27.47 0.00 0.00 0.00 0.00
17:25:50 16:51:01 lo 1.33 1.33 0.13 0.13 0.00 0.00 0.00 0.00
17:25:50 16:51:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:52:01 ens3 125.04 80.77 2434.68 8.29 0.00 0.00 0.00 0.00
17:25:50 16:52:01 lo 0.87 0.87 0.09 0.09 0.00 0.00 0.00 0.00
17:25:50 16:52:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:53:01 ens3 151.92 104.58 2102.19 7.12 0.00 0.00 0.00 0.00
17:25:50 16:53:01 lo 5.00 5.00 1.15 1.15 0.00 0.00 0.00 0.00
17:25:50 16:53:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:54:01 ens3 1.62 1.50 0.28 0.26 0.00 0.00 0.00 0.00
17:25:50 16:54:01 lo 29.49 29.49 30.53 30.53 0.00 0.00 0.00 0.00
17:25:50 16:54:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:55:01 ens3 2.32 1.95 4.66 0.26 0.00 0.00 0.00 0.00
17:25:50 16:55:01 lo 34.23 34.23 26.75 26.75 0.00 0.00 0.00 0.00
17:25:50 16:55:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:56:01 ens3 1.23 0.98 0.88 0.16 0.00 0.00 0.00 0.00
17:25:50 16:56:01 lo 31.84 31.84 14.28 14.28 0.00 0.00 0.00 0.00
17:25:50 16:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:57:01 ens3 2.05 2.15 0.80 0.71 0.00 0.00 0.00 0.00
17:25:50 16:57:01 lo 10.33 10.33 5.93 5.93 0.00 0.00 0.00 0.00
17:25:50 16:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:58:01 ens3 0.77 0.60 0.24 0.17 0.00 0.00 0.00 0.00
17:25:50 16:58:01 lo 29.56 29.56 25.90 25.90 0.00 0.00 0.00 0.00
17:25:50 16:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 16:59:01 ens3 0.32 0.22 0.04 0.03 0.00 0.00 0.00 0.00
17:25:50 16:59:01 lo 11.95 11.95 5.65 5.65 0.00 0.00 0.00 0.00
17:25:50 16:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:00:01 ens3 0.32 0.25 0.05 0.05 0.00 0.00 0.00 0.00
17:25:50 17:00:01 lo 16.90 16.90 6.26 6.26 0.00 0.00 0.00 0.00
17:25:50 17:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:01:01 ens3 2.42 2.13 0.90 0.79 0.00 0.00 0.00 0.00
17:25:50 17:01:01 lo 19.60 19.60 6.87 6.87 0.00 0.00 0.00 0.00
17:25:50 17:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:02:01 ens3 1.25 0.97 0.45 0.36 0.00 0.00 0.00 0.00
17:25:50 17:02:01 lo 11.16 11.16 16.81 16.81 0.00 0.00 0.00 0.00
17:25:50 17:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:03:01 ens3 1.28 1.10 0.36 0.27 0.00 0.00 0.00 0.00
17:25:50 17:03:01 lo 27.63 27.63 11.40 11.40 0.00 0.00 0.00 0.00
17:25:50 17:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:04:01 ens3 1.47 1.32 0.30 0.27 0.00 0.00 0.00 0.00
17:25:50 17:04:01 lo 26.38 26.38 8.34 8.34 0.00 0.00 0.00 0.00
17:25:50 17:04:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:05:01 ens3 0.92 0.82 0.15 0.14 0.00 0.00 0.00 0.00
17:25:50 17:05:01 lo 17.91 17.91 11.17 11.17 0.00 0.00 0.00 0.00
17:25:50 17:05:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:06:01 ens3 1.10 0.98 0.22 0.20 0.00 0.00 0.00 0.00
17:25:50 17:06:01 lo 30.54 30.54 10.33 10.33 0.00 0.00 0.00 0.00
17:25:50 17:06:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:07:01 ens3 6.85 7.38 1.40 3.77 0.00 0.00 0.00 0.00
17:25:50 17:07:01 lo 23.95 23.95 10.46 10.46 0.00 0.00 0.00 0.00
17:25:50 17:07:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:08:01 ens3 2.18 2.33 0.96 0.85 0.00 0.00 0.00 0.00
17:25:50 17:08:01 lo 17.95 17.95 5.50 5.50 0.00 0.00 0.00 0.00
17:25:50 17:08:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:09:01 ens3 1.58 1.47 0.35 0.29 0.00 0.00 0.00 0.00
17:25:50 17:09:01 lo 15.25 15.25 10.10 10.10 0.00 0.00 0.00 0.00
17:25:50 17:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:10:01 ens3 2.08 2.08 0.84 0.78 0.00 0.00 0.00 0.00
17:25:50 17:10:01 lo 11.08 11.08 4.23 4.23 0.00 0.00 0.00 0.00
17:25:50 17:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:11:01 ens3 0.73 0.60 0.09 0.08 0.00 0.00 0.00 0.00
17:25:50 17:11:01 lo 36.21 36.21 32.73 32.73 0.00 0.00 0.00 0.00
17:25:50 17:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:12:01 ens3 1.22 1.15 0.26 0.24 0.00 0.00 0.00 0.00
17:25:50 17:12:01 lo 17.34 17.34 9.35 9.35 0.00 0.00 0.00 0.00
17:25:50 17:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:13:01 ens3 0.80 0.70 0.17 0.16 0.00 0.00 0.00 0.00
17:25:50 17:13:01 lo 21.11 21.11 5.76 5.76 0.00 0.00 0.00 0.00
17:25:50 17:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:14:01 ens3 0.88 0.75 0.21 0.15 0.00 0.00 0.00 0.00
17:25:50 17:14:01 lo 34.71 34.71 17.91 17.91 0.00 0.00 0.00 0.00
17:25:50 17:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:15:01 ens3 0.95 0.98 0.18 0.16 0.00 0.00 0.00 0.00
17:25:50 17:15:01 lo 11.73 11.73 5.00 5.00 0.00 0.00 0.00 0.00
17:25:50 17:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:16:01 ens3 0.67 0.38 0.10 0.06 0.00 0.00 0.00 0.00
17:25:50 17:16:01 lo 24.81 24.81 9.61 9.61 0.00 0.00 0.00 0.00
17:25:50 17:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:17:01 ens3 1.25 1.50 0.44 0.39 0.00 0.00 0.00 0.00
17:25:50 17:17:01 lo 18.50 18.50 7.56 7.56 0.00 0.00 0.00 0.00
17:25:50 17:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:18:01 ens3 0.63 0.72 0.21 0.16 0.00 0.00 0.00 0.00
17:25:50 17:18:01 lo 27.68 27.68 8.68 8.68 0.00 0.00 0.00 0.00
17:25:50 17:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:19:01 ens3 0.73 0.87 0.13 0.13 0.00 0.00 0.00 0.00
17:25:50 17:19:01 lo 21.20 21.20 8.05 8.05 0.00 0.00 0.00 0.00
17:25:50 17:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:20:01 ens3 0.93 1.15 0.18 0.18 0.00 0.00 0.00 0.00
17:25:50 17:20:01 lo 37.82 37.82 12.02 12.02 0.00 0.00 0.00 0.00
17:25:50 17:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:21:01 ens3 0.87 0.93 0.13 0.13 0.00 0.00 0.00 0.00
17:25:50 17:21:01 lo 42.23 42.23 21.05 21.05 0.00 0.00 0.00 0.00
17:25:50 17:21:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:22:02 ens3 0.82 0.83 0.17 0.16 0.00 0.00 0.00 0.00
17:25:50 17:22:02 lo 41.64 41.64 15.19 15.19 0.00 0.00 0.00 0.00
17:25:50 17:22:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:23:01 ens3 0.68 0.53 0.23 0.13 0.00 0.00 0.00 0.00
17:25:50 17:23:01 lo 59.19 59.19 20.36 20.36 0.00 0.00 0.00 0.00
17:25:50 17:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:24:01 ens3 0.70 0.78 0.13 0.12 0.00 0.00 0.00 0.00
17:25:50 17:24:01 lo 75.95 75.95 24.67 24.67 0.00 0.00 0.00 0.00
17:25:50 17:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 17:25:01 ens3 152.22 111.55 1937.04 11.87 0.00 0.00 0.00 0.00
17:25:50 17:25:01 lo 0.47 0.47 0.04 0.04 0.00 0.00 0.00 0.00
17:25:50 17:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50 Average: ens3 47.13 35.60 570.37 4.97 0.00 0.00 0.00 0.00
17:25:50 Average: lo 22.38 22.38 10.80 10.80 0.00 0.00 0.00 0.00
17:25:50 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:25:50
17:25:50
17:25:50 ---> sar -P ALL:
17:25:50 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-1243) 10/25/24 _x86_64_ (4 CPU)
17:25:50
17:25:50 16:46:39 LINUX RESTART (4 CPU)
17:25:50
17:25:50 16:47:01 CPU %user %nice %system %iowait %steal %idle
17:25:50 16:48:01 all 16.66 17.72 15.01 4.40 0.15 46.06
17:25:50 16:48:01 0 19.01 17.51 14.32 3.07 0.13 45.96
17:25:50 16:48:01 1 12.84 18.72 15.73 2.96 0.15 49.60
17:25:50 16:48:01 2 19.82 16.94 14.01 3.49 0.15 45.58
17:25:50 16:48:01 3 14.99 17.72 15.97 8.05 0.15 43.12
17:25:50 16:49:01 all 21.83 0.03 2.56 3.65 0.07 71.85
17:25:50 16:49:01 0 29.43 0.00 2.86 2.37 0.08 65.25
17:25:50 16:49:01 1 21.52 0.03 2.40 1.53 0.08 74.43
17:25:50 16:49:01 2 27.23 0.05 2.91 1.84 0.05 67.91
17:25:50 16:49:01 3 9.12 0.05 2.08 8.87 0.07 79.81
17:25:50 16:50:01 all 81.08 0.00 3.90 4.55 0.14 10.33
17:25:50 16:50:01 0 77.61 0.00 3.96 4.43 0.13 13.86
17:25:50 16:50:01 1 80.23 0.00 4.48 6.60 0.13 8.56
17:25:50 16:50:01 2 85.64 0.00 3.62 2.30 0.13 8.30
17:25:50 16:50:01 3 80.84 0.00 3.55 4.87 0.15 10.59
17:25:50 16:51:01 all 57.07 0.00 2.94 1.35 0.12 38.52
17:25:50 16:51:01 0 55.44 0.00 2.15 2.52 0.10 39.80
17:25:50 16:51:01 1 55.98 0.00 2.31 0.94 0.12 40.65
17:25:50 16:51:01 2 46.90 0.00 3.19 1.24 0.13 48.54
17:25:50 16:51:01 3 69.91 0.00 4.11 0.70 0.12 25.15
17:25:50 16:52:01 all 79.22 0.00 4.49 12.16 0.14 3.98
17:25:50 16:52:01 0 80.29 0.00 4.03 6.75 0.15 8.77
17:25:50 16:52:01 1 79.57 0.00 4.53 13.52 0.15 2.22
17:25:50 16:52:01 2 79.84 0.00 4.08 14.20 0.13 1.75
17:25:50 16:52:01 3 77.18 0.00 5.32 14.19 0.14 3.18
17:25:50 16:53:01 all 89.68 0.00 3.88 3.31 0.16 2.98
17:25:50 16:53:01 0 89.67 0.00 3.64 3.39 0.12 3.17
17:25:50 16:53:01 1 90.18 0.00 3.79 2.70 0.17 3.16
17:25:50 16:53:01 2 89.73 0.00 4.04 3.70 0.15 2.39
17:25:50 16:53:01 3 89.15 0.00 4.05 3.43 0.19 3.18
17:25:50 16:54:01 all 49.48 0.00 1.86 0.31 0.11 48.24
17:25:50 16:54:01 0 46.24 0.00 2.03 0.08 0.12 51.53
17:25:50 16:54:01 1 50.31 0.00 2.23 0.70 0.12 46.64
17:25:50 16:54:01 2 51.66 0.00 1.32 0.30 0.10 46.62
17:25:50 16:54:01 3 49.73 0.00 1.86 0.15 0.10 48.16
17:25:50 16:55:01 all 51.62 0.00 1.81 0.59 0.10 45.86
17:25:50 16:55:01 0 51.72 0.00 1.88 0.05 0.10 46.26
17:25:50 16:55:01 1 51.89 0.00 1.96 1.71 0.10 44.34
17:25:50 16:55:01 2 52.29 0.00 1.74 0.50 0.10 45.37
17:25:50 16:55:01 3 50.61 0.00 1.68 0.12 0.12 47.48
17:25:50 16:56:01 all 44.24 0.00 1.69 0.07 0.09 53.90
17:25:50 16:56:01 0 42.88 0.00 1.52 0.03 0.10 55.47
17:25:50 16:56:01 1 46.33 0.00 1.84 0.12 0.08 51.63
17:25:50 16:56:01 2 42.13 0.00 1.62 0.05 0.08 56.12
17:25:50 16:56:01 3 45.64 0.00 1.78 0.08 0.10 52.39
17:25:50 16:57:01 all 86.63 0.00 3.08 0.43 0.12 9.75
17:25:50 16:57:01 0 84.05 0.00 3.03 1.16 0.10 11.65
17:25:50 16:57:01 1 89.31 0.00 2.69 0.28 0.12 7.59
17:25:50 16:57:01 2 86.63 0.00 3.12 0.20 0.13 9.91
17:25:50 16:57:01 3 86.53 0.00 3.45 0.07 0.12 9.84
17:25:50 16:58:01 all 18.37 0.00 0.66 0.08 0.08 80.82
17:25:50 16:58:01 0 16.71 0.00 0.64 0.15 0.08 82.42
17:25:50 16:58:01 1 18.74 0.00 0.64 0.15 0.07 80.40
17:25:50 16:58:01 2 18.18 0.00 0.62 0.00 0.10 81.10
17:25:50 16:58:01 3 19.85 0.00 0.74 0.00 0.07 79.35
17:25:50
17:25:50 16:58:01 CPU %user %nice %system %iowait %steal %idle
17:25:50 16:59:01 all 3.65 0.00 0.36 0.02 0.08 95.90
17:25:50 16:59:01 0 3.85 0.00 0.35 0.07 0.08 95.65
17:25:50 16:59:01 1 4.01 0.00 0.39 0.00 0.08 95.52
17:25:50 16:59:01 2 3.22 0.00 0.25 0.00 0.07 96.46
17:25:50 16:59:01 3 3.53 0.00 0.45 0.00 0.07 95.95
17:25:50 17:00:01 all 2.99 0.00 0.34 0.02 0.08 96.58
17:25:50 17:00:01 0 2.97 0.00 0.42 0.07 0.07 96.48
17:25:50 17:00:01 1 3.56 0.00 0.42 0.00 0.10 95.92
17:25:50 17:00:01 2 2.73 0.00 0.20 0.00 0.05 97.02
17:25:50 17:00:01 3 2.72 0.00 0.30 0.00 0.08 96.90
17:25:50 17:01:01 all 15.43 0.00 1.00 0.16 0.09 83.32
17:25:50 17:01:01 0 15.00 0.00 1.03 0.48 0.08 83.41
17:25:50 17:01:01 1 12.98 0.00 1.04 0.12 0.08 85.78
17:25:50 17:01:01 2 14.18 0.00 0.92 0.00 0.08 84.82
17:25:50 17:01:01 3 19.59 0.00 1.02 0.03 0.10 79.26
17:25:50 17:02:01 all 53.81 0.00 1.77 1.26 0.14 43.02
17:25:50 17:02:01 0 50.02 0.00 2.08 2.57 0.17 45.16
17:25:50 17:02:01 1 54.73 0.00 1.96 1.11 0.13 42.07
17:25:50 17:02:01 2 56.64 0.00 1.14 0.02 0.12 42.08
17:25:50 17:02:01 3 53.84 0.00 1.91 1.36 0.13 42.75
17:25:50 17:03:01 all 7.74 0.00 0.46 0.05 0.07 91.69
17:25:50 17:03:01 0 7.71 0.00 0.47 0.15 0.05 91.62
17:25:50 17:03:01 1 7.75 0.00 0.40 0.00 0.08 91.76
17:25:50 17:03:01 2 6.79 0.00 0.52 0.00 0.08 92.61
17:25:50 17:03:01 3 8.69 0.00 0.43 0.03 0.07 90.78
17:25:50 17:04:01 all 3.06 0.00 0.42 0.01 0.06 96.45
17:25:50 17:04:01 0 3.23 0.00 0.50 0.00 0.07 96.20
17:25:50 17:04:01 1 3.10 0.00 0.37 0.00 0.05 96.48
17:25:50 17:04:01 2 3.46 0.00 0.40 0.00 0.07 96.07
17:25:50 17:04:01 3 2.46 0.00 0.40 0.03 0.05 97.06
17:25:50 17:05:01 all 38.27 0.00 1.36 0.32 0.10 59.94
17:25:50 17:05:01 0 39.60 0.00 1.29 0.07 0.12 58.93
17:25:50 17:05:01 1 33.49 0.00 1.51 0.08 0.10 64.82
17:25:50 17:05:01 2 40.00 0.00 1.42 1.05 0.10 57.42
17:25:50 17:05:01 3 40.01 0.00 1.24 0.08 0.10 58.57
17:25:50 17:06:01 all 4.56 0.00 0.45 0.03 0.06 94.90
17:25:50 17:06:01 0 4.55 0.00 0.45 0.00 0.07 94.93
17:25:50 17:06:01 1 4.22 0.00 0.43 0.03 0.05 95.26
17:25:50 17:06:01 2 4.96 0.00 0.47 0.07 0.07 94.44
17:25:50 17:06:01 3 4.52 0.00 0.44 0.00 0.07 94.97
17:25:50 17:07:01 all 31.95 0.00 1.16 0.31 0.10 66.48
17:25:50 17:07:01 0 32.49 0.00 0.96 0.80 0.10 65.64
17:25:50 17:07:01 1 33.71 0.00 1.35 0.00 0.10 64.84
17:25:50 17:07:01 2 31.17 0.00 1.27 0.18 0.12 67.27
17:25:50 17:07:01 3 30.42 0.00 1.07 0.25 0.10 68.16
17:25:50 17:08:01 all 25.05 0.00 0.93 0.13 0.08 73.81
17:25:50 17:08:01 0 28.94 0.00 1.32 0.33 0.08 69.32
17:25:50 17:08:01 1 23.45 0.00 0.82 0.17 0.08 75.48
17:25:50 17:08:01 2 25.07 0.00 0.80 0.00 0.07 74.06
17:25:50 17:08:01 3 22.75 0.00 0.78 0.02 0.08 76.37
17:25:50 17:09:01 all 21.96 0.00 0.72 0.39 0.08 76.85
17:25:50 17:09:01 0 21.90 0.00 0.69 0.57 0.07 76.78
17:25:50 17:09:01 1 22.02 0.00 0.57 0.28 0.10 77.02
17:25:50 17:09:01 2 21.47 0.00 0.72 0.05 0.07 77.69
17:25:50 17:09:01 3 22.44 0.00 0.92 0.64 0.08 75.91
17:25:50
17:25:50 17:09:01 CPU %user %nice %system %iowait %steal %idle
17:25:50 17:10:01 all 46.12 0.00 1.87 0.51 0.11 51.39
17:25:50 17:10:01 0 45.05 0.00 2.08 1.33 0.12 51.43
17:25:50 17:10:01 1 43.97 0.00 1.67 0.00 0.10 54.26
17:25:50 17:10:01 2 52.96 0.00 2.06 0.07 0.10 44.82
17:25:50 17:10:01 3 42.49 0.00 1.66 0.66 0.12 55.07
17:25:50 17:11:01 all 61.46 0.00 2.09 0.86 0.14 35.45
17:25:50 17:11:01 0 61.08 0.00 1.51 0.03 0.13 37.24
17:25:50 17:11:01 1 57.68 0.00 2.10 0.13 0.15 39.93
17:25:50 17:11:01 2 65.00 0.00 2.28 0.47 0.13 32.11
17:25:50 17:11:01 3 62.07 0.00 2.46 2.82 0.13 32.51
17:25:50 17:12:01 all 3.78 0.00 0.37 0.04 0.06 95.75
17:25:50 17:12:01 0 3.53 0.00 0.33 0.00 0.07 96.07
17:25:50 17:12:01 1 3.85 0.00 0.45 0.05 0.07 95.58
17:25:50 17:12:01 2 3.62 0.00 0.42 0.00 0.07 95.89
17:25:50 17:12:01 3 4.11 0.00 0.27 0.10 0.05 95.48
17:25:50 17:13:01 all 34.53 0.00 1.33 0.03 0.09 64.02
17:25:50 17:13:01 0 34.67 0.00 1.41 0.03 0.08 63.80
17:25:50 17:13:01 1 34.64 0.00 1.04 0.05 0.10 64.17
17:25:50 17:13:01 2 35.00 0.00 1.36 0.03 0.08 63.53
17:25:50 17:13:01 3 33.81 0.00 1.52 0.00 0.10 64.57
17:25:50 17:14:01 all 27.59 0.00 0.84 0.23 0.09 71.25
17:25:50 17:14:01 0 25.86 0.00 0.89 0.10 0.08 73.07
17:25:50 17:14:01 1 27.20 0.00 1.06 0.18 0.10 71.46
17:25:50 17:14:01 2 28.19 0.00 0.75 0.59 0.10 70.37
17:25:50 17:14:01 3 29.11 0.00 0.67 0.03 0.08 70.10
17:25:50 17:15:01 all 2.77 0.00 0.17 0.04 0.06 96.96
17:25:50 17:15:01 0 2.73 0.00 0.13 0.07 0.07 97.01
17:25:50 17:15:01 1 2.97 0.00 0.18 0.03 0.05 96.76
17:25:50 17:15:01 2 2.70 0.00 0.18 0.07 0.07 96.98
17:25:50 17:15:01 3 2.68 0.00 0.18 0.00 0.07 97.07
17:25:50 17:16:01 all 3.27 0.00 0.21 0.01 0.05 96.46
17:25:50 17:16:01 0 3.24 0.00 0.18 0.00 0.03 96.55
17:25:50 17:16:01 1 3.19 0.00 0.29 0.00 0.08 96.44
17:25:50 17:16:01 2 3.34 0.00 0.20 0.02 0.05 96.39
17:25:50 17:16:01 3 3.29 0.00 0.17 0.02 0.05 96.47
17:25:50 17:17:01 all 1.86 0.00 0.20 0.01 0.05 97.88
17:25:50 17:17:01 0 2.18 0.00 0.22 0.00 0.05 97.56
17:25:50 17:17:01 1 1.80 0.00 0.18 0.00 0.05 97.96
17:25:50 17:17:01 2 1.61 0.00 0.18 0.05 0.07 98.09
17:25:50 17:17:01 3 1.85 0.00 0.20 0.00 0.05 97.90
17:25:50 17:18:01 all 2.90 0.00 0.21 0.01 0.06 96.83
17:25:50 17:18:01 0 2.84 0.00 0.18 0.02 0.03 96.93
17:25:50 17:18:01 1 2.93 0.00 0.17 0.00 0.05 96.85
17:25:50 17:18:01 2 2.86 0.00 0.25 0.02 0.07 96.80
17:25:50 17:18:01 3 2.96 0.00 0.22 0.00 0.08 96.73
17:25:50 17:19:01 all 2.14 0.00 0.18 0.00 0.06 97.62
17:25:50 17:19:01 0 1.98 0.00 0.08 0.00 0.07 97.87
17:25:50 17:19:01 1 2.46 0.00 0.20 0.00 0.08 97.25
17:25:50 17:19:01 2 2.06 0.00 0.23 0.02 0.07 97.62
17:25:50 17:19:01 3 2.06 0.00 0.18 0.00 0.03 97.72
17:25:50 17:20:01 all 24.51 0.00 0.95 0.04 0.08 74.43
17:25:50 17:20:01 0 22.10 0.00 0.80 0.03 0.08 76.98
17:25:50 17:20:01 1 26.04 0.00 1.31 0.00 0.07 72.59
17:25:50 17:20:01 2 23.79 0.00 0.79 0.08 0.08 75.25
17:25:50 17:20:01 3 26.10 0.00 0.89 0.03 0.07 72.92
17:25:50
17:25:50 17:20:01 CPU %user %nice %system %iowait %steal %idle
17:25:50 17:21:01 all 42.18 0.00 1.13 0.26 0.10 56.33
17:25:50 17:21:01 0 42.81 0.00 0.80 0.00 0.10 56.29
17:25:50 17:21:01 1 43.61 0.00 1.37 0.18 0.10 54.73
17:25:50 17:21:01 2 42.02 0.00 1.01 0.67 0.10 56.20
17:25:50 17:21:01 3 40.31 0.00 1.32 0.18 0.10 58.08
17:25:50 17:22:02 all 7.52 0.00 0.28 0.05 0.08 92.06
17:25:50 17:22:02 0 7.90 0.00 0.37 0.07 0.07 91.60
17:25:50 17:22:02 1 6.89 0.00 0.23 0.00 0.10 92.78
17:25:50 17:22:02 2 7.32 0.00 0.23 0.13 0.08 92.23
17:25:50 17:22:02 3 7.97 0.00 0.30 0.00 0.08 91.65
17:25:50 17:23:01 all 6.48 0.00 0.32 0.01 0.06 93.13
17:25:50 17:23:01 0 6.26 0.00 0.31 0.02 0.05 93.37
17:25:50 17:23:01 1 7.10 0.00 0.32 0.00 0.08 92.49
17:25:50 17:23:01 2 5.83 0.00 0.27 0.02 0.05 93.82
17:25:50 17:23:01 3 6.72 0.00 0.37 0.00 0.05 92.86
17:25:50 17:24:01 all 5.57 0.00 0.35 0.00 0.06 94.02
17:25:50 17:24:01 0 5.11 0.00 0.33 0.00 0.05 94.51
17:25:50 17:24:01 1 6.26 0.00 0.42 0.00 0.07 93.25
17:25:50 17:24:01 2 5.31 0.00 0.35 0.02 0.07 94.25
17:25:50 17:24:01 3 5.60 0.00 0.28 0.00 0.05 94.06
17:25:50 17:25:01 all 12.59 0.00 0.83 0.53 0.06 85.99
17:25:50 17:25:01 0 9.37 0.00 0.80 1.29 0.05 88.50
17:25:50 17:25:01 1 20.32 0.00 0.80 0.07 0.07 78.75
17:25:50 17:25:01 2 10.54 0.00 1.05 0.75 0.05 87.61
17:25:50 17:25:01 3 10.15 0.00 0.67 0.00 0.07 89.12
17:25:50 Average: all 28.66 0.46 1.63 0.95 0.09 68.20
17:25:50 Average: 0 28.41 0.46 1.58 0.84 0.09 68.62
17:25:50 Average: 1 28.69 0.49 1.67 0.88 0.09 68.17
17:25:50 Average: 2 28.99 0.44 1.58 0.85 0.09 68.06
17:25:50 Average: 3 28.55 0.47 1.71 1.23 0.09 67.96
17:25:50
17:25:50
17:25:50