14:17:12 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/113937 14:17:12 Running as SYSTEM 14:17:12 [EnvInject] - Loading node environment variables. 14:17:12 Building remotely on prd-ubuntu2004-docker-4c-16g-41493 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master 14:17:12 [ssh-agent] Looking for ssh-agent implementation... 14:17:12 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 14:17:12 $ ssh-agent 14:17:12 SSH_AUTH_SOCK=/tmp/ssh-MpPCKiDbiOO2/agent.28782 14:17:12 SSH_AGENT_PID=28784 14:17:12 [ssh-agent] Started. 14:17:12 Running ssh-add (command line suppressed) 14:17:12 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_16870035155474170410.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_16870035155474170410.key) 14:17:12 [ssh-agent] Using credentials jenkins (jenkins-ssh) 14:17:12 The recommended git tool is: NONE 14:17:14 using credential jenkins-ssh 14:17:14 Wiping out workspace first. 14:17:14 Cloning the remote Git repository 14:17:14 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce 14:17:14 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10 14:17:14 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 14:17:14 > git --version # timeout=10 14:17:14 > git --version # 'git version 2.25.1' 14:17:14 using GIT_SSH to set credentials jenkins-ssh 14:17:14 Verifying host key using known hosts file 14:17:14 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 14:17:14 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 14:17:17 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 14:17:17 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 14:17:18 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 14:17:18 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 14:17:18 using GIT_SSH to set credentials jenkins-ssh 14:17:18 Verifying host key using known hosts file 14:17:18 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
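The steps above initialize an empty workspace, configure the origin remote, and fetch all branch heads before checking out the Gerrit change ref shown just below (refs/changes/37/113937/4). A minimal Python sketch of that fetch-and-checkout sequence, assuming git is available on PATH (the function name and workdir are illustrative, not part of the job):

    import subprocess

    REPO = "git://devvexx.opendaylight.org/mirror/transportpce"
    CHANGE_REF = "refs/changes/37/113937/4"  # Gerrit change 113937, patchset 4 (from this log)

    def checkout_gerrit_change(workdir: str = "transportpce") -> None:
        # Mirror the Jenkins git-plugin steps: init, fetch the change ref, detached checkout.
        subprocess.run(["git", "init", workdir], check=True)
        subprocess.run(["git", "-C", workdir, "fetch", REPO, CHANGE_REF], check=True)
        subprocess.run(["git", "-C", workdir, "checkout", "-f", "FETCH_HEAD"], check=True)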
14:17:18 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/37/113937/4 # timeout=10 14:17:18 > git rev-parse 1953804cbf14a64ccdc177ab229dae508833bbfb^{commit} # timeout=10 14:17:18 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script 14:17:18 Checking out Revision 1953804cbf14a64ccdc177ab229dae508833bbfb (refs/changes/37/113937/4) 14:17:18 > git config core.sparsecheckout # timeout=10 14:17:18 > git checkout -f 1953804cbf14a64ccdc177ab229dae508833bbfb # timeout=10 14:17:18 Commit message: "Add Func Test for Topology extension" 14:17:18 > git rev-parse FETCH_HEAD^{commit} # timeout=10 14:17:18 > git rev-list --no-walk c1a854b119ad7777d6ad79f7d94717154b415645 # timeout=10 14:17:19 > git remote # timeout=10 14:17:19 > git submodule init # timeout=10 14:17:19 > git submodule sync # timeout=10 14:17:19 > git config --get remote.origin.url # timeout=10 14:17:19 > git submodule init # timeout=10 14:17:19 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 14:17:19 ERROR: No submodules found. 14:17:22 provisioning config files... 14:17:22 copy managed file [npmrc] to file:/home/jenkins/.npmrc 14:17:22 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 14:17:22 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins11592166895240314996.sh 14:17:22 ---> python-tools-install.sh 14:17:22 Setup pyenv: 14:17:22 * system (set by /opt/pyenv/version) 14:17:22 * 3.8.13 (set by /opt/pyenv/version) 14:17:22 * 3.9.13 (set by /opt/pyenv/version) 14:17:22 * 3.10.13 (set by /opt/pyenv/version) 14:17:22 * 3.11.7 (set by /opt/pyenv/version) 14:17:27 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-Voun 14:17:27 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 14:17:29 lf-activate-venv(): INFO: Installing: lftools 14:17:59 lf-activate-venv(): INFO: Adding /tmp/venv-Voun/bin to PATH 14:17:59 Generating Requirements File 14:18:19 Python 3.11.7 14:18:19 pip 24.2 from /tmp/venv-Voun/lib/python3.11/site-packages/pip (python 3.11) 14:18:20 appdirs==1.4.4 14:18:20 argcomplete==3.5.1 14:18:20 aspy.yaml==1.3.0 14:18:20 attrs==24.2.0 14:18:20 autopage==0.5.2 14:18:20 beautifulsoup4==4.12.3 14:18:20 boto3==1.35.38 14:18:20 botocore==1.35.38 14:18:20 bs4==0.0.2 14:18:20 cachetools==5.5.0 14:18:20 certifi==2024.8.30 14:18:20 cffi==1.17.1 14:18:20 cfgv==3.4.0 14:18:20 chardet==5.2.0 14:18:20 charset-normalizer==3.4.0 14:18:20 click==8.1.7 14:18:20 cliff==4.7.0 14:18:20 cmd2==2.4.3 14:18:20 cryptography==3.3.2 14:18:20 debtcollector==3.0.0 14:18:20 decorator==5.1.1 14:18:20 defusedxml==0.7.1 14:18:20 Deprecated==1.2.14 14:18:20 distlib==0.3.9 14:18:20 dnspython==2.7.0 14:18:20 docker==4.2.2 14:18:20 dogpile.cache==1.3.3 14:18:20 durationpy==0.9 14:18:20 email_validator==2.2.0 14:18:20 filelock==3.16.1 14:18:20 future==1.0.0 14:18:20 gitdb==4.0.11 14:18:20 GitPython==3.1.43 14:18:20 google-auth==2.35.0 14:18:20 httplib2==0.22.0 14:18:20 identify==2.6.1 14:18:20 idna==3.10 14:18:20 importlib-resources==1.5.0 14:18:20 iso8601==2.1.0 14:18:20 Jinja2==3.1.4 14:18:20 jmespath==1.0.1 14:18:20 jsonpatch==1.33 14:18:20 jsonpointer==3.0.0 14:18:20 jsonschema==4.23.0 14:18:20 jsonschema-specifications==2024.10.1 14:18:20 keystoneauth1==5.8.0 14:18:20 kubernetes==31.0.0 14:18:20 lftools==0.37.10 14:18:20 lxml==5.3.0 14:18:20 MarkupSafe==3.0.1 14:18:20 msgpack==1.1.0 14:18:20 multi_key_dict==2.0.3 
14:18:20 munch==4.0.0 14:18:20 netaddr==1.3.0 14:18:20 netifaces==0.11.0 14:18:20 niet==1.4.2 14:18:20 nodeenv==1.9.1 14:18:20 oauth2client==4.1.3 14:18:20 oauthlib==3.2.2 14:18:20 openstacksdk==4.0.0 14:18:20 os-client-config==2.1.0 14:18:20 os-service-types==1.7.0 14:18:20 osc-lib==3.1.0 14:18:20 oslo.config==9.6.0 14:18:20 oslo.context==5.6.0 14:18:20 oslo.i18n==6.4.0 14:18:20 oslo.log==6.1.2 14:18:20 oslo.serialization==5.5.0 14:18:20 oslo.utils==7.3.0 14:18:20 packaging==24.1 14:18:20 pbr==6.1.0 14:18:20 platformdirs==4.3.6 14:18:20 prettytable==3.11.0 14:18:20 pyasn1==0.6.1 14:18:20 pyasn1_modules==0.4.1 14:18:20 pycparser==2.22 14:18:20 pygerrit2==2.0.15 14:18:20 PyGithub==2.4.0 14:18:20 PyJWT==2.9.0 14:18:20 PyNaCl==1.5.0 14:18:20 pyparsing==2.4.7 14:18:20 pyperclip==1.9.0 14:18:20 pyrsistent==0.20.0 14:18:20 python-cinderclient==9.6.0 14:18:20 python-dateutil==2.9.0.post0 14:18:20 python-heatclient==4.0.0 14:18:20 python-jenkins==1.8.2 14:18:20 python-keystoneclient==5.5.0 14:18:20 python-magnumclient==4.7.0 14:18:20 python-openstackclient==7.1.3 14:18:20 python-swiftclient==4.6.0 14:18:20 PyYAML==6.0.2 14:18:20 referencing==0.35.1 14:18:20 requests==2.32.3 14:18:20 requests-oauthlib==2.0.0 14:18:20 requestsexceptions==1.4.0 14:18:20 rfc3986==2.0.0 14:18:20 rpds-py==0.20.0 14:18:20 rsa==4.9 14:18:20 ruamel.yaml==0.18.6 14:18:20 ruamel.yaml.clib==0.2.8 14:18:20 s3transfer==0.10.3 14:18:20 simplejson==3.19.3 14:18:20 six==1.16.0 14:18:20 smmap==5.0.1 14:18:20 soupsieve==2.6 14:18:20 stevedore==5.3.0 14:18:20 tabulate==0.9.0 14:18:20 toml==0.10.2 14:18:20 tomlkit==0.13.2 14:18:20 tqdm==4.66.5 14:18:20 typing_extensions==4.12.2 14:18:20 tzdata==2024.2 14:18:20 urllib3==1.26.20 14:18:20 virtualenv==20.26.6 14:18:20 wcwidth==0.2.13 14:18:20 websocket-client==1.8.0 14:18:20 wrapt==1.16.0 14:18:20 xdg==6.0.0 14:18:20 xmltodict==0.14.1 14:18:20 yq==3.4.3 14:18:20 [EnvInject] - Injecting environment variables from a build step. 14:18:20 [EnvInject] - Injecting as environment variables the properties content 14:18:20 PYTHON=python3 14:18:20 14:18:20 [EnvInject] - Variables injected successfully. 
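The python-tools-install.sh step above ends by generating a requirements snapshot (the pip freeze dump listed before the [EnvInject] lines). A rough Python equivalent of that snapshot step, assuming the tooling venv path created earlier in this log (the output file name is illustrative):

    import subprocess
    from pathlib import Path

    VENV_PYTHON = "/tmp/venv-Voun/bin/python3"  # tooling venv created by lf-activate-venv above

    def freeze_requirements(outfile: str = "requirements-snapshot.txt") -> None:
        # Capture the exact package set installed in the tooling venv, as 'pip freeze' does.
        result = subprocess.run([VENV_PYTHON, "-m", "pip", "freeze"],
                                capture_output=True, text=True, check=True)
        Path(outfile).write_text(result.stdout)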
14:18:20 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins6172317790579126380.sh 14:18:20 ---> tox-install.sh 14:18:20 + source /home/jenkins/lf-env.sh 14:18:20 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 14:18:20 ++ mktemp -d /tmp/venv-XXXX 14:18:20 + lf_venv=/tmp/venv-IXLU 14:18:20 + local venv_file=/tmp/.os_lf_venv 14:18:20 + local python=python3 14:18:20 + local options 14:18:20 + local set_path=true 14:18:20 + local install_args= 14:18:20 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 14:18:20 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 14:18:20 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 14:18:20 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 14:18:20 + true 14:18:20 + case $1 in 14:18:20 + venv_file=/tmp/.toxenv 14:18:20 + shift 2 14:18:20 + true 14:18:20 + case $1 in 14:18:20 + shift 14:18:20 + break 14:18:20 + case $python in 14:18:20 + local pkg_list= 14:18:20 + [[ -d /opt/pyenv ]] 14:18:20 + echo 'Setup pyenv:' 14:18:20 Setup pyenv: 14:18:20 + export PYENV_ROOT=/opt/pyenv 14:18:20 + PYENV_ROOT=/opt/pyenv 14:18:20 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:20 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:20 + pyenv versions 14:18:20 system 14:18:20 3.8.13 14:18:20 3.9.13 14:18:20 3.10.13 14:18:20 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 14:18:20 + command -v pyenv 14:18:20 ++ pyenv init - --no-rehash 14:18:20 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 14:18:20 for i in ${!paths[@]}; do 14:18:20 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 14:18:20 fi; done; 14:18:20 echo "${paths[*]}"'\'')" 14:18:20 export PATH="/opt/pyenv/shims:${PATH}" 14:18:20 export PYENV_SHELL=bash 14:18:20 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 14:18:20 pyenv() { 14:18:20 local command 14:18:20 command="${1:-}" 14:18:20 if [ "$#" -gt 0 ]; then 14:18:20 shift 14:18:20 fi 14:18:20 14:18:20 case "$command" in 14:18:20 rehash|shell) 14:18:20 eval "$(pyenv "sh-$command" "$@")" 14:18:20 ;; 14:18:20 *) 14:18:20 command pyenv "$command" "$@" 14:18:20 ;; 14:18:20 esac 14:18:20 }' 14:18:20 +++ bash --norc -ec 'IFS=:; paths=($PATH); 14:18:20 for i in ${!paths[@]}; do 14:18:20 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 14:18:20 fi; done; 14:18:20 echo "${paths[*]}"' 14:18:20 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:20 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:20 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:20 ++ export PYENV_SHELL=bash 14:18:20 ++ PYENV_SHELL=bash 14:18:20 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 14:18:20 +++ complete -F _pyenv pyenv 14:18:20 ++ lf-pyver 
python3 14:18:20 ++ local py_version_xy=python3 14:18:20 ++ local py_version_xyz= 14:18:20 ++ pyenv versions 14:18:20 ++ local command 14:18:20 ++ command=versions 14:18:20 ++ '[' 1 -gt 0 ']' 14:18:20 ++ shift 14:18:20 ++ case "$command" in 14:18:20 ++ command pyenv versions 14:18:20 ++ pyenv versions 14:18:20 ++ awk '{ print $1 }' 14:18:20 ++ sed 's/^[ *]* //' 14:18:20 ++ grep -E '^[0-9.]*[0-9]$' 14:18:20 ++ [[ ! -s /tmp/.pyenv_versions ]] 14:18:20 +++ grep '^3' /tmp/.pyenv_versions 14:18:20 +++ tail -n 1 14:18:20 +++ sort -V 14:18:20 ++ py_version_xyz=3.11.7 14:18:20 ++ [[ -z 3.11.7 ]] 14:18:20 ++ echo 3.11.7 14:18:20 ++ return 0 14:18:20 + pyenv local 3.11.7 14:18:20 + local command 14:18:20 + command=local 14:18:20 + '[' 2 -gt 0 ']' 14:18:20 + shift 14:18:20 + case "$command" in 14:18:20 + command pyenv local 3.11.7 14:18:20 + pyenv local 3.11.7 14:18:20 + for arg in "$@" 14:18:20 + case $arg in 14:18:20 + pkg_list+='tox ' 14:18:20 + for arg in "$@" 14:18:20 + case $arg in 14:18:20 + pkg_list+='virtualenv ' 14:18:20 + for arg in "$@" 14:18:20 + case $arg in 14:18:20 + pkg_list+='urllib3~=1.26.15 ' 14:18:20 + [[ -f /tmp/.toxenv ]] 14:18:20 + [[ ! -f /tmp/.toxenv ]] 14:18:20 + [[ -n '' ]] 14:18:20 + python3 -m venv /tmp/venv-IXLU 14:18:24 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-IXLU' 14:18:24 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-IXLU 14:18:24 + echo /tmp/venv-IXLU 14:18:24 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' 14:18:24 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv 14:18:24 + /tmp/venv-IXLU/bin/python3 -m pip install --upgrade --quiet pip virtualenv 14:18:27 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 14:18:27 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 14:18:27 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 14:18:27 + /tmp/venv-IXLU/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 14:18:29 + type python3 14:18:29 + true 14:18:29 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-IXLU/bin to PATH' 14:18:29 lf-activate-venv(): INFO: Adding /tmp/venv-IXLU/bin to PATH 14:18:29 + PATH=/tmp/venv-IXLU/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:29 + return 0 14:18:29 + python3 --version 14:18:29 Python 3.11.7 14:18:29 + python3 -m pip --version 14:18:29 pip 24.2 from /tmp/venv-IXLU/lib/python3.11/site-packages/pip (python 3.11) 14:18:29 + python3 -m pip freeze 14:18:29 cachetools==5.5.0 14:18:29 chardet==5.2.0 14:18:29 colorama==0.4.6 14:18:29 distlib==0.3.9 14:18:29 filelock==3.16.1 14:18:29 packaging==24.1 14:18:29 platformdirs==4.3.6 14:18:29 pluggy==1.5.0 14:18:29 pyproject-api==1.8.0 14:18:29 tox==4.21.2 14:18:29 urllib3==1.26.20 14:18:29 virtualenv==20.26.6 14:18:29 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins12719531192774715062.sh 14:18:29 [EnvInject] - Injecting environment variables from a build step. 14:18:29 [EnvInject] - Injecting as environment variables the properties content 14:18:29 PARALLEL=True 14:18:29 14:18:29 [EnvInject] - Variables injected successfully. 
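The lf-activate-venv trace above creates a fresh venv only because the marker file /tmp/.toxenv does not exist yet; the second invocation later in this log reuses the path saved in that file. A condensed Python sketch of the create-or-reuse logic, with paths and package arguments taken from the trace (the helper name is illustrative):

    import subprocess
    import sys
    import tempfile
    from pathlib import Path

    def activate_venv(venv_file: str, *packages: str) -> Path:
        marker = Path(venv_file)
        if marker.is_file():
            venv = Path(marker.read_text().strip())        # reuse the saved venv path
        else:
            venv = Path(tempfile.mkdtemp(prefix="venv-", dir="/tmp"))
            subprocess.run([sys.executable, "-m", "venv", str(venv)], check=True)
            marker.write_text(str(venv))                   # save the path for later reuse
        pip = [str(venv / "bin" / "python3"), "-m", "pip", "install", "--upgrade", "--quiet"]
        subprocess.run(pip + ["pip", "virtualenv"], check=True)
        subprocess.run(pip + ["--upgrade-strategy", "eager", *packages], check=True)
        return venv / "bin"   # caller prepends this directory to PATH, as the script does

    # e.g. activate_venv("/tmp/.toxenv", "tox", "virtualenv", "urllib3~=1.26.15")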
14:18:29 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins3102960630576501977.sh 14:18:29 ---> tox-run.sh 14:18:29 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:29 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 14:18:29 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 14:18:29 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 14:18:29 + cd /w/workspace/transportpce-tox-verify-transportpce-master/. 14:18:29 + source /home/jenkins/lf-env.sh 14:18:29 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 14:18:29 ++ mktemp -d /tmp/venv-XXXX 14:18:29 + lf_venv=/tmp/venv-Gv9d 14:18:29 + local venv_file=/tmp/.os_lf_venv 14:18:29 + local python=python3 14:18:29 + local options 14:18:29 + local set_path=true 14:18:29 + local install_args= 14:18:29 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 14:18:29 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 14:18:29 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 14:18:29 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 14:18:29 + true 14:18:29 + case $1 in 14:18:29 + venv_file=/tmp/.toxenv 14:18:29 + shift 2 14:18:29 + true 14:18:29 + case $1 in 14:18:29 + shift 14:18:29 + break 14:18:29 + case $python in 14:18:29 + local pkg_list= 14:18:29 + [[ -d /opt/pyenv ]] 14:18:29 + echo 'Setup pyenv:' 14:18:29 Setup pyenv: 14:18:29 + export PYENV_ROOT=/opt/pyenv 14:18:29 + PYENV_ROOT=/opt/pyenv 14:18:29 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:29 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:29 + pyenv versions 14:18:29 system 14:18:29 3.8.13 14:18:29 3.9.13 14:18:29 3.10.13 14:18:29 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 14:18:29 + command -v pyenv 14:18:29 ++ pyenv init - --no-rehash 14:18:30 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 14:18:30 for i in ${!paths[@]}; do 14:18:30 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 14:18:30 fi; done; 14:18:30 echo "${paths[*]}"'\'')" 14:18:30 export PATH="/opt/pyenv/shims:${PATH}" 14:18:30 export PYENV_SHELL=bash 14:18:30 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 14:18:30 pyenv() { 14:18:30 local command 14:18:30 command="${1:-}" 14:18:30 if [ "$#" -gt 0 ]; then 14:18:30 shift 14:18:30 fi 14:18:30 14:18:30 case "$command" in 14:18:30 rehash|shell) 14:18:30 eval "$(pyenv "sh-$command" "$@")" 14:18:30 ;; 14:18:30 *) 14:18:30 command pyenv "$command" "$@" 14:18:30 ;; 14:18:30 esac 14:18:30 }' 14:18:30 +++ bash --norc -ec 'IFS=:; paths=($PATH); 14:18:30 for i in ${!paths[@]}; do 14:18:30 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 14:18:30 fi; done; 14:18:30 echo "${paths[*]}"' 14:18:30 ++ 
PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:30 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:30 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:30 ++ export PYENV_SHELL=bash 14:18:30 ++ PYENV_SHELL=bash 14:18:30 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 14:18:30 +++ complete -F _pyenv pyenv 14:18:30 ++ lf-pyver python3 14:18:30 ++ local py_version_xy=python3 14:18:30 ++ local py_version_xyz= 14:18:30 ++ pyenv versions 14:18:30 ++ local command 14:18:30 ++ command=versions 14:18:30 ++ '[' 1 -gt 0 ']' 14:18:30 ++ shift 14:18:30 ++ case "$command" in 14:18:30 ++ command pyenv versions 14:18:30 ++ pyenv versions 14:18:30 ++ sed 's/^[ *]* //' 14:18:30 ++ grep -E '^[0-9.]*[0-9]$' 14:18:30 ++ awk '{ print $1 }' 14:18:30 ++ [[ ! -s /tmp/.pyenv_versions ]] 14:18:30 +++ grep '^3' /tmp/.pyenv_versions 14:18:30 +++ sort -V 14:18:30 +++ tail -n 1 14:18:30 ++ py_version_xyz=3.11.7 14:18:30 ++ [[ -z 3.11.7 ]] 14:18:30 ++ echo 3.11.7 14:18:30 ++ return 0 14:18:30 + pyenv local 3.11.7 14:18:30 + local command 14:18:30 + command=local 14:18:30 + '[' 2 -gt 0 ']' 14:18:30 + shift 14:18:30 + case "$command" in 14:18:30 + command pyenv local 3.11.7 14:18:30 + pyenv local 3.11.7 14:18:30 + for arg in "$@" 14:18:30 + case $arg in 14:18:30 + pkg_list+='tox ' 14:18:30 + for arg in "$@" 14:18:30 + case $arg in 14:18:30 + pkg_list+='virtualenv ' 14:18:30 + for arg in "$@" 14:18:30 + case $arg in 14:18:30 + pkg_list+='urllib3~=1.26.15 ' 14:18:30 + [[ -f /tmp/.toxenv ]] 14:18:30 ++ cat /tmp/.toxenv 14:18:30 + lf_venv=/tmp/venv-IXLU 14:18:30 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-IXLU from' file:/tmp/.toxenv 14:18:30 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-IXLU from file:/tmp/.toxenv 14:18:30 + /tmp/venv-IXLU/bin/python3 -m pip install --upgrade --quiet pip virtualenv 14:18:30 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 14:18:30 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 14:18:30 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 14:18:30 + /tmp/venv-IXLU/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 14:18:32 + type python3 14:18:32 + true 14:18:32 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-IXLU/bin to PATH' 14:18:32 lf-activate-venv(): INFO: Adding /tmp/venv-IXLU/bin to PATH 14:18:32 + PATH=/tmp/venv-IXLU/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:32 + return 0 14:18:32 + [[ -d /opt/pyenv ]] 14:18:32 + echo '---> Setting up pyenv' 14:18:32 ---> Setting up pyenv 14:18:32 + export PYENV_ROOT=/opt/pyenv 14:18:32 + PYENV_ROOT=/opt/pyenv 14:18:32 + export PATH=/opt/pyenv/bin:/tmp/venv-IXLU/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:32 + 
PATH=/opt/pyenv/bin:/tmp/venv-IXLU/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 14:18:32 ++ pwd 14:18:32 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master 14:18:32 + export PYTHONPATH 14:18:32 + export TOX_TESTENV_PASSENV=PYTHONPATH 14:18:32 + TOX_TESTENV_PASSENV=PYTHONPATH 14:18:32 + tox --version 14:18:32 4.21.2 from /tmp/venv-IXLU/lib/python3.11/site-packages/tox/__init__.py 14:18:32 + PARALLEL=True 14:18:32 + TOX_OPTIONS_LIST= 14:18:32 + [[ -n '' ]] 14:18:32 + case ${PARALLEL,,} in 14:18:32 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' 14:18:32 + tox --parallel auto --parallel-live 14:18:32 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log 14:18:33 docs: install_deps> python -I -m pip install -r docs/requirements.txt 14:18:33 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt 14:18:33 checkbashisms: freeze> python -m pip freeze --all 14:18:33 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:18:34 checkbashisms: pip==24.2,setuptools==75.1.0,wheel==0.44.0 14:18:34 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 14:18:34 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' 14:18:34 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + 14:18:35 script ./reflectwarn.sh does not appear to have a #! interpreter line; 14:18:35 you may get strange results 14:18:35 checkbashisms: OK ✔ in 2.88 seconds 14:18:35 pre-commit: install_deps> python -I -m pip install pre-commit 14:18:38 pre-commit: freeze> python -m pip freeze --all 14:18:38 pre-commit: cfgv==3.4.0,distlib==0.3.9,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.2,platformdirs==4.3.6,pre_commit==4.0.1,PyYAML==6.0.2,setuptools==75.1.0,virtualenv==20.26.6,wheel==0.44.0 14:18:38 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 14:18:38 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' 14:18:38 /usr/bin/cpan 14:18:38 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure 14:18:38 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 
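The checkbashisms env above lints every *.sh file outside hidden directories via 'find ... -exec checkbashisms -f'. A minimal Python rendering of that file selection and invocation, assuming checkbashisms is installed (the function name is illustrative):

    import subprocess
    from pathlib import Path

    def check_shell_scripts(root: str = ".") -> int:
        # Collect *.sh files, skipping anything under a hidden directory (find -not -path '*/.*').
        scripts = [p for p in Path(root).rglob("*.sh")
                   if not any(part.startswith(".") for part in p.parts)]
        if not scripts:
            return 0
        # Same call as the tox command above: checkbashisms -f <scripts...>
        return subprocess.run(["checkbashisms", "-f", *map(str, scripts)]).returncode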
14:18:38 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 14:18:38 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. 14:18:39 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo. 14:18:39 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. 14:18:39 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. 14:18:39 buildcontroller: freeze> python -m pip freeze --all 14:18:40 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. 14:18:40 buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:18:40 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh 14:18:40 + update-java-alternatives -l 14:18:40 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 14:18:40 java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64 14:18:40 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 14:18:40 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 14:18:40 java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64 14:18:40 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 14:18:40 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. 14:18:40 + + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; 14:18:40 java -version 14:18:40 [INFO] Initializing environment for https://github.com/perltidy/perltidy. 14:18:41 + JAVA_VER=21 14:18:41 + echo 21 14:18:41 21 14:18:41 + javac -version 14:18:41 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; 14:18:41 21 14:18:41 ok, java is 21 or newer 14:18:41 + JAVAC_VER=21 14:18:41 + echo 21 14:18:41 + [ 21 -ge 21 ] 14:18:41 + [ 21 -ge 21 ] 14:18:41 + echo ok, java is 21 or newer 14:18:41 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp 14:18:41 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 14:18:41 [INFO] Once installed this environment will be reused. 14:18:41 [INFO] This may take a few minutes... 
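build_controller.sh above derives the major version from 'java -version' / 'javac -version' with sed and refuses to continue unless it is 21 or newer. A rough Python rendering of that gate (constant and function names are illustrative):

    import re
    import subprocess

    MIN_JAVA = 21

    def java_major_version() -> int:
        # 'java -version' prints e.g. 'openjdk version "21.0.4" ...' on stderr.
        banner = subprocess.run(["java", "-version"], capture_output=True, text=True).stderr
        match = re.search(r'version "(\d+)\.', banner)
        if not match:
            raise RuntimeError("could not parse java version")
        return int(match.group(1))

    if java_major_version() < MIN_JAVA:
        raise SystemExit(f"Java {MIN_JAVA} or newer is required")
    print("ok, java is 21 or newer")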
14:18:41 2024-10-11 14:18:41 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1] 14:18:41 + sudo mkdir -p /opt 14:18:41 + sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt 14:18:41 + sudo ln -s /opt/apache-maven-3.9.8 /opt/maven 14:18:41 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn 14:18:41 + mvn --version 14:18:42 Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256) 14:18:42 Maven home: /opt/maven 14:18:42 Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 14:18:42 Default locale: en, platform encoding: UTF-8 14:18:42 OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix" 14:18:42 NOTE: Picked up JDK_JAVA_OPTIONS: 14:18:42 --add-opens=java.base/java.io=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.lang=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.net=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.nio=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.nio.file=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.util=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.util.jar=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.util.stream=ALL-UNNAMED 14:18:42 --add-opens=java.base/java.util.zip=ALL-UNNAMED 14:18:42 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 14:18:42 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 14:18:42 -Xlog:disable 14:18:45 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. 14:18:45 [INFO] Once installed this environment will be reused. 14:18:45 [INFO] This may take a few minutes... 14:18:50 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. 14:18:50 [INFO] Once installed this environment will be reused. 14:18:50 [INFO] This may take a few minutes... 14:18:54 [INFO] Installing environment for https://github.com/perltidy/perltidy. 14:18:54 [INFO] Once installed this environment will be reused. 14:18:54 [INFO] This may take a few minutes... 
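The Maven bootstrap above simply downloads the 3.9.8 binary tarball, unpacks it under /opt and symlinks /opt/maven. A minimal Python sketch of the same steps, using the URL from the log (writing under /opt assumes the same elevated privileges the job obtains via sudo):

    import tarfile
    import urllib.request
    from pathlib import Path

    MAVEN_URL = ("https://dlcdn.apache.org/maven/maven-3/3.9.8/"
                 "binaries/apache-maven-3.9.8-bin.tar.gz")

    def install_maven(prefix: str = "/opt") -> None:
        tgz = "/tmp/apache-maven-3.9.8-bin.tar.gz"
        urllib.request.urlretrieve(MAVEN_URL, tgz)               # wget -nv ... -P /tmp
        Path(prefix).mkdir(parents=True, exist_ok=True)          # mkdir -p /opt
        with tarfile.open(tgz) as tar:
            tar.extractall(prefix)                               # tar xf ... -C /opt
        link = Path(prefix, "maven")
        if not link.exists():
            link.symlink_to(Path(prefix, "apache-maven-3.9.8"))  # ln -s ... /opt/maven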
14:19:01 docs: freeze> python -m pip freeze --all 14:19:01 docs-linkcheck: freeze> python -m pip freeze --all 14:19:01 docs: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.1,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.0,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 14:19:01 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html 14:19:02 docs-linkcheck: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.1,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.0,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 14:19:02 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck 14:19:04 docs: OK ✔ in 31.62 seconds 14:19:04 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' 14:19:06 trim trailing whitespace.................................................Failed 14:19:06 - hook id: trailing-whitespace 14:19:06 - exit code: 1 14:19:06 - files were modified by this hook 14:19:06 14:19:06 Fixing tests/transportpce_tests/network/test01_topo_extension.py 14:19:06 14:19:07 Tabs 
remover.............................................................Passed
14:19:07 autopep8.................................................................docs-linkcheck: OK ✔ in 33.21 seconds
14:19:08 pylint: freeze> python -m pip freeze --all
14:19:09 pylint: astroid==3.3.5,dill==0.3.9,isort==5.13.2,mccabe==0.7.0,pip==24.2,platformdirs==4.3.6,pylint==3.3.1,setuptools==75.1.0,tomlkit==0.13.2,wheel==0.44.0
14:19:09 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' +
14:19:11 Failed
14:19:11 - hook id: autopep8
14:19:11 - files were modified by this hook
14:19:11 perltidy.................................................................Passed
14:19:12 pre-commit hook(s) made changes.
14:19:12 If you are seeing this message in CI, reproduce locally with: `pre-commit run --all-files`.
14:19:12 To run `pre-commit` as part of git workflow, use `pre-commit install`.
14:19:12 All changes made by hooks:
14:19:12 diff --git a/tests/transportpce_tests/network/test01_topo_extension.py b/tests/transportpce_tests/network/test01_topo_extension.py
14:19:12 index 2294eacf..7ed5bf2d 100644
14:19:12 --- a/tests/transportpce_tests/network/test01_topo_extension.py
14:19:12 +++ b/tests/transportpce_tests/network/test01_topo_extension.py
14:19:12 @@ -124,7 +124,7 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 'ROADM-TC1-ROADM-TC1-SRG1-PP1-TXRXtoSPDR-SC1-XPDR1-XPDR1-NETWORK1',
14:19:12 'transportpce-or-network-augmentation:link-class': 'alien-to-tapi'
14:19:12 }
14:19:12 -
14:19:12 +
14:19:12 # uuid_A = uuid.UUID(bytes('SPDR-SA1-XPDR1+DSR+eOTSI+XPDR1-NETWORK1', 'utf-8'))
14:19:12 # uuid_C = uuid.UUID(bytes('SPDR-SC1-XPDR1+DSR+eOTSI+XPDR1-NETWORK1', 'utf-8'))
14:19:12
14:19:12 @@ -198,7 +198,7 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 response = test_utils.transportpce_api_rpc_request(
14:19:12 'transportpce-networkutils', 'init-xpdr-rdm-links',
14:19:12 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '1', 'network-num': '1',
14:19:12 - 'rdm-node': 'ROADM-TA1', 'termination-point-num' : 'SRG1-PP1-TXRX',
14:19:12 + 'rdm-node': 'ROADM-TA1', 'termination-point-num': 'SRG1-PP1-TXRX',
14:19:12 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 'rdm-nep-uuid': '3c3c3679-ccd7-3343-9f36-bdb7bea11a84',
14:19:12 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
14:19:12 @@ -210,7 +210,7 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 response = test_utils.transportpce_api_rpc_request(
14:19:12 'transportpce-networkutils', 'init-rdm-xpdr-links',
14:19:12 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '1', 'network-num': '1',
14:19:12 - 'rdm-node': 'ROADM-TA1', 'termination-point-num' : 'SRG1-PP1-TXRX',
14:19:12 + 'rdm-node': 'ROADM-TA1', 'termination-point-num': 'SRG1-PP1-TXRX',
14:19:12 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 'rdm-nep-uuid': '3c3c3679-ccd7-3343-9f36-bdb7bea11a84',
14:19:12 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
14:19:12 @@ -222,7 +222,7 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 response = test_utils.transportpce_api_rpc_request(
14:19:12 'transportpce-networkutils', 'init-xpdr-rdm-links',
14:19:12 {'links-input': {'xpdr-node': 'SPDR-SC1', 'xpdr-num': '1', 'network-num': '1',
14:19:12 - 'rdm-node': 'ROADM-TC1', 'termination-point-num' : 'SRG1-PP1-TXRX',
14:19:12 + 'rdm-node': 'ROADM-TC1', 'termination-point-num': 'SRG1-PP1-TXRX',
14:19:12 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 'rdm-nep-uuid': 'e5a9d17d-40cd-3733-b736-cc787a876195',
14:19:12 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
14:19:12 @@ -234,7 +234,7 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 response = test_utils.transportpce_api_rpc_request(
14:19:12 'transportpce-networkutils', 'init-rdm-xpdr-links',
14:19:12 {'links-input': {'xpdr-node': 'SPDR-SC1', 'xpdr-num': '1', 'network-num': '1',
14:19:12 - 'rdm-node': 'ROADM-TC1', 'termination-point-num' : 'SRG1-PP1-TXRX',
14:19:12 + 'rdm-node': 'ROADM-TC1', 'termination-point-num': 'SRG1-PP1-TXRX',
14:19:12 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 'rdm-nep-uuid': 'e5a9d17d-40cd-3733-b736-cc787a876195',
14:19:12 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
14:19:12 @@ -262,9 +262,9 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 'transportpce-networkutils', 'init-inter-domain-links',
14:19:12 {'a-end': {'rdm-node': 'ROADM-A1', 'deg-num': '1', 'termination-point': 'DEG1-TTP-TXRX'},
14:19:12 'z-end': {'rdm-node': 'ROADM-TA1', 'deg-num': '2', 'termination-point': 'DEG2-TTP-TXRX',
14:19:12 - 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 - 'rdm-nep-uuid': 'd42ed13c-d81f-3136-a7d8-b283681031d4',
14:19:12 - 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
14:19:12 + 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 + 'rdm-nep-uuid': 'd42ed13c-d81f-3136-a7d8-b283681031d4',
14:19:12 + 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
14:19:12 self.assertEqual(response['status_code'], requests.codes.ok)
14:19:12 print(response['output']['result'])
14:19:12 time.sleep(2)
14:19:12 @@ -274,9 +274,9 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 'transportpce-networkutils', 'init-inter-domain-links',
14:19:12 {'a-end': {'rdm-node': 'ROADM-C1', 'deg-num': '2', 'termination-point': 'DEG2-TTP-TXRX'},
14:19:12 'z-end': {'rdm-node': 'ROADM-TC1', 'deg-num': '1', 'termination-point': 'DEG1-TTP-TXRX',
14:19:12 - 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 - 'rdm-nep-uuid': 'fb3a00c1-342f-3cdc-b83d-2c257de298c1',
14:19:12 - 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
14:19:12 + 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
14:19:12 + 'rdm-nep-uuid': 'fb3a00c1-342f-3cdc-b83d-2c257de298c1',
14:19:12 + 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
14:19:12 self.assertEqual(response['status_code'], requests.codes.ok)
14:19:12 print(response['output']['result'])
14:19:12 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
14:19:12 @@ -304,15 +304,15 @@ class TransportPCEtesting(unittest.TestCase):
14:19:12 linkType = link['org-openroadm-common-network:link-type']
14:19:12 if 'transportpce-or-network-augmentation:link-class' in link.keys():
14:19:12 linkClass = link['transportpce-or-network-augmentation:link-class']
14:19:12 - if (linkType == 'ROADM-TO-ROADM' and linkClass == 'inter-domain') :
14:19:12 + if (linkType == 'ROADM-TO-ROADM' and linkClass == 'inter-domain'):
14:19:12 find = linkId in check_list
14:19:12 self.assertEqual(find, True)
14:19:12 interDomainLinkNber += 1
14:19:12 - if (linkType == 'XPONDER-OUTPUT' and linkClass == 'alien-to-tapi') :
14:19:12 + if (linkType == 'XPONDER-OUTPUT' and linkClass == 'alien-to-tapi'):
14:19:12 find = linkId in check_list
14:19:12 self.assertEqual(find, True)
14:19:12 alienToTapiLinkNber += 1
14:19:12 - if (linkType == 'XPONDER-INPUT' and linkClass == 'alien-to-tapi') :
14:19:12 + if (linkType == 'XPONDER-INPUT' and linkClass == 'alien-to-tapi'):
14:19:12 find = linkId in check_list
14:19:12 self.assertEqual(find, True)
14:19:12 alienToTapiLinkNber += 1
14:19:12 pre-commit: exit 1 (33.92 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure pid=29891
14:19:29 ************* Module network.test01_topo_extension
14:19:29 transportpce_tests/network/test01_topo_extension.py:15:0: W0611: Unused import os (unused-import)
14:19:29 transportpce_tests/network/test01_topo_extension.py:21:0: W0611: Unused null imported from keyring.backends (unused-import)
14:19:29
14:19:29 ------------------------------------
14:19:29 Your code has been rated at 10.00/10
14:19:29
14:19:31 pre-commit: FAIL ✖ in 37.12 seconds
14:19:31 pylint: exit 1 (22.44 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + pid=30800
14:20:15 pylint: FAIL ✖ in 27.44 seconds
14:20:15 buildcontroller: OK ✔ in 1 minute 42.38 seconds
14:20:15 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
14:20:15 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
14:20:15 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
14:20:15 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
14:20:22 build_karaf_tests121: freeze> python -m pip freeze --all
14:20:22 sims: freeze> python -m pip freeze --all
14:20:22 build_karaf_tests221: freeze> python -m pip freeze --all
14:20:22 build_karaf_tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
14:20:22 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
14:20:22 NOTE: Picked up JDK_JAVA_OPTIONS:
14:20:22 --add-opens=java.base/java.io=ALL-UNNAMED 14:20:22
--add-opens=java.base/java.lang=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.net=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.nio=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.nio.file=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util.jar=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util.stream=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util.zip=ALL-UNNAMED 14:20:22 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 14:20:22 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 14:20:22 -Xlog:disable 14:20:22 sims: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:20:22 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh 14:20:22 Using lighynode version 20.1.0.2 14:20:22 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory 14:20:22 build_karaf_tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:20:22 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 14:20:22 NOTE: Picked up JDK_JAVA_OPTIONS: 14:20:22 --add-opens=java.base/java.io=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.lang=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.net=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.nio=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.nio.file=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util.jar=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util.stream=ALL-UNNAMED 14:20:22 --add-opens=java.base/java.util.zip=ALL-UNNAMED 14:20:22 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 14:20:22 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 14:20:22 -Xlog:disable 14:20:27 sims: OK ✔ in 11.55 seconds 14:20:27 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:20:38 build_karaf_tests71: freeze> python -m pip freeze --all 14:20:39 build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:20:39 build_karaf_tests71: commands[0] 
/w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 14:20:39 NOTE: Picked up JDK_JAVA_OPTIONS: 14:20:39 --add-opens=java.base/java.io=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.lang=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.net=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.nio=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.nio.file=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.util=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.util.jar=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.util.stream=ALL-UNNAMED 14:20:39 --add-opens=java.base/java.util.zip=ALL-UNNAMED 14:20:39 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 14:20:39 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 14:20:39 -Xlog:disable 14:21:07 build_karaf_tests121: OK ✔ in 52.01 seconds 14:21:07 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:21:08 build_karaf_tests221: OK ✔ in 53.47 seconds 14:21:08 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:21:15 build_karaf_tests71: OK ✔ in 48.47 seconds 14:21:15 build_karaf_tests_hybrid: freeze> python -m pip freeze --all 14:21:15 tests_tapi: freeze> python -m pip freeze --all 14:21:16 build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:21:16 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 14:21:16 NOTE: Picked up JDK_JAVA_OPTIONS: 14:21:16 --add-opens=java.base/java.io=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.lang=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.net=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.nio=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.nio.file=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.util=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.util.jar=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.util.stream=ALL-UNNAMED 14:21:16 --add-opens=java.base/java.util.zip=ALL-UNNAMED 14:21:16 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 14:21:16 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 14:21:16 -Xlog:disable 14:21:16 tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:21:16 tests_tapi: commands[0] 
/w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi 14:21:16 using environment variables from ./karaf221.env 14:21:16 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py 14:21:27 testsPCE: freeze> python -m pip freeze --all 14:21:27 testsPCE: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.54.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==3.0.1,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==10.4.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.1.4,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0 14:21:27 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce 14:21:27 pytest -q transportpce_tests/pce/test01_pce.py 14:22:20 ...............FF..F..F...F..FF.....FF..FF... [100%] 14:23:23 20 passed in 115.59s (0:01:55) 14:23:23 pytest -q transportpce_tests/pce/test02_pce_400G.py 14:23:25 ..FF....F....... [100%] 14:24:05 9 passed in 40.99s 14:24:05 pytest -q transportpce_tests/pce/test03_gnpy.py 14:24:08 .F..F......... [100%] 14:24:42 8 passed in 37.44s 14:24:42 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py 14:24:52 .F.FF..F.... [100%] 14:24:57 =================================== FAILURES =================================== 14:24:57 _____________ TransportTapitesting.test_01_get_tapi_topology_T100G _____________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_01_get_tapi_topology_T100G(self): 14:24:57 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID 14:24:57 response = test_utils.transportpce_api_rpc_request( 14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo) 14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:190: AssertionError 14:24:57 ---------------------------- Captured stdout setup ----------------------------- 14:24:57 starting OpenDaylight... 14:24:57 starting KARAF TransportPCE build... 14:24:57 Searching for pattern 'Transportpce controller started' in karaf.log... Pattern found! OpenDaylight started ! 14:24:57 installing tapi feature... 14:24:57 installing feature odl-transportpce-tapi 14:24:57 client: JAVA_HOME not set; results may vary 14:24:57 odl-transportpce-tapi │ 10.0.0.SNAPSHOT │ x │ Started │ odl-transportpce-tapi │ OpenDaylight :: transportpce :: tapi 14:24:57 Restarting OpenDaylight... 14:24:57 starting KARAF TransportPCE build... 14:24:57 Searching for pattern 'Transportpce controller started' in karaf.log... Pattern found! starting simulator xpdra in OpenROADM device version 2.2.1... 14:24:57 Searching for pattern 'Data tree change listeners registered' in xpdra-221.log... Pattern found! simulator for xpdra started 14:24:57 starting simulator roadma in OpenROADM device version 2.2.1... 14:24:57 Searching for pattern 'Data tree change listeners registered' in roadma-221.log... Pattern found! simulator for roadma started 14:24:57 starting simulator roadmb in OpenROADM device version 2.2.1... 
14:24:57 Searching for pattern 'Data tree change listeners registered' in roadmb-221.log... Pattern found! simulator for roadmb started
14:24:57 starting simulator roadmc in OpenROADM device version 2.2.1...
14:24:57 Searching for pattern 'Data tree change listeners registered' in roadmc-221.log... Pattern found! simulator for roadmc started
14:24:57 starting simulator xpdrc in OpenROADM device version 2.2.1...
14:24:57 Searching for pattern 'Data tree change listeners registered' in xpdrc-221.log... Pattern found! simulator for xpdrc started
14:24:57 starting simulator spdra in OpenROADM device version 2.2.1...
14:24:57 Searching for pattern 'Data tree change listeners registered' in spdra-221.log... Pattern found! simulator for spdra started
14:24:57 starting simulator spdrc in OpenROADM device version 2.2.1...
14:24:57 Searching for pattern 'Data tree change listeners registered' in spdrc-221.log... Pattern found! simulator for spdrc started
14:24:57 ---------------------------- Captured stderr setup -----------------------------
14:24:57 SLF4J(W): No SLF4J providers were found.
14:24:57 SLF4J(W): Defaulting to no-operation (NOP) logger implementation
14:24:57 SLF4J(W): See https://www.slf4j.org/codes.html#noProviders for further details.
14:24:57 SLF4J(W): Class path contains SLF4J bindings targeting slf4j-api versions 1.7.x or earlier.
14:24:57 SLF4J(W): Ignoring binding found at [jar:file:/w/workspace/transportpce-tox-verify-transportpce-master/karaf221/target/assembly/system/org/apache/karaf/org.apache.karaf.client/4.4.6/org.apache.karaf.client-4.4.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
14:24:57 SLF4J(W): See https://www.slf4j.org/codes.html#ignoredBindings for an explanation.
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_01_get_tapi_topology_T100G
14:24:57 ______________ TransportTapitesting.test_02_get_tapi_topology_T0 _______________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_02_get_tapi_topology_T0(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:206: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_02_get_tapi_topology_T0
14:24:57 ________________ TransportTapitesting.test_04_check_tapi_topos _________________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_04_check_tapi_topos(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:218: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_04_check_tapi_topos
14:24:57 ________________ TransportTapitesting.test_07_check_tapi_topos _________________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_07_check_tapi_topos(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:241: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_07_check_tapi_topos
14:24:57 ________________ TransportTapitesting.test_10_check_tapi_topos _________________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_10_check_tapi_topos(self):
14:24:57 > self.test_01_get_tapi_topology_T100G()
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:254:
14:24:57 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:190: in test_01_get_tapi_topology_T100G
14:24:57 self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_10_check_tapi_topos
14:24:57 ____________ TransportTapitesting.test_13_check_tapi_topology_T100G ____________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_13_check_tapi_topology_T100G(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:299: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_13_check_tapi_topology_T100G
14:24:57 _____________ TransportTapitesting.test_14_check_tapi_topology_T0 ______________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_14_check_tapi_topology_T0(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:310: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_14_check_tapi_topology_T0
14:24:57 ____________ TransportTapitesting.test_18_check_tapi_topology_T100G ____________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_18_check_tapi_topology_T100G(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:350: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_18_check_tapi_topology_T100G
14:24:57 _____________ TransportTapitesting.test_19_check_tapi_topology_T0 ______________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_19_check_tapi_topology_T0(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:364: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_19_check_tapi_topology_T0
14:24:57 ____________ TransportTapitesting.test_22_check_tapi_topology_T100G ____________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_22_check_tapi_topology_T100G(self):
14:24:57 > self.test_18_check_tapi_topology_T100G()
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:387:
14:24:57 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:350: in test_18_check_tapi_topology_T100G
14:24:57 self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_22_check_tapi_topology_T100G
14:24:57 _____________ TransportTapitesting.test_23_check_tapi_topology_T0 ______________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_23_check_tapi_topology_T0(self):
14:24:57 > self.test_19_check_tapi_topology_T0()
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:390:
14:24:57 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:364: in test_19_check_tapi_topology_T0
14:24:57 self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_23_check_tapi_topology_T0
14:24:57 ____________ TransportTapitesting.test_28_check_tapi_topology_T100G ____________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_28_check_tapi_topology_T100G(self):
14:24:57 > self.test_18_check_tapi_topology_T100G()
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:433:
14:24:57 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:350: in test_18_check_tapi_topology_T100G
14:24:57 self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_28_check_tapi_topology_T100G
14:24:57 _____________ TransportTapitesting.test_29_check_tapi_topology_T0 ______________
14:24:57
14:24:57 self =
14:24:57
14:24:57 def test_29_check_tapi_topology_T0(self):
14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID
14:24:57 response = test_utils.transportpce_api_rpc_request(
14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo)
14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok)
14:24:57 E AssertionError: 500 != 200
14:24:57
14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:439: AssertionError
14:24:57 ----------------------------- Captured stdout call -----------------------------
14:24:57 execution of test_29_check_tapi_topology_T0
14:24:57 _____________
TransportTapitesting.test_32_check_tapi_topology_T0 ______________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_32_check_tapi_topology_T0(self): 14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 14:24:57 response = test_utils.transportpce_api_rpc_request( 14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo) 14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:494: AssertionError 14:24:57 ----------------------------- Captured stdout call ----------------------------- 14:24:57 execution of test_32_check_tapi_topology_T0 14:24:57 _____________ TransportTapitesting.test_34_check_tapi_topology_T0 ______________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_34_check_tapi_topology_T0(self): 14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 14:24:57 response = test_utils.transportpce_api_rpc_request( 14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo) 14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:533: AssertionError 14:24:57 ----------------------------- Captured stdout call ----------------------------- 14:24:57 execution of test_34_check_tapi_topology_T0 14:24:57 _____________ TransportTapitesting.test_37_check_tapi_topology_T0 ______________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_37_check_tapi_topology_T0(self): 14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 14:24:57 response = test_utils.transportpce_api_rpc_request( 14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo) 14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:578: AssertionError 14:24:57 ----------------------------- Captured stdout call ----------------------------- 14:24:57 execution of test_37_check_tapi_topology_T0 14:24:57 _____________ TransportTapitesting.test_40_check_tapi_topology_T0 ______________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_40_check_tapi_topology_T0(self): 14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 14:24:57 response = test_utils.transportpce_api_rpc_request( 14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo) 14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:616: AssertionError 14:24:57 ----------------------------- Captured stdout call ----------------------------- 14:24:57 execution of test_40_check_tapi_topology_T0 14:24:57 _____________ TransportTapitesting.test_42_check_tapi_topology_T0 ______________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_42_check_tapi_topology_T0(self): 14:24:57 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 14:24:57 response = test_utils.transportpce_api_rpc_request( 14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo) 14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:638: AssertionError 14:24:57 ----------------------------- Captured stdout call ----------------------------- 14:24:57 
execution of test_42_check_tapi_topology_T0 14:24:57 _____________ TransportTapitesting.test_43_get_tapi_topology_T100G _____________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_43_get_tapi_topology_T100G(self): 14:24:57 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID 14:24:57 response = test_utils.transportpce_api_rpc_request( 14:24:57 'tapi-topology', 'get-topology-details', self.tapi_topo) 14:24:57 > self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:652: AssertionError 14:24:57 ----------------------------- Captured stdout call ----------------------------- 14:24:57 execution of test_43_get_tapi_topology_T100G 14:24:57 ________________ TransportTapitesting.test_46_check_tapi_topos _________________ 14:24:57 14:24:57 self = 14:24:57 14:24:57 def test_46_check_tapi_topos(self): 14:24:57 > self.test_01_get_tapi_topology_T100G() 14:24:57 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:667: 14:24:57 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:24:57 transportpce_tests/tapi/test01_abstracted_topology.py:190: in test_01_get_tapi_topology_T100G 14:24:57 self.assertEqual(response['status_code'], requests.codes.ok) 14:24:57 E AssertionError: 500 != 200 14:24:57 ----------------------------- Captured stdout call ----------------------------- 14:24:57 execution of test_46_check_tapi_topos 14:24:57 =========================== short test summary info ============================ 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_01_get_tapi_topology_T100G 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_02_get_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_04_check_tapi_topos 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_07_check_tapi_topos 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_10_check_tapi_topos 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_13_check_tapi_topology_T100G 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_14_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_18_check_tapi_topology_T100G 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_19_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_22_check_tapi_topology_T100G 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_23_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_28_check_tapi_topology_T100G 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_29_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_32_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_34_check_tapi_topology_T0 14:24:57 FAILED 
transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_37_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_40_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_42_check_tapi_topology_T0 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_43_get_tapi_topology_T100G 14:24:57 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_46_check_tapi_topos 14:24:57 20 failed, 30 passed in 220.88s (0:03:40) 14:24:57 build_karaf_tests_hybrid: OK ✔ in 50.31 seconds 14:24:57 tests_tapi: exit 1 (221.13 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi pid=31358 14:24:57 tests_tapi: FAIL ✖ in 3 minutes 48.89 seconds 14:24:57 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:25:04 tests71: freeze> python -m pip freeze --all 14:25:04 tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:25:04 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1 14:25:04 using environment variables from ./karaf71.env 14:25:04 pytest -q transportpce_tests/7.1/test01_portmapping.py 14:25:14 ... [100%] 14:25:20 3 passed in 37.20s 14:25:20 testsPCE: OK ✔ in 5 minutes 5.19 seconds 14:25:20 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:25:26 tests121: freeze> python -m pip freeze --all 14:25:26 tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:25:26 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 14:25:26 using environment variables from ./karaf121.env 14:25:26 pytest -q transportpce_tests/1.2.1/test01_portmapping.py 14:25:39 ............ [100%] 14:25:52 12 passed in 47.90s 14:25:52 pytest -q transportpce_tests/7.1/test02_otn_renderer.py 14:26:03 FFFFFF.............................................................. [100%] 14:28:30 62 passed in 157.15s (0:02:37) 14:28:30 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 14:29:02 ......FF.FF.FF.FF........................................ [100%] 14:30:45 48 passed in 134.96s (0:02:14) 14:30:45 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 14:31:10 ...................... 
[100%] 14:31:57 22 passed in 72.00s (0:01:12) 14:31:57 tests71: OK ✔ in 7 minutes 0.26 seconds 14:31:57 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:32:04 tests221: freeze> python -m pip freeze --all 14:32:04 tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:32:04 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 14:32:04 using environment variables from ./karaf221.env 14:32:04 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 14:32:21 F.............................F.. [100%] 14:35:29 =================================== FAILURES =================================== 14:35:29 _________ TransportPCEPortMappingTesting.test_01_rdm_device_connection _________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 > sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 14:35:29 raise err 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 address = ('localhost', 8182), timeout = 10, source_address = None 14:35:29 socket_options = [(6, 1, 1)] 14:35:29 14:35:29 def create_connection( 14:35:29 address: tuple[str, int], 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 source_address: tuple[str, int] | None = None, 14:35:29 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 14:35:29 ) -> socket.socket: 14:35:29 """Connect to *address* and return the socket object. 14:35:29 14:35:29 Convenience function. Connect to *address* (a 2-tuple ``(host, 14:35:29 port)``) and return the socket object. Passing the optional 14:35:29 *timeout* parameter will set the timeout on the socket instance 14:35:29 before attempting to connect. If no *timeout* is supplied, the 14:35:29 global default timeout setting returned by :func:`socket.getdefaulttimeout` 14:35:29 is used. If *source_address* is set it must be a tuple of (host, port) 14:35:29 for the socket to bind as a source address before making the connection. 14:35:29 An host of '' or port 0 tells the OS to use the default. 14:35:29 """ 14:35:29 14:35:29 host, port = address 14:35:29 if host.startswith("["): 14:35:29 host = host.strip("[]") 14:35:29 err = None 14:35:29 14:35:29 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 14:35:29 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 14:35:29 # The original create_connection function always returns all records. 
14:35:29 family = allowed_gai_family() 14:35:29 14:35:29 try: 14:35:29 host.encode("idna") 14:35:29 except UnicodeError: 14:35:29 raise LocationParseError(f"'{host}', label empty or too long") from None 14:35:29 14:35:29 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 14:35:29 af, socktype, proto, canonname, sa = res 14:35:29 sock = None 14:35:29 try: 14:35:29 sock = socket.socket(af, socktype, proto) 14:35:29 14:35:29 # If provided, set socket level options before connecting. 14:35:29 _set_socket_options(sock, socket_options) 14:35:29 14:35:29 if timeout is not _DEFAULT_TIMEOUT: 14:35:29 sock.settimeout(timeout) 14:35:29 if source_address: 14:35:29 sock.bind(source_address) 14:35:29 > sock.connect(sa) 14:35:29 E ConnectionRefusedError: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 method = 'PUT' 14:35:29 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 14:35:29 body = '{"node": [{"node-id": "ROADMA01", "netconf-node-topology:host": "127.0.0.1", "netconf-node-topology:port": "17831", "...lis": "60000", "netconf-node-topology:max-connection-attempts": "0", "netconf-node-topology:keepalive-delay": "120"}]}' 14:35:29 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '589', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 14:35:29 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 redirect = False, assert_same_host = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 14:35:29 release_conn = False, chunked = False, body_pos = None, preload_content = False 14:35:29 decode_content = False, response_kw = {} 14:35:29 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) 14:35:29 destination_scheme = None, conn = None, release_this_conn = True 14:35:29 http_tunnel_required = False, err = None, clean_exit = False 14:35:29 14:35:29 def urlopen( # type: ignore[override] 14:35:29 self, 14:35:29 method: str, 14:35:29 url: str, 14:35:29 body: _TYPE_BODY | None = None, 14:35:29 headers: typing.Mapping[str, str] | None = None, 14:35:29 retries: Retry | bool | int | None = None, 14:35:29 redirect: bool = True, 14:35:29 assert_same_host: bool = True, 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 pool_timeout: int | None = None, 14:35:29 release_conn: bool | None = None, 14:35:29 chunked: bool = False, 14:35:29 body_pos: _TYPE_BODY_POSITION | None = None, 14:35:29 preload_content: bool = True, 14:35:29 decode_content: bool = True, 14:35:29 **response_kw: typing.Any, 14:35:29 ) -> BaseHTTPResponse: 14:35:29 """ 14:35:29 Get a connection from the pool and perform an HTTP request. This is the 14:35:29 lowest level call for making a request, so you'll need to specify all 14:35:29 the raw details. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 More commonly, it's appropriate to use a convenience method 14:35:29 such as :meth:`request`. 14:35:29 14:35:29 .. 
note:: 14:35:29 14:35:29 `release_conn` will only behave as expected if 14:35:29 `preload_content=False` because we want to make 14:35:29 `preload_content=False` the default behaviour someday soon without 14:35:29 breaking backwards compatibility. 14:35:29 14:35:29 :param method: 14:35:29 HTTP request method (such as GET, POST, PUT, etc.) 14:35:29 14:35:29 :param url: 14:35:29 The URL to perform the request on. 14:35:29 14:35:29 :param body: 14:35:29 Data to send in the request body, either :class:`str`, :class:`bytes`, 14:35:29 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 14:35:29 14:35:29 :param headers: 14:35:29 Dictionary of custom headers to send, such as User-Agent, 14:35:29 If-None-Match, etc. If None, pool headers are used. If provided, 14:35:29 these headers completely replace any pool-specific headers. 14:35:29 14:35:29 :param retries: 14:35:29 Configure the number of retries to allow before raising a 14:35:29 :class:`~urllib3.exceptions.MaxRetryError` exception. 14:35:29 14:35:29 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 14:35:29 :class:`~urllib3.util.retry.Retry` object for fine-grained control 14:35:29 over different types of retries. 14:35:29 Pass an integer number to retry connection errors that many times, 14:35:29 but no other types of errors. Pass zero to never retry. 14:35:29 14:35:29 If ``False``, then retries are disabled and any exception is raised 14:35:29 immediately. Also, instead of raising a MaxRetryError on redirects, 14:35:29 the redirect response will be returned. 14:35:29 14:35:29 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 14:35:29 14:35:29 :param redirect: 14:35:29 If True, automatically handle redirects (status codes 301, 302, 14:35:29 303, 307, 308). Each redirect counts as a retry. Disabling retries 14:35:29 will disable redirect, too. 14:35:29 14:35:29 :param assert_same_host: 14:35:29 If ``True``, will make sure that the host of the pool requests is 14:35:29 consistent else will raise HostChangedError. When ``False``, you can 14:35:29 use the pool on an HTTP proxy and request foreign hosts. 14:35:29 14:35:29 :param timeout: 14:35:29 If specified, overrides the default timeout for this one 14:35:29 request. It may be a float (in seconds) or an instance of 14:35:29 :class:`urllib3.util.Timeout`. 14:35:29 14:35:29 :param pool_timeout: 14:35:29 If set and the pool is set to block=True, then this method will 14:35:29 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 14:35:29 connection is available within the time period. 14:35:29 14:35:29 :param bool preload_content: 14:35:29 If True, the response's body will be preloaded into memory. 14:35:29 14:35:29 :param bool decode_content: 14:35:29 If True, will attempt to decode the body based on the 14:35:29 'content-encoding' header. 14:35:29 14:35:29 :param release_conn: 14:35:29 If False, then the urlopen call will not release the connection 14:35:29 back into the pool once a response is received (but will release if 14:35:29 you read the entire contents of the response such as when 14:35:29 `preload_content=True`). This is useful if you're not preloading 14:35:29 the response's content immediately. You will need to call 14:35:29 ``r.release_conn()`` on the response ``r`` to return the connection 14:35:29 back into the pool. If None, it takes the value of ``preload_content`` 14:35:29 which defaults to ``True``. 
14:35:29 14:35:29 :param bool chunked: 14:35:29 If True, urllib3 will send the body using chunked transfer 14:35:29 encoding. Otherwise, urllib3 will send the body using the standard 14:35:29 content-length form. Defaults to False. 14:35:29 14:35:29 :param int body_pos: 14:35:29 Position to seek to in file-like body in the event of a retry or 14:35:29 redirect. Typically this won't need to be set because urllib3 will 14:35:29 auto-populate the value when needed. 14:35:29 """ 14:35:29 parsed_url = parse_url(url) 14:35:29 destination_scheme = parsed_url.scheme 14:35:29 14:35:29 if headers is None: 14:35:29 headers = self.headers 14:35:29 14:35:29 if not isinstance(retries, Retry): 14:35:29 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 14:35:29 14:35:29 if release_conn is None: 14:35:29 release_conn = preload_content 14:35:29 14:35:29 # Check host 14:35:29 if assert_same_host and not self.is_same_host(url): 14:35:29 raise HostChangedError(self, url, retries) 14:35:29 14:35:29 # Ensure that the URL we're connecting to is properly encoded 14:35:29 if url.startswith("/"): 14:35:29 url = to_str(_encode_target(url)) 14:35:29 else: 14:35:29 url = to_str(parsed_url.url) 14:35:29 14:35:29 conn = None 14:35:29 14:35:29 # Track whether `conn` needs to be released before 14:35:29 # returning/raising/recursing. Update this variable if necessary, and 14:35:29 # leave `release_conn` constant throughout the function. That way, if 14:35:29 # the function recurses, the original value of `release_conn` will be 14:35:29 # passed down into the recursive call, and its value will be respected. 14:35:29 # 14:35:29 # See issue #651 [1] for details. 14:35:29 # 14:35:29 # [1] 14:35:29 release_this_conn = release_conn 14:35:29 14:35:29 http_tunnel_required = connection_requires_http_tunnel( 14:35:29 self.proxy, self.proxy_config, destination_scheme 14:35:29 ) 14:35:29 14:35:29 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 14:35:29 # have to copy the headers dict so we can safely change it without those 14:35:29 # changes being reflected in anyone else's copy. 14:35:29 if not http_tunnel_required: 14:35:29 headers = headers.copy() # type: ignore[attr-defined] 14:35:29 headers.update(self.proxy_headers) # type: ignore[union-attr] 14:35:29 14:35:29 # Must keep the exception bound to a separate variable or else Python 3 14:35:29 # complains about UnboundLocalError. 14:35:29 err = None 14:35:29 14:35:29 # Keep track of whether we cleanly exited the except block. This 14:35:29 # ensures we do proper cleanup in finally. 14:35:29 clean_exit = False 14:35:29 14:35:29 # Rewind body position, if needed. Record current position 14:35:29 # for future rewinds in the event of a redirect/retry. 14:35:29 body_pos = set_file_position(body, body_pos) 14:35:29 14:35:29 try: 14:35:29 # Request a connection from the queue. 14:35:29 timeout_obj = self._get_timeout(timeout) 14:35:29 conn = self._get_conn(timeout=pool_timeout) 14:35:29 14:35:29 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 14:35:29 14:35:29 # Is this a closed/new connection that requires CONNECT tunnelling? 
14:35:29 if self.proxy is not None and http_tunnel_required and conn.is_closed: 14:35:29 try: 14:35:29 self._prepare_proxy(conn) 14:35:29 except (BaseSSLError, OSError, SocketTimeout) as e: 14:35:29 self._raise_timeout( 14:35:29 err=e, url=self.proxy.url, timeout_value=conn.timeout 14:35:29 ) 14:35:29 raise 14:35:29 14:35:29 # If we're going to release the connection in ``finally:``, then 14:35:29 # the response doesn't need to know about the connection. Otherwise 14:35:29 # it will also try to release it and we'll have a double-release 14:35:29 # mess. 14:35:29 response_conn = conn if not release_conn else None 14:35:29 14:35:29 # Make the request on the HTTPConnection object 14:35:29 > response = self._make_request( 14:35:29 conn, 14:35:29 method, 14:35:29 url, 14:35:29 timeout=timeout_obj, 14:35:29 body=body, 14:35:29 headers=headers, 14:35:29 chunked=chunked, 14:35:29 retries=retries, 14:35:29 response_conn=response_conn, 14:35:29 preload_content=preload_content, 14:35:29 decode_content=decode_content, 14:35:29 **response_kw, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 14:35:29 conn.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 14:35:29 self.endheaders() 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 14:35:29 self._send_output(message_body, encode_chunked=encode_chunked) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 14:35:29 self.send(msg) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 14:35:29 self.connect() 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 14:35:29 self.sock = self._new_conn() 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 except socket.gaierror as e: 14:35:29 raise NameResolutionError(self.host, self, e) from e 14:35:29 except SocketTimeout as e: 14:35:29 raise ConnectTimeoutError( 14:35:29 self, 14:35:29 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 14:35:29 ) from e 14:35:29 14:35:29 except OSError as e: 14:35:29 > raise NewConnectionError( 14:35:29 self, f"Failed to establish a new connection: {e}" 14:35:29 ) from e 14:35:29 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 
14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 > resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 14:35:29 retries = retries.increment( 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 method = 'PUT' 14:35:29 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 14:35:29 response = None 14:35:29 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 14:35:29 _pool = 14:35:29 _stacktrace = 14:35:29 14:35:29 def increment( 14:35:29 self, 14:35:29 method: str | None = None, 14:35:29 url: str | None = None, 14:35:29 response: BaseHTTPResponse | None = None, 14:35:29 error: Exception | None = None, 14:35:29 _pool: ConnectionPool | None = None, 14:35:29 _stacktrace: TracebackType | None = None, 14:35:29 ) -> Self: 14:35:29 """Return a new Retry object with incremented retry counters. 14:35:29 14:35:29 :param response: A response object, or None, if the server did not 14:35:29 return a response. 14:35:29 :type response: :class:`~urllib3.response.BaseHTTPResponse` 14:35:29 :param Exception error: An error encountered during the request, or 14:35:29 None if the response was received successfully. 14:35:29 14:35:29 :return: A new ``Retry`` object. 14:35:29 """ 14:35:29 if self.total is False and error: 14:35:29 # Disabled, indicate to re-raise the error. 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 14:35:29 total = self.total 14:35:29 if total is not None: 14:35:29 total -= 1 14:35:29 14:35:29 connect = self.connect 14:35:29 read = self.read 14:35:29 redirect = self.redirect 14:35:29 status_count = self.status 14:35:29 other = self.other 14:35:29 cause = "unknown" 14:35:29 status = None 14:35:29 redirect_location = None 14:35:29 14:35:29 if error and self._is_connection_error(error): 14:35:29 # Connect retry? 14:35:29 if connect is False: 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif connect is not None: 14:35:29 connect -= 1 14:35:29 14:35:29 elif error and self._is_read_error(error): 14:35:29 # Read retry? 14:35:29 if read is False or method is None or not self._is_method_retryable(method): 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif read is not None: 14:35:29 read -= 1 14:35:29 14:35:29 elif error: 14:35:29 # Other retry? 14:35:29 if other is not None: 14:35:29 other -= 1 14:35:29 14:35:29 elif response and response.get_redirect_location(): 14:35:29 # Redirect retry? 
14:35:29 if redirect is not None: 14:35:29 redirect -= 1 14:35:29 cause = "too many redirects" 14:35:29 response_redirect_location = response.get_redirect_location() 14:35:29 if response_redirect_location: 14:35:29 redirect_location = response_redirect_location 14:35:29 status = response.status 14:35:29 14:35:29 else: 14:35:29 # Incrementing because of a server error like a 500 in 14:35:29 # status_forcelist and the given method is in the allowed_methods 14:35:29 cause = ResponseError.GENERIC_ERROR 14:35:29 if response and response.status: 14:35:29 if status_count is not None: 14:35:29 status_count -= 1 14:35:29 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 14:35:29 status = response.status 14:35:29 14:35:29 history = self.history + ( 14:35:29 RequestHistory(method, url, error, status, redirect_location), 14:35:29 ) 14:35:29 14:35:29 new_retry = self.new( 14:35:29 total=total, 14:35:29 connect=connect, 14:35:29 read=read, 14:35:29 redirect=redirect, 14:35:29 status=status_count, 14:35:29 other=other, 14:35:29 history=history, 14:35:29 ) 14:35:29 14:35:29 if new_retry.is_exhausted(): 14:35:29 reason = error or ResponseError(cause) 14:35:29 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 14:35:29 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 14:35:29 14:35:29 During handling of the above exception, another exception occurred: 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_01_rdm_device_connection(self): 14:35:29 > response = test_utils.mount_device("ROADMA01", ('roadma', self.NODE_VERSION)) 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:49: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 transportpce_tests/common/test_utils.py:344: in mount_device 14:35:29 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 14:35:29 transportpce_tests/common/test_utils.py:124: in put_request 14:35:29 return requests.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 14:35:29 return session.request(method=method, url=url, **kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 14:35:29 resp = self.send(prep, **send_kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 14:35:29 r = adapter.send(request, **kwargs) 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 
14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 except (ProtocolError, OSError) as err: 14:35:29 raise ConnectionError(err, request=request) 14:35:29 14:35:29 except MaxRetryError as e: 14:35:29 if isinstance(e.reason, ConnectTimeoutError): 14:35:29 # TODO: Remove this in 3.0.0: see #2811 14:35:29 if not isinstance(e.reason, NewConnectionError): 14:35:29 raise ConnectTimeout(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, ResponseError): 14:35:29 raise RetryError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _ProxyError): 14:35:29 raise ProxyError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _SSLError): 14:35:29 # This branch is for urllib3 v1.22 and later. 14:35:29 raise SSLError(e, request=request) 14:35:29 14:35:29 > raise ConnectionError(e, request=request) 14:35:29 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 14:35:29 ---------------------------- Captured stdout setup ----------------------------- 14:35:29 starting OpenDaylight... 
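Unlike the tapi failures above, this 1.2.1 port-mapping failure never reaches an HTTP status at all: test_utils.mount_device PUTs the ROADMA01 netconf-node-topology entry to localhost:8182, the TCP connection is refused, urllib3 turns that into NewConnectionError, the Retry policy (total=0) gives up immediately with MaxRetryError, and requests re-raises it as ConnectionError. Below is a minimal sketch of that mount request and the resulting exception, using the host, port, URL, payload fields and admin credentials visible in the traceback; fields not shown there (such as username/password inside the node body) are assumptions, and truncated fields are omitted.

import requests

RESTCONF_URL = ("http://localhost:8182/rests/data/network-topology:network-topology/"
                "topology=topology-netconf/node=ROADMA01")

# netconf-node-topology payload following the logged request body
node_body = {
    "node": [{
        "node-id": "ROADMA01",
        "netconf-node-topology:host": "127.0.0.1",
        "netconf-node-topology:port": "17831",
        "netconf-node-topology:username": "admin",            # assumed, not visible in the log
        "netconf-node-topology:password": "admin",            # assumed, not visible in the log
        "netconf-node-topology:max-connection-attempts": "0",
        "netconf-node-topology:keepalive-delay": "120",
    }]
}

try:
    resp = requests.put(RESTCONF_URL, json=node_body, auth=("admin", "admin"), timeout=10)
    print(resp.status_code)
except requests.exceptions.ConnectionError as exc:
    # Nothing is listening on 8182, so connect() fails with Errno 111; with retries
    # exhausted immediately (Retry total=0), requests surfaces it as ConnectionError,
    # which is the exception chain shown in this traceback.
    print(f"controller RESTCONF endpoint not reachable: {exc}")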
14:35:29 starting KARAF TransportPCE build... 14:35:29 Searching for pattern 'Transportpce controller started' in karaf.log... Pattern found! OpenDaylight started ! 14:35:29 starting simulator xpdra in OpenROADM device version 1.2.1... 14:35:29 Searching for pattern 'Data tree change listeners registered' in xpdra-121.log... Pattern found! simulator for xpdra started 14:35:29 starting simulator roadma in OpenROADM device version 1.2.1... 14:35:29 Searching for pattern 'Data tree change listeners registered' in roadma-121.log... Pattern found! simulator for roadma started 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_01_rdm_device_connection 14:35:29 _________ TransportPCEPortMappingTesting.test_02_rdm_device_connected __________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 > sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 14:35:29 raise err 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 address = ('localhost', 8182), timeout = 10, source_address = None 14:35:29 socket_options = [(6, 1, 1)] 14:35:29 14:35:29 def create_connection( 14:35:29 address: tuple[str, int], 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 source_address: tuple[str, int] | None = None, 14:35:29 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 14:35:29 ) -> socket.socket: 14:35:29 """Connect to *address* and return the socket object. 14:35:29 14:35:29 Convenience function. Connect to *address* (a 2-tuple ``(host, 14:35:29 port)``) and return the socket object. Passing the optional 14:35:29 *timeout* parameter will set the timeout on the socket instance 14:35:29 before attempting to connect. If no *timeout* is supplied, the 14:35:29 global default timeout setting returned by :func:`socket.getdefaulttimeout` 14:35:29 is used. If *source_address* is set it must be a tuple of (host, port) 14:35:29 for the socket to bind as a source address before making the connection. 14:35:29 An host of '' or port 0 tells the OS to use the default. 14:35:29 """ 14:35:29 14:35:29 host, port = address 14:35:29 if host.startswith("["): 14:35:29 host = host.strip("[]") 14:35:29 err = None 14:35:29 14:35:29 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 14:35:29 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 14:35:29 # The original create_connection function always returns all records. 
14:35:29 family = allowed_gai_family() 14:35:29 14:35:29 try: 14:35:29 host.encode("idna") 14:35:29 except UnicodeError: 14:35:29 raise LocationParseError(f"'{host}', label empty or too long") from None 14:35:29 14:35:29 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 14:35:29 af, socktype, proto, canonname, sa = res 14:35:29 sock = None 14:35:29 try: 14:35:29 sock = socket.socket(af, socktype, proto) 14:35:29 14:35:29 # If provided, set socket level options before connecting. 14:35:29 _set_socket_options(sock, socket_options) 14:35:29 14:35:29 if timeout is not _DEFAULT_TIMEOUT: 14:35:29 sock.settimeout(timeout) 14:35:29 if source_address: 14:35:29 sock.bind(source_address) 14:35:29 > sock.connect(sa) 14:35:29 E ConnectionRefusedError: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 14:35:29 body = None 14:35:29 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 14:35:29 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 redirect = False, assert_same_host = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 14:35:29 release_conn = False, chunked = False, body_pos = None, preload_content = False 14:35:29 decode_content = False, response_kw = {} 14:35:29 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) 14:35:29 destination_scheme = None, conn = None, release_this_conn = True 14:35:29 http_tunnel_required = False, err = None, clean_exit = False 14:35:29 14:35:29 def urlopen( # type: ignore[override] 14:35:29 self, 14:35:29 method: str, 14:35:29 url: str, 14:35:29 body: _TYPE_BODY | None = None, 14:35:29 headers: typing.Mapping[str, str] | None = None, 14:35:29 retries: Retry | bool | int | None = None, 14:35:29 redirect: bool = True, 14:35:29 assert_same_host: bool = True, 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 pool_timeout: int | None = None, 14:35:29 release_conn: bool | None = None, 14:35:29 chunked: bool = False, 14:35:29 body_pos: _TYPE_BODY_POSITION | None = None, 14:35:29 preload_content: bool = True, 14:35:29 decode_content: bool = True, 14:35:29 **response_kw: typing.Any, 14:35:29 ) -> BaseHTTPResponse: 14:35:29 """ 14:35:29 Get a connection from the pool and perform an HTTP request. This is the 14:35:29 lowest level call for making a request, so you'll need to specify all 14:35:29 the raw details. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 More commonly, it's appropriate to use a convenience method 14:35:29 such as :meth:`request`. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 `release_conn` will only behave as expected if 14:35:29 `preload_content=False` because we want to make 14:35:29 `preload_content=False` the default behaviour someday soon without 14:35:29 breaking backwards compatibility. 14:35:29 14:35:29 :param method: 14:35:29 HTTP request method (such as GET, POST, PUT, etc.) 
14:35:29 14:35:29 :param url: 14:35:29 The URL to perform the request on. 14:35:29 14:35:29 :param body: 14:35:29 Data to send in the request body, either :class:`str`, :class:`bytes`, 14:35:29 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 14:35:29 14:35:29 :param headers: 14:35:29 Dictionary of custom headers to send, such as User-Agent, 14:35:29 If-None-Match, etc. If None, pool headers are used. If provided, 14:35:29 these headers completely replace any pool-specific headers. 14:35:29 14:35:29 :param retries: 14:35:29 Configure the number of retries to allow before raising a 14:35:29 :class:`~urllib3.exceptions.MaxRetryError` exception. 14:35:29 14:35:29 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 14:35:29 :class:`~urllib3.util.retry.Retry` object for fine-grained control 14:35:29 over different types of retries. 14:35:29 Pass an integer number to retry connection errors that many times, 14:35:29 but no other types of errors. Pass zero to never retry. 14:35:29 14:35:29 If ``False``, then retries are disabled and any exception is raised 14:35:29 immediately. Also, instead of raising a MaxRetryError on redirects, 14:35:29 the redirect response will be returned. 14:35:29 14:35:29 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 14:35:29 14:35:29 :param redirect: 14:35:29 If True, automatically handle redirects (status codes 301, 302, 14:35:29 303, 307, 308). Each redirect counts as a retry. Disabling retries 14:35:29 will disable redirect, too. 14:35:29 14:35:29 :param assert_same_host: 14:35:29 If ``True``, will make sure that the host of the pool requests is 14:35:29 consistent else will raise HostChangedError. When ``False``, you can 14:35:29 use the pool on an HTTP proxy and request foreign hosts. 14:35:29 14:35:29 :param timeout: 14:35:29 If specified, overrides the default timeout for this one 14:35:29 request. It may be a float (in seconds) or an instance of 14:35:29 :class:`urllib3.util.Timeout`. 14:35:29 14:35:29 :param pool_timeout: 14:35:29 If set and the pool is set to block=True, then this method will 14:35:29 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 14:35:29 connection is available within the time period. 14:35:29 14:35:29 :param bool preload_content: 14:35:29 If True, the response's body will be preloaded into memory. 14:35:29 14:35:29 :param bool decode_content: 14:35:29 If True, will attempt to decode the body based on the 14:35:29 'content-encoding' header. 14:35:29 14:35:29 :param release_conn: 14:35:29 If False, then the urlopen call will not release the connection 14:35:29 back into the pool once a response is received (but will release if 14:35:29 you read the entire contents of the response such as when 14:35:29 `preload_content=True`). This is useful if you're not preloading 14:35:29 the response's content immediately. You will need to call 14:35:29 ``r.release_conn()`` on the response ``r`` to return the connection 14:35:29 back into the pool. If None, it takes the value of ``preload_content`` 14:35:29 which defaults to ``True``. 14:35:29 14:35:29 :param bool chunked: 14:35:29 If True, urllib3 will send the body using chunked transfer 14:35:29 encoding. Otherwise, urllib3 will send the body using the standard 14:35:29 content-length form. Defaults to False. 14:35:29 14:35:29 :param int body_pos: 14:35:29 Position to seek to in file-like body in the event of a retry or 14:35:29 redirect. 
Typically this won't need to be set because urllib3 will 14:35:29 auto-populate the value when needed. 14:35:29 """ 14:35:29 parsed_url = parse_url(url) 14:35:29 destination_scheme = parsed_url.scheme 14:35:29 14:35:29 if headers is None: 14:35:29 headers = self.headers 14:35:29 14:35:29 if not isinstance(retries, Retry): 14:35:29 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 14:35:29 14:35:29 if release_conn is None: 14:35:29 release_conn = preload_content 14:35:29 14:35:29 # Check host 14:35:29 if assert_same_host and not self.is_same_host(url): 14:35:29 raise HostChangedError(self, url, retries) 14:35:29 14:35:29 # Ensure that the URL we're connecting to is properly encoded 14:35:29 if url.startswith("/"): 14:35:29 url = to_str(_encode_target(url)) 14:35:29 else: 14:35:29 url = to_str(parsed_url.url) 14:35:29 14:35:29 conn = None 14:35:29 14:35:29 # Track whether `conn` needs to be released before 14:35:29 # returning/raising/recursing. Update this variable if necessary, and 14:35:29 # leave `release_conn` constant throughout the function. That way, if 14:35:29 # the function recurses, the original value of `release_conn` will be 14:35:29 # passed down into the recursive call, and its value will be respected. 14:35:29 # 14:35:29 # See issue #651 [1] for details. 14:35:29 # 14:35:29 # [1] 14:35:29 release_this_conn = release_conn 14:35:29 14:35:29 http_tunnel_required = connection_requires_http_tunnel( 14:35:29 self.proxy, self.proxy_config, destination_scheme 14:35:29 ) 14:35:29 14:35:29 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 14:35:29 # have to copy the headers dict so we can safely change it without those 14:35:29 # changes being reflected in anyone else's copy. 14:35:29 if not http_tunnel_required: 14:35:29 headers = headers.copy() # type: ignore[attr-defined] 14:35:29 headers.update(self.proxy_headers) # type: ignore[union-attr] 14:35:29 14:35:29 # Must keep the exception bound to a separate variable or else Python 3 14:35:29 # complains about UnboundLocalError. 14:35:29 err = None 14:35:29 14:35:29 # Keep track of whether we cleanly exited the except block. This 14:35:29 # ensures we do proper cleanup in finally. 14:35:29 clean_exit = False 14:35:29 14:35:29 # Rewind body position, if needed. Record current position 14:35:29 # for future rewinds in the event of a redirect/retry. 14:35:29 body_pos = set_file_position(body, body_pos) 14:35:29 14:35:29 try: 14:35:29 # Request a connection from the queue. 14:35:29 timeout_obj = self._get_timeout(timeout) 14:35:29 conn = self._get_conn(timeout=pool_timeout) 14:35:29 14:35:29 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 14:35:29 14:35:29 # Is this a closed/new connection that requires CONNECT tunnelling? 14:35:29 if self.proxy is not None and http_tunnel_required and conn.is_closed: 14:35:29 try: 14:35:29 self._prepare_proxy(conn) 14:35:29 except (BaseSSLError, OSError, SocketTimeout) as e: 14:35:29 self._raise_timeout( 14:35:29 err=e, url=self.proxy.url, timeout_value=conn.timeout 14:35:29 ) 14:35:29 raise 14:35:29 14:35:29 # If we're going to release the connection in ``finally:``, then 14:35:29 # the response doesn't need to know about the connection. Otherwise 14:35:29 # it will also try to release it and we'll have a double-release 14:35:29 # mess. 
14:35:29 response_conn = conn if not release_conn else None 14:35:29 14:35:29 # Make the request on the HTTPConnection object 14:35:29 > response = self._make_request( 14:35:29 conn, 14:35:29 method, 14:35:29 url, 14:35:29 timeout=timeout_obj, 14:35:29 body=body, 14:35:29 headers=headers, 14:35:29 chunked=chunked, 14:35:29 retries=retries, 14:35:29 response_conn=response_conn, 14:35:29 preload_content=preload_content, 14:35:29 decode_content=decode_content, 14:35:29 **response_kw, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 14:35:29 conn.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 14:35:29 self.endheaders() 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 14:35:29 self._send_output(message_body, encode_chunked=encode_chunked) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 14:35:29 self.send(msg) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 14:35:29 self.connect() 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 14:35:29 self.sock = self._new_conn() 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 except socket.gaierror as e: 14:35:29 raise NameResolutionError(self.host, self, e) from e 14:35:29 except SocketTimeout as e: 14:35:29 raise ConnectTimeoutError( 14:35:29 self, 14:35:29 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 14:35:29 ) from e 14:35:29 14:35:29 except OSError as e: 14:35:29 > raise NewConnectionError( 14:35:29 self, f"Failed to establish a new connection: {e}" 14:35:29 ) from e 14:35:29 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 
14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 > resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 14:35:29 retries = retries.increment( 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 14:35:29 response = None 14:35:29 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 14:35:29 _pool = 14:35:29 _stacktrace = 14:35:29 14:35:29 def increment( 14:35:29 self, 14:35:29 method: str | None = None, 14:35:29 url: str | None = None, 14:35:29 response: BaseHTTPResponse | None = None, 14:35:29 error: Exception | None = None, 14:35:29 _pool: ConnectionPool | None = None, 14:35:29 _stacktrace: TracebackType | None = None, 14:35:29 ) -> Self: 14:35:29 """Return a new Retry object with incremented retry counters. 14:35:29 14:35:29 :param response: A response object, or None, if the server did not 14:35:29 return a response. 14:35:29 :type response: :class:`~urllib3.response.BaseHTTPResponse` 14:35:29 :param Exception error: An error encountered during the request, or 14:35:29 None if the response was received successfully. 
14:35:29 14:35:29 :return: A new ``Retry`` object. 14:35:29 """ 14:35:29 if self.total is False and error: 14:35:29 # Disabled, indicate to re-raise the error. 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 14:35:29 total = self.total 14:35:29 if total is not None: 14:35:29 total -= 1 14:35:29 14:35:29 connect = self.connect 14:35:29 read = self.read 14:35:29 redirect = self.redirect 14:35:29 status_count = self.status 14:35:29 other = self.other 14:35:29 cause = "unknown" 14:35:29 status = None 14:35:29 redirect_location = None 14:35:29 14:35:29 if error and self._is_connection_error(error): 14:35:29 # Connect retry? 14:35:29 if connect is False: 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif connect is not None: 14:35:29 connect -= 1 14:35:29 14:35:29 elif error and self._is_read_error(error): 14:35:29 # Read retry? 14:35:29 if read is False or method is None or not self._is_method_retryable(method): 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif read is not None: 14:35:29 read -= 1 14:35:29 14:35:29 elif error: 14:35:29 # Other retry? 14:35:29 if other is not None: 14:35:29 other -= 1 14:35:29 14:35:29 elif response and response.get_redirect_location(): 14:35:29 # Redirect retry? 14:35:29 if redirect is not None: 14:35:29 redirect -= 1 14:35:29 cause = "too many redirects" 14:35:29 response_redirect_location = response.get_redirect_location() 14:35:29 if response_redirect_location: 14:35:29 redirect_location = response_redirect_location 14:35:29 status = response.status 14:35:29 14:35:29 else: 14:35:29 # Incrementing because of a server error like a 500 in 14:35:29 # status_forcelist and the given method is in the allowed_methods 14:35:29 cause = ResponseError.GENERIC_ERROR 14:35:29 if response and response.status: 14:35:29 if status_count is not None: 14:35:29 status_count -= 1 14:35:29 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 14:35:29 status = response.status 14:35:29 14:35:29 history = self.history + ( 14:35:29 RequestHistory(method, url, error, status, redirect_location), 14:35:29 ) 14:35:29 14:35:29 new_retry = self.new( 14:35:29 total=total, 14:35:29 connect=connect, 14:35:29 read=read, 14:35:29 redirect=redirect, 14:35:29 status=status_count, 14:35:29 other=other, 14:35:29 history=history, 14:35:29 ) 14:35:29 14:35:29 if new_retry.is_exhausted(): 14:35:29 reason = error or ResponseError(cause) 14:35:29 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 14:35:29 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 14:35:29 14:35:29 During handling of the above exception, another exception occurred: 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_02_rdm_device_connected(self): 14:35:29 > response = test_utils.check_device_connection("ROADMA01") 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:53: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 transportpce_tests/common/test_utils.py:369: in check_device_connection 14:35:29 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 14:35:29 
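# Illustrative sketch (not part of the test run): the frame above shows
# check_device_connection() issuing a plain RESTCONF GET. The failing request
# can be reproduced outside the test suite roughly like this; host, port,
# admin/admin credentials and the 10 s timeout are taken from the values
# printed elsewhere in this traceback.
import requests

resp = requests.get(
    "http://localhost:8182/rests/data/network-topology:network-topology"
    "/topology=topology-netconf/node=ROADMA01?content=nonconfig",
    auth=("admin", "admin"),
    timeout=10,
)
print(resp.status_code)
# With nothing listening on localhost:8182 this raises
# requests.exceptions.ConnectionError, just as reported in the log.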
transportpce_tests/common/test_utils.py:116: in get_request 14:35:29 return requests.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 14:35:29 return session.request(method=method, url=url, **kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 14:35:29 resp = self.send(prep, **send_kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 14:35:29 r = adapter.send(request, **kwargs) 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 
14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 except (ProtocolError, OSError) as err: 14:35:29 raise ConnectionError(err, request=request) 14:35:29 14:35:29 except MaxRetryError as e: 14:35:29 if isinstance(e.reason, ConnectTimeoutError): 14:35:29 # TODO: Remove this in 3.0.0: see #2811 14:35:29 if not isinstance(e.reason, NewConnectionError): 14:35:29 raise ConnectTimeout(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, ResponseError): 14:35:29 raise RetryError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _ProxyError): 14:35:29 raise ProxyError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _SSLError): 14:35:29 # This branch is for urllib3 v1.22 and later. 14:35:29 raise SSLError(e, request=request) 14:35:29 14:35:29 > raise ConnectionError(e, request=request) 14:35:29 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_02_rdm_device_connected 14:35:29 _________ TransportPCEPortMappingTesting.test_03_rdm_portmapping_info __________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 > sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 14:35:29 raise err 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 address = ('localhost', 8182), timeout = 10, source_address = None 14:35:29 socket_options = [(6, 1, 1)] 14:35:29 14:35:29 def create_connection( 14:35:29 address: tuple[str, int], 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 source_address: tuple[str, int] | None = None, 14:35:29 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 14:35:29 ) -> socket.socket: 14:35:29 """Connect to *address* and return the socket object. 14:35:29 14:35:29 Convenience function. Connect to *address* (a 2-tuple ``(host, 14:35:29 port)``) and return the socket object. 
Passing the optional 14:35:29 *timeout* parameter will set the timeout on the socket instance 14:35:29 before attempting to connect. If no *timeout* is supplied, the 14:35:29 global default timeout setting returned by :func:`socket.getdefaulttimeout` 14:35:29 is used. If *source_address* is set it must be a tuple of (host, port) 14:35:29 for the socket to bind as a source address before making the connection. 14:35:29 An host of '' or port 0 tells the OS to use the default. 14:35:29 """ 14:35:29 14:35:29 host, port = address 14:35:29 if host.startswith("["): 14:35:29 host = host.strip("[]") 14:35:29 err = None 14:35:29 14:35:29 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 14:35:29 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 14:35:29 # The original create_connection function always returns all records. 14:35:29 family = allowed_gai_family() 14:35:29 14:35:29 try: 14:35:29 host.encode("idna") 14:35:29 except UnicodeError: 14:35:29 raise LocationParseError(f"'{host}', label empty or too long") from None 14:35:29 14:35:29 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 14:35:29 af, socktype, proto, canonname, sa = res 14:35:29 sock = None 14:35:29 try: 14:35:29 sock = socket.socket(af, socktype, proto) 14:35:29 14:35:29 # If provided, set socket level options before connecting. 14:35:29 _set_socket_options(sock, socket_options) 14:35:29 14:35:29 if timeout is not _DEFAULT_TIMEOUT: 14:35:29 sock.settimeout(timeout) 14:35:29 if source_address: 14:35:29 sock.bind(source_address) 14:35:29 > sock.connect(sa) 14:35:29 E ConnectionRefusedError: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 14:35:29 body = None 14:35:29 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 14:35:29 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 redirect = False, assert_same_host = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 14:35:29 release_conn = False, chunked = False, body_pos = None, preload_content = False 14:35:29 decode_content = False, response_kw = {} 14:35:29 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None) 14:35:29 destination_scheme = None, conn = None, release_this_conn = True 14:35:29 http_tunnel_required = False, err = None, clean_exit = False 14:35:29 14:35:29 def urlopen( # type: ignore[override] 14:35:29 self, 14:35:29 method: str, 14:35:29 url: str, 14:35:29 body: _TYPE_BODY | None = None, 14:35:29 headers: typing.Mapping[str, str] | None = None, 14:35:29 retries: Retry | bool | int | None = None, 14:35:29 redirect: bool = True, 14:35:29 assert_same_host: bool = True, 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 pool_timeout: int | None = None, 14:35:29 release_conn: bool | None = None, 14:35:29 chunked: bool = False, 14:35:29 body_pos: _TYPE_BODY_POSITION | None = None, 14:35:29 preload_content: bool = True, 
14:35:29 decode_content: bool = True, 14:35:29 **response_kw: typing.Any, 14:35:29 ) -> BaseHTTPResponse: 14:35:29 """ 14:35:29 Get a connection from the pool and perform an HTTP request. This is the 14:35:29 lowest level call for making a request, so you'll need to specify all 14:35:29 the raw details. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 More commonly, it's appropriate to use a convenience method 14:35:29 such as :meth:`request`. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 `release_conn` will only behave as expected if 14:35:29 `preload_content=False` because we want to make 14:35:29 `preload_content=False` the default behaviour someday soon without 14:35:29 breaking backwards compatibility. 14:35:29 14:35:29 :param method: 14:35:29 HTTP request method (such as GET, POST, PUT, etc.) 14:35:29 14:35:29 :param url: 14:35:29 The URL to perform the request on. 14:35:29 14:35:29 :param body: 14:35:29 Data to send in the request body, either :class:`str`, :class:`bytes`, 14:35:29 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 14:35:29 14:35:29 :param headers: 14:35:29 Dictionary of custom headers to send, such as User-Agent, 14:35:29 If-None-Match, etc. If None, pool headers are used. If provided, 14:35:29 these headers completely replace any pool-specific headers. 14:35:29 14:35:29 :param retries: 14:35:29 Configure the number of retries to allow before raising a 14:35:29 :class:`~urllib3.exceptions.MaxRetryError` exception. 14:35:29 14:35:29 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 14:35:29 :class:`~urllib3.util.retry.Retry` object for fine-grained control 14:35:29 over different types of retries. 14:35:29 Pass an integer number to retry connection errors that many times, 14:35:29 but no other types of errors. Pass zero to never retry. 14:35:29 14:35:29 If ``False``, then retries are disabled and any exception is raised 14:35:29 immediately. Also, instead of raising a MaxRetryError on redirects, 14:35:29 the redirect response will be returned. 14:35:29 14:35:29 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 14:35:29 14:35:29 :param redirect: 14:35:29 If True, automatically handle redirects (status codes 301, 302, 14:35:29 303, 307, 308). Each redirect counts as a retry. Disabling retries 14:35:29 will disable redirect, too. 14:35:29 14:35:29 :param assert_same_host: 14:35:29 If ``True``, will make sure that the host of the pool requests is 14:35:29 consistent else will raise HostChangedError. When ``False``, you can 14:35:29 use the pool on an HTTP proxy and request foreign hosts. 14:35:29 14:35:29 :param timeout: 14:35:29 If specified, overrides the default timeout for this one 14:35:29 request. It may be a float (in seconds) or an instance of 14:35:29 :class:`urllib3.util.Timeout`. 14:35:29 14:35:29 :param pool_timeout: 14:35:29 If set and the pool is set to block=True, then this method will 14:35:29 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 14:35:29 connection is available within the time period. 14:35:29 14:35:29 :param bool preload_content: 14:35:29 If True, the response's body will be preloaded into memory. 14:35:29 14:35:29 :param bool decode_content: 14:35:29 If True, will attempt to decode the body based on the 14:35:29 'content-encoding' header. 
14:35:29 14:35:29 :param release_conn: 14:35:29 If False, then the urlopen call will not release the connection 14:35:29 back into the pool once a response is received (but will release if 14:35:29 you read the entire contents of the response such as when 14:35:29 `preload_content=True`). This is useful if you're not preloading 14:35:29 the response's content immediately. You will need to call 14:35:29 ``r.release_conn()`` on the response ``r`` to return the connection 14:35:29 back into the pool. If None, it takes the value of ``preload_content`` 14:35:29 which defaults to ``True``. 14:35:29 14:35:29 :param bool chunked: 14:35:29 If True, urllib3 will send the body using chunked transfer 14:35:29 encoding. Otherwise, urllib3 will send the body using the standard 14:35:29 content-length form. Defaults to False. 14:35:29 14:35:29 :param int body_pos: 14:35:29 Position to seek to in file-like body in the event of a retry or 14:35:29 redirect. Typically this won't need to be set because urllib3 will 14:35:29 auto-populate the value when needed. 14:35:29 """ 14:35:29 parsed_url = parse_url(url) 14:35:29 destination_scheme = parsed_url.scheme 14:35:29 14:35:29 if headers is None: 14:35:29 headers = self.headers 14:35:29 14:35:29 if not isinstance(retries, Retry): 14:35:29 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 14:35:29 14:35:29 if release_conn is None: 14:35:29 release_conn = preload_content 14:35:29 14:35:29 # Check host 14:35:29 if assert_same_host and not self.is_same_host(url): 14:35:29 raise HostChangedError(self, url, retries) 14:35:29 14:35:29 # Ensure that the URL we're connecting to is properly encoded 14:35:29 if url.startswith("/"): 14:35:29 url = to_str(_encode_target(url)) 14:35:29 else: 14:35:29 url = to_str(parsed_url.url) 14:35:29 14:35:29 conn = None 14:35:29 14:35:29 # Track whether `conn` needs to be released before 14:35:29 # returning/raising/recursing. Update this variable if necessary, and 14:35:29 # leave `release_conn` constant throughout the function. That way, if 14:35:29 # the function recurses, the original value of `release_conn` will be 14:35:29 # passed down into the recursive call, and its value will be respected. 14:35:29 # 14:35:29 # See issue #651 [1] for details. 14:35:29 # 14:35:29 # [1] 14:35:29 release_this_conn = release_conn 14:35:29 14:35:29 http_tunnel_required = connection_requires_http_tunnel( 14:35:29 self.proxy, self.proxy_config, destination_scheme 14:35:29 ) 14:35:29 14:35:29 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 14:35:29 # have to copy the headers dict so we can safely change it without those 14:35:29 # changes being reflected in anyone else's copy. 14:35:29 if not http_tunnel_required: 14:35:29 headers = headers.copy() # type: ignore[attr-defined] 14:35:29 headers.update(self.proxy_headers) # type: ignore[union-attr] 14:35:29 14:35:29 # Must keep the exception bound to a separate variable or else Python 3 14:35:29 # complains about UnboundLocalError. 14:35:29 err = None 14:35:29 14:35:29 # Keep track of whether we cleanly exited the except block. This 14:35:29 # ensures we do proper cleanup in finally. 14:35:29 clean_exit = False 14:35:29 14:35:29 # Rewind body position, if needed. Record current position 14:35:29 # for future rewinds in the event of a redirect/retry. 14:35:29 body_pos = set_file_position(body, body_pos) 14:35:29 14:35:29 try: 14:35:29 # Request a connection from the queue. 
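# Illustrative sketch (not part of the build log): the
# Timeout(connect=10, read=10, total=None) values printed in this traceback
# are urllib3 Timeout objects (requests refers to the same class as
# TimeoutSauce), consistent with a single timeout of 10 seconds being split
# into a connect/read pair. Building the equivalent object directly:
from urllib3.util import Timeout

t = Timeout(connect=10, read=10)
print(t.connect_timeout, t.read_timeout, t.total)   # 10 10 None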
14:35:29 timeout_obj = self._get_timeout(timeout) 14:35:29 conn = self._get_conn(timeout=pool_timeout) 14:35:29 14:35:29 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 14:35:29 14:35:29 # Is this a closed/new connection that requires CONNECT tunnelling? 14:35:29 if self.proxy is not None and http_tunnel_required and conn.is_closed: 14:35:29 try: 14:35:29 self._prepare_proxy(conn) 14:35:29 except (BaseSSLError, OSError, SocketTimeout) as e: 14:35:29 self._raise_timeout( 14:35:29 err=e, url=self.proxy.url, timeout_value=conn.timeout 14:35:29 ) 14:35:29 raise 14:35:29 14:35:29 # If we're going to release the connection in ``finally:``, then 14:35:29 # the response doesn't need to know about the connection. Otherwise 14:35:29 # it will also try to release it and we'll have a double-release 14:35:29 # mess. 14:35:29 response_conn = conn if not release_conn else None 14:35:29 14:35:29 # Make the request on the HTTPConnection object 14:35:29 > response = self._make_request( 14:35:29 conn, 14:35:29 method, 14:35:29 url, 14:35:29 timeout=timeout_obj, 14:35:29 body=body, 14:35:29 headers=headers, 14:35:29 chunked=chunked, 14:35:29 retries=retries, 14:35:29 response_conn=response_conn, 14:35:29 preload_content=preload_content, 14:35:29 decode_content=decode_content, 14:35:29 **response_kw, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 14:35:29 conn.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 14:35:29 self.endheaders() 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 14:35:29 self._send_output(message_body, encode_chunked=encode_chunked) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 14:35:29 self.send(msg) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 14:35:29 self.connect() 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 14:35:29 self.sock = self._new_conn() 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 except socket.gaierror as e: 14:35:29 raise NameResolutionError(self.host, self, e) from e 14:35:29 except SocketTimeout as e: 14:35:29 raise ConnectTimeoutError( 14:35:29 self, 14:35:29 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 14:35:29 ) from e 14:35:29 14:35:29 except OSError as e: 14:35:29 > raise NewConnectionError( 14:35:29 self, f"Failed to establish a new connection: {e}" 14:35:29 ) from e 14:35:29 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 
14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 > resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 14:35:29 retries = retries.increment( 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 14:35:29 response = None 14:35:29 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 14:35:29 _pool = 14:35:29 _stacktrace = 14:35:29 14:35:29 def increment( 14:35:29 self, 14:35:29 method: str | None = None, 14:35:29 url: str | None = None, 14:35:29 response: BaseHTTPResponse | None = None, 14:35:29 error: Exception | None = None, 14:35:29 _pool: ConnectionPool | None = None, 14:35:29 _stacktrace: TracebackType | None = None, 14:35:29 ) -> Self: 14:35:29 """Return a new Retry object with incremented retry counters. 14:35:29 14:35:29 :param response: A response object, or None, if the server did not 14:35:29 return a response. 14:35:29 :type response: :class:`~urllib3.response.BaseHTTPResponse` 14:35:29 :param Exception error: An error encountered during the request, or 14:35:29 None if the response was received successfully. 14:35:29 14:35:29 :return: A new ``Retry`` object. 14:35:29 """ 14:35:29 if self.total is False and error: 14:35:29 # Disabled, indicate to re-raise the error. 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 14:35:29 total = self.total 14:35:29 if total is not None: 14:35:29 total -= 1 14:35:29 14:35:29 connect = self.connect 14:35:29 read = self.read 14:35:29 redirect = self.redirect 14:35:29 status_count = self.status 14:35:29 other = self.other 14:35:29 cause = "unknown" 14:35:29 status = None 14:35:29 redirect_location = None 14:35:29 14:35:29 if error and self._is_connection_error(error): 14:35:29 # Connect retry? 14:35:29 if connect is False: 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif connect is not None: 14:35:29 connect -= 1 14:35:29 14:35:29 elif error and self._is_read_error(error): 14:35:29 # Read retry? 14:35:29 if read is False or method is None or not self._is_method_retryable(method): 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif read is not None: 14:35:29 read -= 1 14:35:29 14:35:29 elif error: 14:35:29 # Other retry? 14:35:29 if other is not None: 14:35:29 other -= 1 14:35:29 14:35:29 elif response and response.get_redirect_location(): 14:35:29 # Redirect retry? 
14:35:29 if redirect is not None: 14:35:29 redirect -= 1 14:35:29 cause = "too many redirects" 14:35:29 response_redirect_location = response.get_redirect_location() 14:35:29 if response_redirect_location: 14:35:29 redirect_location = response_redirect_location 14:35:29 status = response.status 14:35:29 14:35:29 else: 14:35:29 # Incrementing because of a server error like a 500 in 14:35:29 # status_forcelist and the given method is in the allowed_methods 14:35:29 cause = ResponseError.GENERIC_ERROR 14:35:29 if response and response.status: 14:35:29 if status_count is not None: 14:35:29 status_count -= 1 14:35:29 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 14:35:29 status = response.status 14:35:29 14:35:29 history = self.history + ( 14:35:29 RequestHistory(method, url, error, status, redirect_location), 14:35:29 ) 14:35:29 14:35:29 new_retry = self.new( 14:35:29 total=total, 14:35:29 connect=connect, 14:35:29 read=read, 14:35:29 redirect=redirect, 14:35:29 status=status_count, 14:35:29 other=other, 14:35:29 history=history, 14:35:29 ) 14:35:29 14:35:29 if new_retry.is_exhausted(): 14:35:29 reason = error or ResponseError(cause) 14:35:29 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 14:35:29 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 14:35:29 14:35:29 During handling of the above exception, another exception occurred: 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_03_rdm_portmapping_info(self): 14:35:29 > response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None) 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:59: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr 14:35:29 response = get_request(target_url) 14:35:29 transportpce_tests/common/test_utils.py:116: in get_request 14:35:29 return requests.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 14:35:29 return session.request(method=method, url=url, **kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 14:35:29 resp = self.send(prep, **send_kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 14:35:29 r = adapter.send(request, **kwargs) 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 
14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 except (ProtocolError, OSError) as err: 14:35:29 raise ConnectionError(err, request=request) 14:35:29 14:35:29 except MaxRetryError as e: 14:35:29 if isinstance(e.reason, ConnectTimeoutError): 14:35:29 # TODO: Remove this in 3.0.0: see #2811 14:35:29 if not isinstance(e.reason, NewConnectionError): 14:35:29 raise ConnectTimeout(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, ResponseError): 14:35:29 raise RetryError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _ProxyError): 14:35:29 raise ProxyError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _SSLError): 14:35:29 # This branch is for urllib3 v1.22 and later. 
14:35:29 raise SSLError(e, request=request) 14:35:29 14:35:29 > raise ConnectionError(e, request=request) 14:35:29 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_03_rdm_portmapping_info 14:35:29 _____ TransportPCEPortMappingTesting.test_04_rdm_portmapping_DEG1_TTP_TXRX _____ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 > sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 14:35:29 raise err 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 address = ('localhost', 8182), timeout = 10, source_address = None 14:35:29 socket_options = [(6, 1, 1)] 14:35:29 14:35:29 def create_connection( 14:35:29 address: tuple[str, int], 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 source_address: tuple[str, int] | None = None, 14:35:29 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 14:35:29 ) -> socket.socket: 14:35:29 """Connect to *address* and return the socket object. 14:35:29 14:35:29 Convenience function. Connect to *address* (a 2-tuple ``(host, 14:35:29 port)``) and return the socket object. Passing the optional 14:35:29 *timeout* parameter will set the timeout on the socket instance 14:35:29 before attempting to connect. If no *timeout* is supplied, the 14:35:29 global default timeout setting returned by :func:`socket.getdefaulttimeout` 14:35:29 is used. If *source_address* is set it must be a tuple of (host, port) 14:35:29 for the socket to bind as a source address before making the connection. 14:35:29 An host of '' or port 0 tells the OS to use the default. 14:35:29 """ 14:35:29 14:35:29 host, port = address 14:35:29 if host.startswith("["): 14:35:29 host = host.strip("[]") 14:35:29 err = None 14:35:29 14:35:29 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 14:35:29 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 14:35:29 # The original create_connection function always returns all records. 14:35:29 family = allowed_gai_family() 14:35:29 14:35:29 try: 14:35:29 host.encode("idna") 14:35:29 except UnicodeError: 14:35:29 raise LocationParseError(f"'{host}', label empty or too long") from None 14:35:29 14:35:29 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 14:35:29 af, socktype, proto, canonname, sa = res 14:35:29 sock = None 14:35:29 try: 14:35:29 sock = socket.socket(af, socktype, proto) 14:35:29 14:35:29 # If provided, set socket level options before connecting. 
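# Illustrative sketch (not part of the build log): the socket_options value
# [(6, 1, 1)] shown above is urllib3's default and, on Linux, corresponds to
# (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1), i.e. disabling Nagle's
# algorithm on the new connection, which is what the _new_conn docstring
# means by "set nodelay settings". The same option set by hand:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print((socket.IPPROTO_TCP, socket.TCP_NODELAY))   # (6, 1) on Linux
sock.close()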
14:35:29 _set_socket_options(sock, socket_options) 14:35:29 14:35:29 if timeout is not _DEFAULT_TIMEOUT: 14:35:29 sock.settimeout(timeout) 14:35:29 if source_address: 14:35:29 sock.bind(source_address) 14:35:29 > sock.connect(sa) 14:35:29 E ConnectionRefusedError: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX' 14:35:29 body = None 14:35:29 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 14:35:29 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 redirect = False, assert_same_host = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 14:35:29 release_conn = False, chunked = False, body_pos = None, preload_content = False 14:35:29 decode_content = False, response_kw = {} 14:35:29 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX', query=None, fragment=None) 14:35:29 destination_scheme = None, conn = None, release_this_conn = True 14:35:29 http_tunnel_required = False, err = None, clean_exit = False 14:35:29 14:35:29 def urlopen( # type: ignore[override] 14:35:29 self, 14:35:29 method: str, 14:35:29 url: str, 14:35:29 body: _TYPE_BODY | None = None, 14:35:29 headers: typing.Mapping[str, str] | None = None, 14:35:29 retries: Retry | bool | int | None = None, 14:35:29 redirect: bool = True, 14:35:29 assert_same_host: bool = True, 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 pool_timeout: int | None = None, 14:35:29 release_conn: bool | None = None, 14:35:29 chunked: bool = False, 14:35:29 body_pos: _TYPE_BODY_POSITION | None = None, 14:35:29 preload_content: bool = True, 14:35:29 decode_content: bool = True, 14:35:29 **response_kw: typing.Any, 14:35:29 ) -> BaseHTTPResponse: 14:35:29 """ 14:35:29 Get a connection from the pool and perform an HTTP request. This is the 14:35:29 lowest level call for making a request, so you'll need to specify all 14:35:29 the raw details. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 More commonly, it's appropriate to use a convenience method 14:35:29 such as :meth:`request`. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 `release_conn` will only behave as expected if 14:35:29 `preload_content=False` because we want to make 14:35:29 `preload_content=False` the default behaviour someday soon without 14:35:29 breaking backwards compatibility. 14:35:29 14:35:29 :param method: 14:35:29 HTTP request method (such as GET, POST, PUT, etc.) 14:35:29 14:35:29 :param url: 14:35:29 The URL to perform the request on. 14:35:29 14:35:29 :param body: 14:35:29 Data to send in the request body, either :class:`str`, :class:`bytes`, 14:35:29 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 14:35:29 14:35:29 :param headers: 14:35:29 Dictionary of custom headers to send, such as User-Agent, 14:35:29 If-None-Match, etc. If None, pool headers are used. If provided, 14:35:29 these headers completely replace any pool-specific headers. 
14:35:29 14:35:29 :param retries: 14:35:29 Configure the number of retries to allow before raising a 14:35:29 :class:`~urllib3.exceptions.MaxRetryError` exception. 14:35:29 14:35:29 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 14:35:29 :class:`~urllib3.util.retry.Retry` object for fine-grained control 14:35:29 over different types of retries. 14:35:29 Pass an integer number to retry connection errors that many times, 14:35:29 but no other types of errors. Pass zero to never retry. 14:35:29 14:35:29 If ``False``, then retries are disabled and any exception is raised 14:35:29 immediately. Also, instead of raising a MaxRetryError on redirects, 14:35:29 the redirect response will be returned. 14:35:29 14:35:29 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 14:35:29 14:35:29 :param redirect: 14:35:29 If True, automatically handle redirects (status codes 301, 302, 14:35:29 303, 307, 308). Each redirect counts as a retry. Disabling retries 14:35:29 will disable redirect, too. 14:35:29 14:35:29 :param assert_same_host: 14:35:29 If ``True``, will make sure that the host of the pool requests is 14:35:29 consistent else will raise HostChangedError. When ``False``, you can 14:35:29 use the pool on an HTTP proxy and request foreign hosts. 14:35:29 14:35:29 :param timeout: 14:35:29 If specified, overrides the default timeout for this one 14:35:29 request. It may be a float (in seconds) or an instance of 14:35:29 :class:`urllib3.util.Timeout`. 14:35:29 14:35:29 :param pool_timeout: 14:35:29 If set and the pool is set to block=True, then this method will 14:35:29 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 14:35:29 connection is available within the time period. 14:35:29 14:35:29 :param bool preload_content: 14:35:29 If True, the response's body will be preloaded into memory. 14:35:29 14:35:29 :param bool decode_content: 14:35:29 If True, will attempt to decode the body based on the 14:35:29 'content-encoding' header. 14:35:29 14:35:29 :param release_conn: 14:35:29 If False, then the urlopen call will not release the connection 14:35:29 back into the pool once a response is received (but will release if 14:35:29 you read the entire contents of the response such as when 14:35:29 `preload_content=True`). This is useful if you're not preloading 14:35:29 the response's content immediately. You will need to call 14:35:29 ``r.release_conn()`` on the response ``r`` to return the connection 14:35:29 back into the pool. If None, it takes the value of ``preload_content`` 14:35:29 which defaults to ``True``. 14:35:29 14:35:29 :param bool chunked: 14:35:29 If True, urllib3 will send the body using chunked transfer 14:35:29 encoding. Otherwise, urllib3 will send the body using the standard 14:35:29 content-length form. Defaults to False. 14:35:29 14:35:29 :param int body_pos: 14:35:29 Position to seek to in file-like body in the event of a retry or 14:35:29 redirect. Typically this won't need to be set because urllib3 will 14:35:29 auto-populate the value when needed. 
14:35:29 """ 14:35:29 parsed_url = parse_url(url) 14:35:29 destination_scheme = parsed_url.scheme 14:35:29 14:35:29 if headers is None: 14:35:29 headers = self.headers 14:35:29 14:35:29 if not isinstance(retries, Retry): 14:35:29 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 14:35:29 14:35:29 if release_conn is None: 14:35:29 release_conn = preload_content 14:35:29 14:35:29 # Check host 14:35:29 if assert_same_host and not self.is_same_host(url): 14:35:29 raise HostChangedError(self, url, retries) 14:35:29 14:35:29 # Ensure that the URL we're connecting to is properly encoded 14:35:29 if url.startswith("/"): 14:35:29 url = to_str(_encode_target(url)) 14:35:29 else: 14:35:29 url = to_str(parsed_url.url) 14:35:29 14:35:29 conn = None 14:35:29 14:35:29 # Track whether `conn` needs to be released before 14:35:29 # returning/raising/recursing. Update this variable if necessary, and 14:35:29 # leave `release_conn` constant throughout the function. That way, if 14:35:29 # the function recurses, the original value of `release_conn` will be 14:35:29 # passed down into the recursive call, and its value will be respected. 14:35:29 # 14:35:29 # See issue #651 [1] for details. 14:35:29 # 14:35:29 # [1] 14:35:29 release_this_conn = release_conn 14:35:29 14:35:29 http_tunnel_required = connection_requires_http_tunnel( 14:35:29 self.proxy, self.proxy_config, destination_scheme 14:35:29 ) 14:35:29 14:35:29 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 14:35:29 # have to copy the headers dict so we can safely change it without those 14:35:29 # changes being reflected in anyone else's copy. 14:35:29 if not http_tunnel_required: 14:35:29 headers = headers.copy() # type: ignore[attr-defined] 14:35:29 headers.update(self.proxy_headers) # type: ignore[union-attr] 14:35:29 14:35:29 # Must keep the exception bound to a separate variable or else Python 3 14:35:29 # complains about UnboundLocalError. 14:35:29 err = None 14:35:29 14:35:29 # Keep track of whether we cleanly exited the except block. This 14:35:29 # ensures we do proper cleanup in finally. 14:35:29 clean_exit = False 14:35:29 14:35:29 # Rewind body position, if needed. Record current position 14:35:29 # for future rewinds in the event of a redirect/retry. 14:35:29 body_pos = set_file_position(body, body_pos) 14:35:29 14:35:29 try: 14:35:29 # Request a connection from the queue. 14:35:29 timeout_obj = self._get_timeout(timeout) 14:35:29 conn = self._get_conn(timeout=pool_timeout) 14:35:29 14:35:29 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 14:35:29 14:35:29 # Is this a closed/new connection that requires CONNECT tunnelling? 14:35:29 if self.proxy is not None and http_tunnel_required and conn.is_closed: 14:35:29 try: 14:35:29 self._prepare_proxy(conn) 14:35:29 except (BaseSSLError, OSError, SocketTimeout) as e: 14:35:29 self._raise_timeout( 14:35:29 err=e, url=self.proxy.url, timeout_value=conn.timeout 14:35:29 ) 14:35:29 raise 14:35:29 14:35:29 # If we're going to release the connection in ``finally:``, then 14:35:29 # the response doesn't need to know about the connection. Otherwise 14:35:29 # it will also try to release it and we'll have a double-release 14:35:29 # mess. 
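# Illustrative sketch (not taken from the test code): the preload_content /
# release_conn interaction documented in the docstring above can be exercised
# directly against a urllib3 pool. With preload_content=False the caller is
# responsible for returning the connection, either by reading the body to the
# end or by calling release_conn() explicitly. The host below is only an
# example target, not one used by these tests.
import urllib3

pool = urllib3.HTTPConnectionPool("example.com", 80, maxsize=1)
r = pool.urlopen("GET", "/", preload_content=False)
print(r.status)
r.read()           # draining the body returns the connection to the pool
r.release_conn()   # explicit release is safe (a no-op) once the body is consumed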
14:35:29 response_conn = conn if not release_conn else None 14:35:29 14:35:29 # Make the request on the HTTPConnection object 14:35:29 > response = self._make_request( 14:35:29 conn, 14:35:29 method, 14:35:29 url, 14:35:29 timeout=timeout_obj, 14:35:29 body=body, 14:35:29 headers=headers, 14:35:29 chunked=chunked, 14:35:29 retries=retries, 14:35:29 response_conn=response_conn, 14:35:29 preload_content=preload_content, 14:35:29 decode_content=decode_content, 14:35:29 **response_kw, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 14:35:29 conn.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 14:35:29 self.endheaders() 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 14:35:29 self._send_output(message_body, encode_chunked=encode_chunked) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 14:35:29 self.send(msg) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 14:35:29 self.connect() 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 14:35:29 self.sock = self._new_conn() 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 except socket.gaierror as e: 14:35:29 raise NameResolutionError(self.host, self, e) from e 14:35:29 except SocketTimeout as e: 14:35:29 raise ConnectTimeoutError( 14:35:29 self, 14:35:29 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 14:35:29 ) from e 14:35:29 14:35:29 except OSError as e: 14:35:29 > raise NewConnectionError( 14:35:29 self, f"Failed to establish a new connection: {e}" 14:35:29 ) from e 14:35:29 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 
14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 > resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 14:35:29 retries = retries.increment( 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX' 14:35:29 response = None 14:35:29 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 14:35:29 _pool = 14:35:29 _stacktrace = 14:35:29 14:35:29 def increment( 14:35:29 self, 14:35:29 method: str | None = None, 14:35:29 url: str | None = None, 14:35:29 response: BaseHTTPResponse | None = None, 14:35:29 error: Exception | None = None, 14:35:29 _pool: ConnectionPool | None = None, 14:35:29 _stacktrace: TracebackType | None = None, 14:35:29 ) -> Self: 14:35:29 """Return a new Retry object with incremented retry counters. 14:35:29 14:35:29 :param response: A response object, or None, if the server did not 14:35:29 return a response. 14:35:29 :type response: :class:`~urllib3.response.BaseHTTPResponse` 14:35:29 :param Exception error: An error encountered during the request, or 14:35:29 None if the response was received successfully. 
14:35:29 14:35:29 :return: A new ``Retry`` object. 14:35:29 """ 14:35:29 if self.total is False and error: 14:35:29 # Disabled, indicate to re-raise the error. 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 14:35:29 total = self.total 14:35:29 if total is not None: 14:35:29 total -= 1 14:35:29 14:35:29 connect = self.connect 14:35:29 read = self.read 14:35:29 redirect = self.redirect 14:35:29 status_count = self.status 14:35:29 other = self.other 14:35:29 cause = "unknown" 14:35:29 status = None 14:35:29 redirect_location = None 14:35:29 14:35:29 if error and self._is_connection_error(error): 14:35:29 # Connect retry? 14:35:29 if connect is False: 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif connect is not None: 14:35:29 connect -= 1 14:35:29 14:35:29 elif error and self._is_read_error(error): 14:35:29 # Read retry? 14:35:29 if read is False or method is None or not self._is_method_retryable(method): 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif read is not None: 14:35:29 read -= 1 14:35:29 14:35:29 elif error: 14:35:29 # Other retry? 14:35:29 if other is not None: 14:35:29 other -= 1 14:35:29 14:35:29 elif response and response.get_redirect_location(): 14:35:29 # Redirect retry? 14:35:29 if redirect is not None: 14:35:29 redirect -= 1 14:35:29 cause = "too many redirects" 14:35:29 response_redirect_location = response.get_redirect_location() 14:35:29 if response_redirect_location: 14:35:29 redirect_location = response_redirect_location 14:35:29 status = response.status 14:35:29 14:35:29 else: 14:35:29 # Incrementing because of a server error like a 500 in 14:35:29 # status_forcelist and the given method is in the allowed_methods 14:35:29 cause = ResponseError.GENERIC_ERROR 14:35:29 if response and response.status: 14:35:29 if status_count is not None: 14:35:29 status_count -= 1 14:35:29 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 14:35:29 status = response.status 14:35:29 14:35:29 history = self.history + ( 14:35:29 RequestHistory(method, url, error, status, redirect_location), 14:35:29 ) 14:35:29 14:35:29 new_retry = self.new( 14:35:29 total=total, 14:35:29 connect=connect, 14:35:29 read=read, 14:35:29 redirect=redirect, 14:35:29 status=status_count, 14:35:29 other=other, 14:35:29 history=history, 14:35:29 ) 14:35:29 14:35:29 if new_retry.is_exhausted(): 14:35:29 reason = error or ResponseError(cause) 14:35:29 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 14:35:29 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 14:35:29 14:35:29 During handling of the above exception, another exception occurred: 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_04_rdm_portmapping_DEG1_TTP_TXRX(self): 14:35:29 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "DEG1-TTP-TXRX") 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:72: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr 14:35:29 response = get_request(target_url) 14:35:29 
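All three portmapping failures in this run share the same root cause: nothing is listening on localhost:8182, so every GET issued through test_utils.get_request() is refused at the TCP level before RESTCONF is ever reached. The snippet below is a minimal standalone reproduction assembled only from the URL, credentials, headers and timeouts visible in the traceback above; it is illustrative and is not part of test_utils.py.

    import requests

    # Illustrative reproduction (not project code): the same GET the test issues,
    # against the RESTCONF port seen in the traceback (localhost:8182).
    URL = ("http://localhost:8182/rests/data/transportpce-portmapping:network/"
           "nodes=ROADMA01/mapping=DEG1-TTP-TXRX")

    try:
        response = requests.get(
            URL,
            auth=("admin", "admin"),                 # Authorization: Basic YWRtaW46YWRtaW4=
            headers={"Accept": "application/json"},
            timeout=(10, 10),                        # Timeout(connect=10, read=10) as in the log
        )
        print(response.status_code)
    except requests.exceptions.ConnectionError as exc:
        # With no controller bound to 8182 this reproduces the
        # "[Errno 111] Connection refused" reported for test_04/05/06.
        print(f"RESTCONF endpoint unreachable: {exc}")

In other words, the failures say nothing about the portmapping data itself; they indicate that whatever process the suite expects on port 8182 was not running (or had already stopped) when these requests were made.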
transportpce_tests/common/test_utils.py:116: in get_request 14:35:29 return requests.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 14:35:29 return session.request(method=method, url=url, **kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 14:35:29 resp = self.send(prep, **send_kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 14:35:29 r = adapter.send(request, **kwargs) 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 
14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 except (ProtocolError, OSError) as err: 14:35:29 raise ConnectionError(err, request=request) 14:35:29 14:35:29 except MaxRetryError as e: 14:35:29 if isinstance(e.reason, ConnectTimeoutError): 14:35:29 # TODO: Remove this in 3.0.0: see #2811 14:35:29 if not isinstance(e.reason, NewConnectionError): 14:35:29 raise ConnectTimeout(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, ResponseError): 14:35:29 raise RetryError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _ProxyError): 14:35:29 raise ProxyError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _SSLError): 14:35:29 # This branch is for urllib3 v1.22 and later. 14:35:29 raise SSLError(e, request=request) 14:35:29 14:35:29 > raise ConnectionError(e, request=request) 14:35:29 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_04_rdm_portmapping_DEG1_TTP_TXRX 14:35:29 _____ TransportPCEPortMappingTesting.test_05_rdm_portmapping_SRG1_PP7_TXRX _____ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 > sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 14:35:29 raise err 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 address = ('localhost', 8182), timeout = 10, source_address = None 14:35:29 socket_options = [(6, 1, 1)] 14:35:29 14:35:29 def create_connection( 14:35:29 address: tuple[str, int], 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 source_address: tuple[str, int] | None = None, 14:35:29 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 14:35:29 ) -> socket.socket: 14:35:29 """Connect to *address* and return the socket object. 14:35:29 14:35:29 Convenience function. Connect to *address* (a 2-tuple ``(host, 14:35:29 port)``) and return the socket object. Passing the optional 14:35:29 *timeout* parameter will set the timeout on the socket instance 14:35:29 before attempting to connect. 
If no *timeout* is supplied, the 14:35:29 global default timeout setting returned by :func:`socket.getdefaulttimeout` 14:35:29 is used. If *source_address* is set it must be a tuple of (host, port) 14:35:29 for the socket to bind as a source address before making the connection. 14:35:29 An host of '' or port 0 tells the OS to use the default. 14:35:29 """ 14:35:29 14:35:29 host, port = address 14:35:29 if host.startswith("["): 14:35:29 host = host.strip("[]") 14:35:29 err = None 14:35:29 14:35:29 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 14:35:29 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 14:35:29 # The original create_connection function always returns all records. 14:35:29 family = allowed_gai_family() 14:35:29 14:35:29 try: 14:35:29 host.encode("idna") 14:35:29 except UnicodeError: 14:35:29 raise LocationParseError(f"'{host}', label empty or too long") from None 14:35:29 14:35:29 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 14:35:29 af, socktype, proto, canonname, sa = res 14:35:29 sock = None 14:35:29 try: 14:35:29 sock = socket.socket(af, socktype, proto) 14:35:29 14:35:29 # If provided, set socket level options before connecting. 14:35:29 _set_socket_options(sock, socket_options) 14:35:29 14:35:29 if timeout is not _DEFAULT_TIMEOUT: 14:35:29 sock.settimeout(timeout) 14:35:29 if source_address: 14:35:29 sock.bind(source_address) 14:35:29 > sock.connect(sa) 14:35:29 E ConnectionRefusedError: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 14:35:29 body = None 14:35:29 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 14:35:29 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 redirect = False, assert_same_host = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 14:35:29 release_conn = False, chunked = False, body_pos = None, preload_content = False 14:35:29 decode_content = False, response_kw = {} 14:35:29 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX', query=None, fragment=None) 14:35:29 destination_scheme = None, conn = None, release_this_conn = True 14:35:29 http_tunnel_required = False, err = None, clean_exit = False 14:35:29 14:35:29 def urlopen( # type: ignore[override] 14:35:29 self, 14:35:29 method: str, 14:35:29 url: str, 14:35:29 body: _TYPE_BODY | None = None, 14:35:29 headers: typing.Mapping[str, str] | None = None, 14:35:29 retries: Retry | bool | int | None = None, 14:35:29 redirect: bool = True, 14:35:29 assert_same_host: bool = True, 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 pool_timeout: int | None = None, 14:35:29 release_conn: bool | None = None, 14:35:29 chunked: bool = False, 14:35:29 body_pos: _TYPE_BODY_POSITION | None = None, 14:35:29 preload_content: bool = True, 14:35:29 decode_content: bool = True, 14:35:29 **response_kw: typing.Any, 14:35:29 ) -> BaseHTTPResponse: 
14:35:29 """ 14:35:29 Get a connection from the pool and perform an HTTP request. This is the 14:35:29 lowest level call for making a request, so you'll need to specify all 14:35:29 the raw details. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 More commonly, it's appropriate to use a convenience method 14:35:29 such as :meth:`request`. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 `release_conn` will only behave as expected if 14:35:29 `preload_content=False` because we want to make 14:35:29 `preload_content=False` the default behaviour someday soon without 14:35:29 breaking backwards compatibility. 14:35:29 14:35:29 :param method: 14:35:29 HTTP request method (such as GET, POST, PUT, etc.) 14:35:29 14:35:29 :param url: 14:35:29 The URL to perform the request on. 14:35:29 14:35:29 :param body: 14:35:29 Data to send in the request body, either :class:`str`, :class:`bytes`, 14:35:29 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 14:35:29 14:35:29 :param headers: 14:35:29 Dictionary of custom headers to send, such as User-Agent, 14:35:29 If-None-Match, etc. If None, pool headers are used. If provided, 14:35:29 these headers completely replace any pool-specific headers. 14:35:29 14:35:29 :param retries: 14:35:29 Configure the number of retries to allow before raising a 14:35:29 :class:`~urllib3.exceptions.MaxRetryError` exception. 14:35:29 14:35:29 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 14:35:29 :class:`~urllib3.util.retry.Retry` object for fine-grained control 14:35:29 over different types of retries. 14:35:29 Pass an integer number to retry connection errors that many times, 14:35:29 but no other types of errors. Pass zero to never retry. 14:35:29 14:35:29 If ``False``, then retries are disabled and any exception is raised 14:35:29 immediately. Also, instead of raising a MaxRetryError on redirects, 14:35:29 the redirect response will be returned. 14:35:29 14:35:29 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 14:35:29 14:35:29 :param redirect: 14:35:29 If True, automatically handle redirects (status codes 301, 302, 14:35:29 303, 307, 308). Each redirect counts as a retry. Disabling retries 14:35:29 will disable redirect, too. 14:35:29 14:35:29 :param assert_same_host: 14:35:29 If ``True``, will make sure that the host of the pool requests is 14:35:29 consistent else will raise HostChangedError. When ``False``, you can 14:35:29 use the pool on an HTTP proxy and request foreign hosts. 14:35:29 14:35:29 :param timeout: 14:35:29 If specified, overrides the default timeout for this one 14:35:29 request. It may be a float (in seconds) or an instance of 14:35:29 :class:`urllib3.util.Timeout`. 14:35:29 14:35:29 :param pool_timeout: 14:35:29 If set and the pool is set to block=True, then this method will 14:35:29 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 14:35:29 connection is available within the time period. 14:35:29 14:35:29 :param bool preload_content: 14:35:29 If True, the response's body will be preloaded into memory. 14:35:29 14:35:29 :param bool decode_content: 14:35:29 If True, will attempt to decode the body based on the 14:35:29 'content-encoding' header. 14:35:29 14:35:29 :param release_conn: 14:35:29 If False, then the urlopen call will not release the connection 14:35:29 back into the pool once a response is received (but will release if 14:35:29 you read the entire contents of the response such as when 14:35:29 `preload_content=True`). 
This is useful if you're not preloading 14:35:29 the response's content immediately. You will need to call 14:35:29 ``r.release_conn()`` on the response ``r`` to return the connection 14:35:29 back into the pool. If None, it takes the value of ``preload_content`` 14:35:29 which defaults to ``True``. 14:35:29 14:35:29 :param bool chunked: 14:35:29 If True, urllib3 will send the body using chunked transfer 14:35:29 encoding. Otherwise, urllib3 will send the body using the standard 14:35:29 content-length form. Defaults to False. 14:35:29 14:35:29 :param int body_pos: 14:35:29 Position to seek to in file-like body in the event of a retry or 14:35:29 redirect. Typically this won't need to be set because urllib3 will 14:35:29 auto-populate the value when needed. 14:35:29 """ 14:35:29 parsed_url = parse_url(url) 14:35:29 destination_scheme = parsed_url.scheme 14:35:29 14:35:29 if headers is None: 14:35:29 headers = self.headers 14:35:29 14:35:29 if not isinstance(retries, Retry): 14:35:29 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 14:35:29 14:35:29 if release_conn is None: 14:35:29 release_conn = preload_content 14:35:29 14:35:29 # Check host 14:35:29 if assert_same_host and not self.is_same_host(url): 14:35:29 raise HostChangedError(self, url, retries) 14:35:29 14:35:29 # Ensure that the URL we're connecting to is properly encoded 14:35:29 if url.startswith("/"): 14:35:29 url = to_str(_encode_target(url)) 14:35:29 else: 14:35:29 url = to_str(parsed_url.url) 14:35:29 14:35:29 conn = None 14:35:29 14:35:29 # Track whether `conn` needs to be released before 14:35:29 # returning/raising/recursing. Update this variable if necessary, and 14:35:29 # leave `release_conn` constant throughout the function. That way, if 14:35:29 # the function recurses, the original value of `release_conn` will be 14:35:29 # passed down into the recursive call, and its value will be respected. 14:35:29 # 14:35:29 # See issue #651 [1] for details. 14:35:29 # 14:35:29 # [1] 14:35:29 release_this_conn = release_conn 14:35:29 14:35:29 http_tunnel_required = connection_requires_http_tunnel( 14:35:29 self.proxy, self.proxy_config, destination_scheme 14:35:29 ) 14:35:29 14:35:29 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 14:35:29 # have to copy the headers dict so we can safely change it without those 14:35:29 # changes being reflected in anyone else's copy. 14:35:29 if not http_tunnel_required: 14:35:29 headers = headers.copy() # type: ignore[attr-defined] 14:35:29 headers.update(self.proxy_headers) # type: ignore[union-attr] 14:35:29 14:35:29 # Must keep the exception bound to a separate variable or else Python 3 14:35:29 # complains about UnboundLocalError. 14:35:29 err = None 14:35:29 14:35:29 # Keep track of whether we cleanly exited the except block. This 14:35:29 # ensures we do proper cleanup in finally. 14:35:29 clean_exit = False 14:35:29 14:35:29 # Rewind body position, if needed. Record current position 14:35:29 # for future rewinds in the event of a redirect/retry. 14:35:29 body_pos = set_file_position(body, body_pos) 14:35:29 14:35:29 try: 14:35:29 # Request a connection from the queue. 14:35:29 timeout_obj = self._get_timeout(timeout) 14:35:29 conn = self._get_conn(timeout=pool_timeout) 14:35:29 14:35:29 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 14:35:29 14:35:29 # Is this a closed/new connection that requires CONNECT tunnelling? 
14:35:29 if self.proxy is not None and http_tunnel_required and conn.is_closed: 14:35:29 try: 14:35:29 self._prepare_proxy(conn) 14:35:29 except (BaseSSLError, OSError, SocketTimeout) as e: 14:35:29 self._raise_timeout( 14:35:29 err=e, url=self.proxy.url, timeout_value=conn.timeout 14:35:29 ) 14:35:29 raise 14:35:29 14:35:29 # If we're going to release the connection in ``finally:``, then 14:35:29 # the response doesn't need to know about the connection. Otherwise 14:35:29 # it will also try to release it and we'll have a double-release 14:35:29 # mess. 14:35:29 response_conn = conn if not release_conn else None 14:35:29 14:35:29 # Make the request on the HTTPConnection object 14:35:29 > response = self._make_request( 14:35:29 conn, 14:35:29 method, 14:35:29 url, 14:35:29 timeout=timeout_obj, 14:35:29 body=body, 14:35:29 headers=headers, 14:35:29 chunked=chunked, 14:35:29 retries=retries, 14:35:29 response_conn=response_conn, 14:35:29 preload_content=preload_content, 14:35:29 decode_content=decode_content, 14:35:29 **response_kw, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 14:35:29 conn.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 14:35:29 self.endheaders() 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 14:35:29 self._send_output(message_body, encode_chunked=encode_chunked) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 14:35:29 self.send(msg) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 14:35:29 self.connect() 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 14:35:29 self.sock = self._new_conn() 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 except socket.gaierror as e: 14:35:29 raise NameResolutionError(self.host, self, e) from e 14:35:29 except SocketTimeout as e: 14:35:29 raise ConnectTimeoutError( 14:35:29 self, 14:35:29 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 14:35:29 ) from e 14:35:29 14:35:29 except OSError as e: 14:35:29 > raise NewConnectionError( 14:35:29 self, f"Failed to establish a new connection: {e}" 14:35:29 ) from e 14:35:29 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 
14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 > resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 14:35:29 retries = retries.increment( 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 14:35:29 response = None 14:35:29 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 14:35:29 _pool = 14:35:29 _stacktrace = 14:35:29 14:35:29 def increment( 14:35:29 self, 14:35:29 method: str | None = None, 14:35:29 url: str | None = None, 14:35:29 response: BaseHTTPResponse | None = None, 14:35:29 error: Exception | None = None, 14:35:29 _pool: ConnectionPool | None = None, 14:35:29 _stacktrace: TracebackType | None = None, 14:35:29 ) -> Self: 14:35:29 """Return a new Retry object with incremented retry counters. 14:35:29 14:35:29 :param response: A response object, or None, if the server did not 14:35:29 return a response. 14:35:29 :type response: :class:`~urllib3.response.BaseHTTPResponse` 14:35:29 :param Exception error: An error encountered during the request, or 14:35:29 None if the response was received successfully. 14:35:29 14:35:29 :return: A new ``Retry`` object. 14:35:29 """ 14:35:29 if self.total is False and error: 14:35:29 # Disabled, indicate to re-raise the error. 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 14:35:29 total = self.total 14:35:29 if total is not None: 14:35:29 total -= 1 14:35:29 14:35:29 connect = self.connect 14:35:29 read = self.read 14:35:29 redirect = self.redirect 14:35:29 status_count = self.status 14:35:29 other = self.other 14:35:29 cause = "unknown" 14:35:29 status = None 14:35:29 redirect_location = None 14:35:29 14:35:29 if error and self._is_connection_error(error): 14:35:29 # Connect retry? 14:35:29 if connect is False: 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif connect is not None: 14:35:29 connect -= 1 14:35:29 14:35:29 elif error and self._is_read_error(error): 14:35:29 # Read retry? 14:35:29 if read is False or method is None or not self._is_method_retryable(method): 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif read is not None: 14:35:29 read -= 1 14:35:29 14:35:29 elif error: 14:35:29 # Other retry? 14:35:29 if other is not None: 14:35:29 other -= 1 14:35:29 14:35:29 elif response and response.get_redirect_location(): 14:35:29 # Redirect retry? 
14:35:29 if redirect is not None: 14:35:29 redirect -= 1 14:35:29 cause = "too many redirects" 14:35:29 response_redirect_location = response.get_redirect_location() 14:35:29 if response_redirect_location: 14:35:29 redirect_location = response_redirect_location 14:35:29 status = response.status 14:35:29 14:35:29 else: 14:35:29 # Incrementing because of a server error like a 500 in 14:35:29 # status_forcelist and the given method is in the allowed_methods 14:35:29 cause = ResponseError.GENERIC_ERROR 14:35:29 if response and response.status: 14:35:29 if status_count is not None: 14:35:29 status_count -= 1 14:35:29 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 14:35:29 status = response.status 14:35:29 14:35:29 history = self.history + ( 14:35:29 RequestHistory(method, url, error, status, redirect_location), 14:35:29 ) 14:35:29 14:35:29 new_retry = self.new( 14:35:29 total=total, 14:35:29 connect=connect, 14:35:29 read=read, 14:35:29 redirect=redirect, 14:35:29 status=status_count, 14:35:29 other=other, 14:35:29 history=history, 14:35:29 ) 14:35:29 14:35:29 if new_retry.is_exhausted(): 14:35:29 reason = error or ResponseError(cause) 14:35:29 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 14:35:29 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 14:35:29 14:35:29 During handling of the above exception, another exception occurred: 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_05_rdm_portmapping_SRG1_PP7_TXRX(self): 14:35:29 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG1-PP7-TXRX") 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:81: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr 14:35:29 response = get_request(target_url) 14:35:29 transportpce_tests/common/test_utils.py:116: in get_request 14:35:29 return requests.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 14:35:29 return session.request(method=method, url=url, **kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 14:35:29 resp = self.send(prep, **send_kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 14:35:29 r = adapter.send(request, **kwargs) 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 
14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 except (ProtocolError, OSError) as err: 14:35:29 raise ConnectionError(err, request=request) 14:35:29 14:35:29 except MaxRetryError as e: 14:35:29 if isinstance(e.reason, ConnectTimeoutError): 14:35:29 # TODO: Remove this in 3.0.0: see #2811 14:35:29 if not isinstance(e.reason, NewConnectionError): 14:35:29 raise ConnectTimeout(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, ResponseError): 14:35:29 raise RetryError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _ProxyError): 14:35:29 raise ProxyError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _SSLError): 14:35:29 # This branch is for urllib3 v1.22 and later. 
14:35:29 raise SSLError(e, request=request) 14:35:29 14:35:29 > raise ConnectionError(e, request=request) 14:35:29 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_05_rdm_portmapping_SRG1_PP7_TXRX 14:35:29 _____ TransportPCEPortMappingTesting.test_06_rdm_portmapping_SRG3_PP1_TXRX _____ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 > sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 14:35:29 raise err 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 address = ('localhost', 8182), timeout = 10, source_address = None 14:35:29 socket_options = [(6, 1, 1)] 14:35:29 14:35:29 def create_connection( 14:35:29 address: tuple[str, int], 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 source_address: tuple[str, int] | None = None, 14:35:29 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 14:35:29 ) -> socket.socket: 14:35:29 """Connect to *address* and return the socket object. 14:35:29 14:35:29 Convenience function. Connect to *address* (a 2-tuple ``(host, 14:35:29 port)``) and return the socket object. Passing the optional 14:35:29 *timeout* parameter will set the timeout on the socket instance 14:35:29 before attempting to connect. If no *timeout* is supplied, the 14:35:29 global default timeout setting returned by :func:`socket.getdefaulttimeout` 14:35:29 is used. If *source_address* is set it must be a tuple of (host, port) 14:35:29 for the socket to bind as a source address before making the connection. 14:35:29 An host of '' or port 0 tells the OS to use the default. 14:35:29 """ 14:35:29 14:35:29 host, port = address 14:35:29 if host.startswith("["): 14:35:29 host = host.strip("[]") 14:35:29 err = None 14:35:29 14:35:29 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 14:35:29 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 14:35:29 # The original create_connection function always returns all records. 
14:35:29 family = allowed_gai_family() 14:35:29 14:35:29 try: 14:35:29 host.encode("idna") 14:35:29 except UnicodeError: 14:35:29 raise LocationParseError(f"'{host}', label empty or too long") from None 14:35:29 14:35:29 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 14:35:29 af, socktype, proto, canonname, sa = res 14:35:29 sock = None 14:35:29 try: 14:35:29 sock = socket.socket(af, socktype, proto) 14:35:29 14:35:29 # If provided, set socket level options before connecting. 14:35:29 _set_socket_options(sock, socket_options) 14:35:29 14:35:29 if timeout is not _DEFAULT_TIMEOUT: 14:35:29 sock.settimeout(timeout) 14:35:29 if source_address: 14:35:29 sock.bind(source_address) 14:35:29 > sock.connect(sa) 14:35:29 E ConnectionRefusedError: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX' 14:35:29 body = None 14:35:29 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 14:35:29 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 redirect = False, assert_same_host = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 14:35:29 release_conn = False, chunked = False, body_pos = None, preload_content = False 14:35:29 decode_content = False, response_kw = {} 14:35:29 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX', query=None, fragment=None) 14:35:29 destination_scheme = None, conn = None, release_this_conn = True 14:35:29 http_tunnel_required = False, err = None, clean_exit = False 14:35:29 14:35:29 def urlopen( # type: ignore[override] 14:35:29 self, 14:35:29 method: str, 14:35:29 url: str, 14:35:29 body: _TYPE_BODY | None = None, 14:35:29 headers: typing.Mapping[str, str] | None = None, 14:35:29 retries: Retry | bool | int | None = None, 14:35:29 redirect: bool = True, 14:35:29 assert_same_host: bool = True, 14:35:29 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 14:35:29 pool_timeout: int | None = None, 14:35:29 release_conn: bool | None = None, 14:35:29 chunked: bool = False, 14:35:29 body_pos: _TYPE_BODY_POSITION | None = None, 14:35:29 preload_content: bool = True, 14:35:29 decode_content: bool = True, 14:35:29 **response_kw: typing.Any, 14:35:29 ) -> BaseHTTPResponse: 14:35:29 """ 14:35:29 Get a connection from the pool and perform an HTTP request. This is the 14:35:29 lowest level call for making a request, so you'll need to specify all 14:35:29 the raw details. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 More commonly, it's appropriate to use a convenience method 14:35:29 such as :meth:`request`. 14:35:29 14:35:29 .. note:: 14:35:29 14:35:29 `release_conn` will only behave as expected if 14:35:29 `preload_content=False` because we want to make 14:35:29 `preload_content=False` the default behaviour someday soon without 14:35:29 breaking backwards compatibility. 14:35:29 14:35:29 :param method: 14:35:29 HTTP request method (such as GET, POST, PUT, etc.) 
14:35:29 14:35:29 :param url: 14:35:29 The URL to perform the request on. 14:35:29 14:35:29 :param body: 14:35:29 Data to send in the request body, either :class:`str`, :class:`bytes`, 14:35:29 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 14:35:29 14:35:29 :param headers: 14:35:29 Dictionary of custom headers to send, such as User-Agent, 14:35:29 If-None-Match, etc. If None, pool headers are used. If provided, 14:35:29 these headers completely replace any pool-specific headers. 14:35:29 14:35:29 :param retries: 14:35:29 Configure the number of retries to allow before raising a 14:35:29 :class:`~urllib3.exceptions.MaxRetryError` exception. 14:35:29 14:35:29 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 14:35:29 :class:`~urllib3.util.retry.Retry` object for fine-grained control 14:35:29 over different types of retries. 14:35:29 Pass an integer number to retry connection errors that many times, 14:35:29 but no other types of errors. Pass zero to never retry. 14:35:29 14:35:29 If ``False``, then retries are disabled and any exception is raised 14:35:29 immediately. Also, instead of raising a MaxRetryError on redirects, 14:35:29 the redirect response will be returned. 14:35:29 14:35:29 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 14:35:29 14:35:29 :param redirect: 14:35:29 If True, automatically handle redirects (status codes 301, 302, 14:35:29 303, 307, 308). Each redirect counts as a retry. Disabling retries 14:35:29 will disable redirect, too. 14:35:29 14:35:29 :param assert_same_host: 14:35:29 If ``True``, will make sure that the host of the pool requests is 14:35:29 consistent else will raise HostChangedError. When ``False``, you can 14:35:29 use the pool on an HTTP proxy and request foreign hosts. 14:35:29 14:35:29 :param timeout: 14:35:29 If specified, overrides the default timeout for this one 14:35:29 request. It may be a float (in seconds) or an instance of 14:35:29 :class:`urllib3.util.Timeout`. 14:35:29 14:35:29 :param pool_timeout: 14:35:29 If set and the pool is set to block=True, then this method will 14:35:29 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 14:35:29 connection is available within the time period. 14:35:29 14:35:29 :param bool preload_content: 14:35:29 If True, the response's body will be preloaded into memory. 14:35:29 14:35:29 :param bool decode_content: 14:35:29 If True, will attempt to decode the body based on the 14:35:29 'content-encoding' header. 14:35:29 14:35:29 :param release_conn: 14:35:29 If False, then the urlopen call will not release the connection 14:35:29 back into the pool once a response is received (but will release if 14:35:29 you read the entire contents of the response such as when 14:35:29 `preload_content=True`). This is useful if you're not preloading 14:35:29 the response's content immediately. You will need to call 14:35:29 ``r.release_conn()`` on the response ``r`` to return the connection 14:35:29 back into the pool. If None, it takes the value of ``preload_content`` 14:35:29 which defaults to ``True``. 14:35:29 14:35:29 :param bool chunked: 14:35:29 If True, urllib3 will send the body using chunked transfer 14:35:29 encoding. Otherwise, urllib3 will send the body using the standard 14:35:29 content-length form. Defaults to False. 14:35:29 14:35:29 :param int body_pos: 14:35:29 Position to seek to in file-like body in the event of a retry or 14:35:29 redirect. 
Typically this won't need to be set because urllib3 will 14:35:29 auto-populate the value when needed. 14:35:29 """ 14:35:29 parsed_url = parse_url(url) 14:35:29 destination_scheme = parsed_url.scheme 14:35:29 14:35:29 if headers is None: 14:35:29 headers = self.headers 14:35:29 14:35:29 if not isinstance(retries, Retry): 14:35:29 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 14:35:29 14:35:29 if release_conn is None: 14:35:29 release_conn = preload_content 14:35:29 14:35:29 # Check host 14:35:29 if assert_same_host and not self.is_same_host(url): 14:35:29 raise HostChangedError(self, url, retries) 14:35:29 14:35:29 # Ensure that the URL we're connecting to is properly encoded 14:35:29 if url.startswith("/"): 14:35:29 url = to_str(_encode_target(url)) 14:35:29 else: 14:35:29 url = to_str(parsed_url.url) 14:35:29 14:35:29 conn = None 14:35:29 14:35:29 # Track whether `conn` needs to be released before 14:35:29 # returning/raising/recursing. Update this variable if necessary, and 14:35:29 # leave `release_conn` constant throughout the function. That way, if 14:35:29 # the function recurses, the original value of `release_conn` will be 14:35:29 # passed down into the recursive call, and its value will be respected. 14:35:29 # 14:35:29 # See issue #651 [1] for details. 14:35:29 # 14:35:29 # [1] 14:35:29 release_this_conn = release_conn 14:35:29 14:35:29 http_tunnel_required = connection_requires_http_tunnel( 14:35:29 self.proxy, self.proxy_config, destination_scheme 14:35:29 ) 14:35:29 14:35:29 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 14:35:29 # have to copy the headers dict so we can safely change it without those 14:35:29 # changes being reflected in anyone else's copy. 14:35:29 if not http_tunnel_required: 14:35:29 headers = headers.copy() # type: ignore[attr-defined] 14:35:29 headers.update(self.proxy_headers) # type: ignore[union-attr] 14:35:29 14:35:29 # Must keep the exception bound to a separate variable or else Python 3 14:35:29 # complains about UnboundLocalError. 14:35:29 err = None 14:35:29 14:35:29 # Keep track of whether we cleanly exited the except block. This 14:35:29 # ensures we do proper cleanup in finally. 14:35:29 clean_exit = False 14:35:29 14:35:29 # Rewind body position, if needed. Record current position 14:35:29 # for future rewinds in the event of a redirect/retry. 14:35:29 body_pos = set_file_position(body, body_pos) 14:35:29 14:35:29 try: 14:35:29 # Request a connection from the queue. 14:35:29 timeout_obj = self._get_timeout(timeout) 14:35:29 conn = self._get_conn(timeout=pool_timeout) 14:35:29 14:35:29 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 14:35:29 14:35:29 # Is this a closed/new connection that requires CONNECT tunnelling? 14:35:29 if self.proxy is not None and http_tunnel_required and conn.is_closed: 14:35:29 try: 14:35:29 self._prepare_proxy(conn) 14:35:29 except (BaseSSLError, OSError, SocketTimeout) as e: 14:35:29 self._raise_timeout( 14:35:29 err=e, url=self.proxy.url, timeout_value=conn.timeout 14:35:29 ) 14:35:29 raise 14:35:29 14:35:29 # If we're going to release the connection in ``finally:``, then 14:35:29 # the response doesn't need to know about the connection. Otherwise 14:35:29 # it will also try to release it and we'll have a double-release 14:35:29 # mess. 
14:35:29 response_conn = conn if not release_conn else None 14:35:29 14:35:29 # Make the request on the HTTPConnection object 14:35:29 > response = self._make_request( 14:35:29 conn, 14:35:29 method, 14:35:29 url, 14:35:29 timeout=timeout_obj, 14:35:29 body=body, 14:35:29 headers=headers, 14:35:29 chunked=chunked, 14:35:29 retries=retries, 14:35:29 response_conn=response_conn, 14:35:29 preload_content=preload_content, 14:35:29 decode_content=decode_content, 14:35:29 **response_kw, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 14:35:29 conn.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 14:35:29 self.endheaders() 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 14:35:29 self._send_output(message_body, encode_chunked=encode_chunked) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 14:35:29 self.send(msg) 14:35:29 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 14:35:29 self.connect() 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 14:35:29 self.sock = self._new_conn() 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def _new_conn(self) -> socket.socket: 14:35:29 """Establish a socket connection and set nodelay settings on it. 14:35:29 14:35:29 :return: New socket connection. 14:35:29 """ 14:35:29 try: 14:35:29 sock = connection.create_connection( 14:35:29 (self._dns_host, self.port), 14:35:29 self.timeout, 14:35:29 source_address=self.source_address, 14:35:29 socket_options=self.socket_options, 14:35:29 ) 14:35:29 except socket.gaierror as e: 14:35:29 raise NameResolutionError(self.host, self, e) from e 14:35:29 except SocketTimeout as e: 14:35:29 raise ConnectTimeoutError( 14:35:29 self, 14:35:29 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 14:35:29 ) from e 14:35:29 14:35:29 except OSError as e: 14:35:29 > raise NewConnectionError( 14:35:29 self, f"Failed to establish a new connection: {e}" 14:35:29 ) from e 14:35:29 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 14:35:29 14:35:29 The above exception was the direct cause of the following exception: 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 
14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 > resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 14:35:29 retries = retries.increment( 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 14:35:29 method = 'GET' 14:35:29 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX' 14:35:29 response = None 14:35:29 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 14:35:29 _pool = 14:35:29 _stacktrace = 14:35:29 14:35:29 def increment( 14:35:29 self, 14:35:29 method: str | None = None, 14:35:29 url: str | None = None, 14:35:29 response: BaseHTTPResponse | None = None, 14:35:29 error: Exception | None = None, 14:35:29 _pool: ConnectionPool | None = None, 14:35:29 _stacktrace: TracebackType | None = None, 14:35:29 ) -> Self: 14:35:29 """Return a new Retry object with incremented retry counters. 14:35:29 14:35:29 :param response: A response object, or None, if the server did not 14:35:29 return a response. 14:35:29 :type response: :class:`~urllib3.response.BaseHTTPResponse` 14:35:29 :param Exception error: An error encountered during the request, or 14:35:29 None if the response was received successfully. 
14:35:29 14:35:29 :return: A new ``Retry`` object. 14:35:29 """ 14:35:29 if self.total is False and error: 14:35:29 # Disabled, indicate to re-raise the error. 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 14:35:29 total = self.total 14:35:29 if total is not None: 14:35:29 total -= 1 14:35:29 14:35:29 connect = self.connect 14:35:29 read = self.read 14:35:29 redirect = self.redirect 14:35:29 status_count = self.status 14:35:29 other = self.other 14:35:29 cause = "unknown" 14:35:29 status = None 14:35:29 redirect_location = None 14:35:29 14:35:29 if error and self._is_connection_error(error): 14:35:29 # Connect retry? 14:35:29 if connect is False: 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif connect is not None: 14:35:29 connect -= 1 14:35:29 14:35:29 elif error and self._is_read_error(error): 14:35:29 # Read retry? 14:35:29 if read is False or method is None or not self._is_method_retryable(method): 14:35:29 raise reraise(type(error), error, _stacktrace) 14:35:29 elif read is not None: 14:35:29 read -= 1 14:35:29 14:35:29 elif error: 14:35:29 # Other retry? 14:35:29 if other is not None: 14:35:29 other -= 1 14:35:29 14:35:29 elif response and response.get_redirect_location(): 14:35:29 # Redirect retry? 14:35:29 if redirect is not None: 14:35:29 redirect -= 1 14:35:29 cause = "too many redirects" 14:35:29 response_redirect_location = response.get_redirect_location() 14:35:29 if response_redirect_location: 14:35:29 redirect_location = response_redirect_location 14:35:29 status = response.status 14:35:29 14:35:29 else: 14:35:29 # Incrementing because of a server error like a 500 in 14:35:29 # status_forcelist and the given method is in the allowed_methods 14:35:29 cause = ResponseError.GENERIC_ERROR 14:35:29 if response and response.status: 14:35:29 if status_count is not None: 14:35:29 status_count -= 1 14:35:29 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 14:35:29 status = response.status 14:35:29 14:35:29 history = self.history + ( 14:35:29 RequestHistory(method, url, error, status, redirect_location), 14:35:29 ) 14:35:29 14:35:29 new_retry = self.new( 14:35:29 total=total, 14:35:29 connect=connect, 14:35:29 read=read, 14:35:29 redirect=redirect, 14:35:29 status=status_count, 14:35:29 other=other, 14:35:29 history=history, 14:35:29 ) 14:35:29 14:35:29 if new_retry.is_exhausted(): 14:35:29 reason = error or ResponseError(cause) 14:35:29 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 14:35:29 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 14:35:29 14:35:29 During handling of the above exception, another exception occurred: 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_06_rdm_portmapping_SRG3_PP1_TXRX(self): 14:35:29 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG3-PP1-TXRX") 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:90: 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr 14:35:29 response = get_request(target_url) 14:35:29 
transportpce_tests/common/test_utils.py:116: in get_request 14:35:29 return requests.request( 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 14:35:29 return session.request(method=method, url=url, **kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 14:35:29 resp = self.send(prep, **send_kwargs) 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 14:35:29 r = adapter.send(request, **kwargs) 14:35:29 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:35:29 14:35:29 self = 14:35:29 request = , stream = False 14:35:29 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 14:35:29 proxies = OrderedDict() 14:35:29 14:35:29 def send( 14:35:29 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 14:35:29 ): 14:35:29 """Sends PreparedRequest object. Returns Response object. 14:35:29 14:35:29 :param request: The :class:`PreparedRequest ` being sent. 14:35:29 :param stream: (optional) Whether to stream the request content. 14:35:29 :param timeout: (optional) How long to wait for the server to send 14:35:29 data before giving up, as a float, or a :ref:`(connect timeout, 14:35:29 read timeout) ` tuple. 14:35:29 :type timeout: float or tuple or urllib3 Timeout object 14:35:29 :param verify: (optional) Either a boolean, in which case it controls whether 14:35:29 we verify the server's TLS certificate, or a string, in which case it 14:35:29 must be a path to a CA bundle to use 14:35:29 :param cert: (optional) Any user-provided SSL certificate to be trusted. 14:35:29 :param proxies: (optional) The proxies dictionary to apply to the request. 14:35:29 :rtype: requests.Response 14:35:29 """ 14:35:29 14:35:29 try: 14:35:29 conn = self.get_connection_with_tls_context( 14:35:29 request, verify, proxies=proxies, cert=cert 14:35:29 ) 14:35:29 except LocationValueError as e: 14:35:29 raise InvalidURL(e, request=request) 14:35:29 14:35:29 self.cert_verify(conn, request.url, verify, cert) 14:35:29 url = self.request_url(request, proxies) 14:35:29 self.add_headers( 14:35:29 request, 14:35:29 stream=stream, 14:35:29 timeout=timeout, 14:35:29 verify=verify, 14:35:29 cert=cert, 14:35:29 proxies=proxies, 14:35:29 ) 14:35:29 14:35:29 chunked = not (request.body is None or "Content-Length" in request.headers) 14:35:29 14:35:29 if isinstance(timeout, tuple): 14:35:29 try: 14:35:29 connect, read = timeout 14:35:29 timeout = TimeoutSauce(connect=connect, read=read) 14:35:29 except ValueError: 14:35:29 raise ValueError( 14:35:29 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 14:35:29 f"or a single float to set both timeouts to the same value." 
14:35:29 ) 14:35:29 elif isinstance(timeout, TimeoutSauce): 14:35:29 pass 14:35:29 else: 14:35:29 timeout = TimeoutSauce(connect=timeout, read=timeout) 14:35:29 14:35:29 try: 14:35:29 resp = conn.urlopen( 14:35:29 method=request.method, 14:35:29 url=url, 14:35:29 body=request.body, 14:35:29 headers=request.headers, 14:35:29 redirect=False, 14:35:29 assert_same_host=False, 14:35:29 preload_content=False, 14:35:29 decode_content=False, 14:35:29 retries=self.max_retries, 14:35:29 timeout=timeout, 14:35:29 chunked=chunked, 14:35:29 ) 14:35:29 14:35:29 except (ProtocolError, OSError) as err: 14:35:29 raise ConnectionError(err, request=request) 14:35:29 14:35:29 except MaxRetryError as e: 14:35:29 if isinstance(e.reason, ConnectTimeoutError): 14:35:29 # TODO: Remove this in 3.0.0: see #2811 14:35:29 if not isinstance(e.reason, NewConnectionError): 14:35:29 raise ConnectTimeout(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, ResponseError): 14:35:29 raise RetryError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _ProxyError): 14:35:29 raise ProxyError(e, request=request) 14:35:29 14:35:29 if isinstance(e.reason, _SSLError): 14:35:29 # This branch is for urllib3 v1.22 and later. 14:35:29 raise SSLError(e, request=request) 14:35:29 14:35:29 > raise ConnectionError(e, request=request) 14:35:29 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 14:35:29 14:35:29 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_06_rdm_portmapping_SRG3_PP1_TXRX 14:35:29 _________ TransportPCEPortMappingTesting.test_08_xpdr_device_connected _________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_08_xpdr_device_connected(self): 14:35:29 response = test_utils.check_device_connection("XPDRA01") 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:104: AssertionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_08_xpdr_device_connected 14:35:29 _________ TransportPCEPortMappingTesting.test_09_xpdr_portmapping_info _________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_09_xpdr_portmapping_info(self): 14:35:29 response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:110: AssertionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_09_xpdr_portmapping_info 14:35:29 _______ TransportPCEPortMappingTesting.test_10_xpdr_portmapping_NETWORK1 _______ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_10_xpdr_portmapping_NETWORK1(self): 14:35:29 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1") 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:123: AssertionError 14:35:29 
----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_10_xpdr_portmapping_NETWORK1 14:35:29 _______ TransportPCEPortMappingTesting.test_11_xpdr_portmapping_NETWORK2 _______ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_11_xpdr_portmapping_NETWORK2(self): 14:35:29 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:134: AssertionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_11_xpdr_portmapping_NETWORK2 14:35:29 _______ TransportPCEPortMappingTesting.test_12_xpdr_portmapping_CLIENT1 ________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_12_xpdr_portmapping_CLIENT1(self): 14:35:29 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1") 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:145: AssertionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_12_xpdr_portmapping_CLIENT1 14:35:29 _______ TransportPCEPortMappingTesting.test_13_xpdr_portmapping_CLIENT2 ________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_13_xpdr_portmapping_CLIENT2(self): 14:35:29 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2") 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:157: AssertionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_13_xpdr_portmapping_CLIENT2 14:35:29 _______ TransportPCEPortMappingTesting.test_14_xpdr_portmapping_CLIENT3 ________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_14_xpdr_portmapping_CLIENT3(self): 14:35:29 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3") 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:169: AssertionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_14_xpdr_portmapping_CLIENT3 14:35:29 _______ TransportPCEPortMappingTesting.test_15_xpdr_portmapping_CLIENT4 ________ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_15_xpdr_portmapping_CLIENT4(self): 14:35:29 response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4") 14:35:29 > self.assertEqual(response['status_code'], requests.codes.ok) 14:35:29 E AssertionError: 409 != 200 14:35:29 14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:181: AssertionError 14:35:29 ----------------------------- Captured stdout call ----------------------------- 14:35:29 execution of test_15_xpdr_portmapping_CLIENT4 14:35:29 _______ TransportPCEPortMappingTesting.test_16_xpdr_device_disconnection _______ 14:35:29 14:35:29 self = 14:35:29 14:35:29 def test_16_xpdr_device_disconnection(self): 14:35:29 response = test_utils.unmount_device("XPDRA01") 14:35:29 > self.assertIn(response.status_code, (requests.codes.ok, requests.codes.no_content)) 14:35:29 E 
AssertionError: 409 not found in (200, 204)
14:35:29 
14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:192: AssertionError
14:35:29 ----------------------------- Captured stdout call -----------------------------
14:35:29 execution of test_16_xpdr_device_disconnection
14:35:29 Searching for pattern 'onDeviceDisConnected:\ XPDRA01' in karaf.log... Pattern not found after 180 seconds! Node XPDRA01 still not deleted from tpce topology...
14:35:29 _______ TransportPCEPortMappingTesting.test_19_rdm_device_disconnection ________
14:35:29 
14:35:29 self = 
14:35:29 
14:35:29 def test_19_rdm_device_disconnection(self):
14:35:29 response = test_utils.unmount_device("ROADMA01")
14:35:29 > self.assertIn(response.status_code, (requests.codes.ok, requests.codes.no_content))
14:35:29 E AssertionError: 409 not found in (200, 204)
14:35:29 
14:35:29 transportpce_tests/1.2.1/test01_portmapping.py:212: AssertionError
14:35:29 ----------------------------- Captured stdout call -----------------------------
14:35:29 execution of test_19_rdm_device_disconnection
14:35:29 Searching for pattern 'onDeviceDisConnected:\ ROADMA01' in karaf.log... Pattern not found after 180 seconds! Node ROADMA01 still not deleted from tpce topology...
14:35:29 =========================== short test summary info ============================
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_01_rdm_device_connection
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_02_rdm_device_connected
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_03_rdm_portmapping_info
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_04_rdm_portmapping_DEG1_TTP_TXRX
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_05_rdm_portmapping_SRG1_PP7_TXRX
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_06_rdm_portmapping_SRG3_PP1_TXRX
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_08_xpdr_device_connected
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_09_xpdr_portmapping_info
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_10_xpdr_portmapping_NETWORK1
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_11_xpdr_portmapping_NETWORK2
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_12_xpdr_portmapping_CLIENT1
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_13_xpdr_portmapping_CLIENT2
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_14_xpdr_portmapping_CLIENT3
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_15_xpdr_portmapping_CLIENT4
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_16_xpdr_device_disconnection
14:35:29 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_19_rdm_device_disconnection
14:35:29 16 failed, 5 passed in 603.14s (0:10:03)
14:35:29 tests121: exit 1 (603.37 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests>
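Editor's note on the tests121 block above: the failures fall into two groups. test_06 never reaches the controller at all (urllib3 raises NewConnectionError "[Errno 111] Connection refused" against localhost:8182, which requests converts into the MaxRetryError/ConnectionError chain shown), while test_08 onwards do get an answer but it is HTTP 409 instead of the expected 200/204. The sketch below is purely illustrative and not part of transportpce_tests/common/test_utils.py: the helper name, retry budget, and the reuse of the portmapping URL from the traceback are all assumptions, shown only to make the distinction between the two failure modes concrete.

```python
# Illustrative sketch only -- not part of transportpce_tests; the helper name,
# retry budget and reuse of the portmapping URL from the traceback above are
# assumptions made for this example.
import time
import requests

RESTCONF_URL = ("http://localhost:8182/rests/data/"
                "transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX")

def wait_for_restconf(url=RESTCONF_URL, attempts=18, delay=10):
    """Poll the RESTCONF endpoint until the TCP listener answers.

    Any HTTP status code (even 409) proves the transport is up; only a
    ConnectionError means the listener behind localhost:8182 is still down,
    which is what produced the NewConnectionError/MaxRetryError chain above.
    """
    for _ in range(attempts):
        try:
            requests.get(url, timeout=5)
            return True
        except requests.exceptions.ConnectionError:
            time.sleep(delay)
    return False
```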
./launch_tests.sh 1.2.1 pid=35731 14:36:09 ........ [100%] 14:36:18 35 passed in 254.44s (0:04:14) 14:36:19 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 14:36:49 .F..F. [100%] 14:37:02 =================================== FAILURES =================================== 14:37:02 _____ TransportPCEtesting.test_02_compareOpenroadmTopologyPortMapping_rdm ______ 14:37:02 14:37:02 self = 14:37:02 14:37:02 def test_02_compareOpenroadmTopologyPortMapping_rdm(self): 14:37:02 resTopo = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:37:02 self.assertEqual(resTopo['status_code'], requests.codes.ok) 14:37:02 nbMapCumul = 0 14:37:02 nbMappings = 0 14:37:02 for node in resTopo['network'][0]['node']: 14:37:02 nodeId = node['node-id'] 14:37:02 # pylint: disable=consider-using-f-string 14:37:02 print("nodeId={}".format(nodeId)) 14:37:02 nodeMapId = nodeId.split("-")[0] + "-" + nodeId.split("-")[1] 14:37:02 print("nodeMapId={}".format(nodeMapId)) 14:37:02 response = test_utils.get_portmapping_node_attr(nodeMapId, "node-info", None) 14:37:02 > self.assertEqual(response['status_code'], requests.codes.ok) 14:37:02 E AssertionError: 409 != 200 14:37:02 14:37:02 transportpce_tests/2.2.1/test02_topo_portmapping.py:64: AssertionError 14:37:02 ----------------------------- Captured stdout call ----------------------------- 14:37:02 nodeId=ROADM-A1-SRG3 14:37:02 nodeMapId=ROADM-A1 14:37:02 nodeId=ROADM-A1-DEG1 14:37:02 nodeMapId=ROADM-A1 14:37:02 nodeId=TAPI-SBI-ABS-NODE 14:37:02 nodeMapId=TAPI-SBI 14:37:02 _____ TransportPCEtesting.test_05_compareOpenroadmTopologyPortMapping_xpdr _____ 14:37:02 14:37:02 self = 14:37:02 14:37:02 def test_05_compareOpenroadmTopologyPortMapping_xpdr(self): 14:37:02 > self.test_02_compareOpenroadmTopologyPortMapping_rdm() 14:37:02 14:37:02 transportpce_tests/2.2.1/test02_topo_portmapping.py:91: 14:37:02 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:37:02 transportpce_tests/2.2.1/test02_topo_portmapping.py:64: in test_02_compareOpenroadmTopologyPortMapping_rdm 14:37:02 self.assertEqual(response['status_code'], requests.codes.ok) 14:37:02 E AssertionError: 409 != 200 14:37:02 ----------------------------- Captured stdout call ----------------------------- 14:37:02 nodeId=TAPI-SBI-ABS-NODE 14:37:02 nodeMapId=TAPI-SBI 14:37:02 =========================== short test summary info ============================ 14:37:02 FAILED transportpce_tests/2.2.1/test02_topo_portmapping.py::TransportPCEtesting::test_02_compareOpenroadmTopologyPortMapping_rdm 14:37:02 FAILED transportpce_tests/2.2.1/test02_topo_portmapping.py::TransportPCEtesting::test_05_compareOpenroadmTopologyPortMapping_xpdr 14:37:02 2 failed, 4 passed in 43.64s 14:37:02 tests121: FAIL ✖ in 10 minutes 9.75 seconds 14:37:02 tests221: exit 1 (298.53 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 pid=38837 14:37:03 tests221: FAIL ✖ in 5 minutes 5.27 seconds 14:37:03 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:37:08 tests_hybrid: freeze> python -m pip freeze --all 14:37:08 tests_hybrid: 
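Editor's note on the tests221 failures above: here the controller answers, but test_02_compareOpenroadmTopologyPortMapping_rdm derives the portmapping node id by joining the first two dash-separated tokens of every topology node-id, so the abstract entry TAPI-SBI-ABS-NODE becomes "TAPI-SBI" and the portmapping lookup for that id returns 409 (the captured stdout shows exactly this pair). The sketch below only illustrates the id derivation with the abstract node filtered out; the skip-set constant and helper name are assumptions, and whether skipping such nodes (rather than changing the topology content) is the intended fix is not something this log settles.

```python
# Illustrative sketch only; ABSTRACT_NODE_IDS and the helper name are assumed
# for this example. The split/join mirrors what the failing test does above.
ABSTRACT_NODE_IDS = {"TAPI-SBI-ABS-NODE"}  # topology nodes with no portmapping entry

def portmapping_node_ids(topology_nodes):
    """Yield 'ROADM-A1'-style portmapping ids for device nodes only,
    skipping abstract entries such as TAPI-SBI-ABS-NODE, whose derived id
    made get_portmapping_node_attr() answer 409 in test_02/test_05."""
    for node in topology_nodes:
        node_id = node['node-id']
        if node_id in ABSTRACT_NODE_IDS:
            continue
        tokens = node_id.split("-")
        yield "-".join(tokens[:2])
```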
bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:37:08 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid 14:37:08 using environment variables from ./karaf121.env 14:37:08 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 14:37:53 ..............F...F...F..F...F..F...F..F...F....... [100%] 14:39:31 =================================== FAILURES =================================== 14:39:31 _________ TransportPCEFulltesting.test_15_check_update_openroadm_topo __________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_15_check_update_openroadm_topo(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 nb_updated_tp = 0 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:234: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_15_check_update_openroadm_topo 14:39:31 ________ TransportPCEFulltesting.test_19_check_update_openroadm_topo_ok ________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_19_check_update_openroadm_topo_ok(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:297: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_19_check_update_openroadm_topo_ok 14:39:31 _________ TransportPCEFulltesting.test_23_check_update_openroadm_topo __________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_23_check_update_openroadm_topo(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 nb_updated_tp = 0 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:348: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_23_check_update_openroadm_topo 14:39:31 ________ TransportPCEFulltesting.test_26_check_update_openroadm_topo_ok ________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_26_check_update_openroadm_topo_ok(self): 14:39:31 > 
self.test_19_check_update_openroadm_topo_ok() 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:392: 14:39:31 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_19_check_update_openroadm_topo_ok(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:297: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_26_check_update_openroadm_topo_ok 14:39:31 _________ TransportPCEFulltesting.test_30_check_update_openroadm_topo __________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_30_check_update_openroadm_topo(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 nb_updated_tp = 0 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:432: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_30_check_update_openroadm_topo 14:39:31 ________ TransportPCEFulltesting.test_33_check_update_openroadm_topo_ok ________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_33_check_update_openroadm_topo_ok(self): 14:39:31 > self.test_19_check_update_openroadm_topo_ok() 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:476: 14:39:31 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_19_check_update_openroadm_topo_ok(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:297: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_33_check_update_openroadm_topo_ok 14:39:31 _________ TransportPCEFulltesting.test_37_check_update_openroadm_topo __________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_37_check_update_openroadm_topo(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 nb_updated_tp = 0 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E 
KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:514: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_37_check_update_openroadm_topo 14:39:31 ________ TransportPCEFulltesting.test_40_check_update_openroadm_topo_ok ________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_40_check_update_openroadm_topo_ok(self): 14:39:31 > self.test_19_check_update_openroadm_topo_ok() 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:556: 14:39:31 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_19_check_update_openroadm_topo_ok(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:297: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_40_check_update_openroadm_topo_ok 14:39:31 _________ TransportPCEFulltesting.test_44_check_update_openroadm_topo __________ 14:39:31 14:39:31 self = 14:39:31 14:39:31 def test_44_check_update_openroadm_topo(self): 14:39:31 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 14:39:31 self.assertEqual(response['status_code'], requests.codes.ok) 14:39:31 node_list = response['network'][0]['node'] 14:39:31 nb_updated_tp = 0 14:39:31 for node in node_list: 14:39:31 > self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService') 14:39:31 E KeyError: 'org-openroadm-common-network:operational-state' 14:39:31 14:39:31 transportpce_tests/hybrid/test01_device_change_notifications.py:596: KeyError 14:39:31 ----------------------------- Captured stdout call ----------------------------- 14:39:31 execution of test_44_check_update_openroadm_topo 14:39:31 =========================== short test summary info ============================ 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_15_check_update_openroadm_topo 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_19_check_update_openroadm_topo_ok 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_23_check_update_openroadm_topo 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_26_check_update_openroadm_topo_ok 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_30_check_update_openroadm_topo 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_33_check_update_openroadm_topo_ok 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_37_check_update_openroadm_topo 14:39:31 FAILED 
transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_40_check_update_openroadm_topo_ok 14:39:31 FAILED transportpce_tests/hybrid/test01_device_change_notifications.py::TransportPCEFulltesting::test_44_check_update_openroadm_topo 14:39:31 9 failed, 42 passed in 142.16s (0:02:22) 14:39:31 tests_hybrid: exit 1 (142.40 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid pid=40311 14:39:31 tests_hybrid: FAIL ✖ in 2 minutes 28.09 seconds 14:39:31 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:39:36 buildlighty: freeze> python -m pip freeze --all 14:39:37 buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 14:39:37 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh 14:39:37 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED 14:39:49 [ERROR] COMPILATION ERROR : 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 14:39:49 symbol: class YangModuleInfo 14:39:49 location: package org.opendaylight.yangtools.binding 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 14:39:49 symbol: class YangModuleInfo 14:39:49 location: class io.lighty.controllers.tpce.utils.TPCEUtils 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 14:39:49 symbol: class YangModuleInfo 14:39:49 location: class io.lighty.controllers.tpce.utils.TPCEUtils 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 14:39:49 symbol: class YangModuleInfo 14:39:49 location: class io.lighty.controllers.tpce.utils.TPCEUtils 14:39:49 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure: 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 14:39:49 [ERROR] symbol: class YangModuleInfo 14:39:49 [ERROR] location: package org.opendaylight.yangtools.binding 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 14:39:49 [ERROR] symbol: class YangModuleInfo 14:39:49 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 14:39:49 [ERROR] symbol: class YangModuleInfo 
14:39:49 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 14:39:49 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 14:39:49 [ERROR] symbol: class YangModuleInfo 14:39:49 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 14:39:49 [ERROR] -> [Help 1] 14:39:49 [ERROR] 14:39:49 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. 14:39:49 [ERROR] Re-run Maven using the -X switch to enable full debug logging. 14:39:49 [ERROR] 14:39:49 [ERROR] For more information about the errors and possible solutions, please read the following articles: 14:39:49 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException 14:39:49 unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP. 14:39:49 buildlighty: exit 9 (12.22 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh pid=41530 14:39:49 buildlighty: command failed but is marked ignore outcome so handling it as success 14:39:49 buildcontroller: OK (102.38=setup[7.70]+cmd[94.68] seconds) 14:39:49 testsPCE: OK (305.19=setup[72.71]+cmd[232.48] seconds) 14:39:49 sims: OK (11.55=setup[7.44]+cmd[4.11] seconds) 14:39:49 build_karaf_tests121: OK (52.01=setup[7.32]+cmd[44.69] seconds) 14:39:49 tests121: FAIL code 1 (609.75=setup[6.38]+cmd[603.37] seconds) 14:39:49 build_karaf_tests221: OK (53.47=setup[7.45]+cmd[46.02] seconds) 14:39:49 tests_tapi: FAIL code 1 (228.89=setup[7.76]+cmd[221.13] seconds) 14:39:49 tests221: FAIL code 1 (305.27=setup[6.74]+cmd[298.53] seconds) 14:39:49 build_karaf_tests71: OK (48.47=setup[12.50]+cmd[35.97] seconds) 14:39:49 tests71: OK (420.26=setup[7.15]+cmd[413.11] seconds) 14:39:49 build_karaf_tests_hybrid: OK (50.31=setup[9.19]+cmd[41.12] seconds) 14:39:49 tests_hybrid: FAIL code 1 (148.09=setup[5.69]+cmd[142.40] seconds) 14:39:49 buildlighty: OK (18.52=setup[6.30]+cmd[12.22] seconds) 14:39:49 docs: OK (31.62=setup[29.22]+cmd[2.41] seconds) 14:39:49 docs-linkcheck: OK (33.21=setup[29.56]+cmd[3.65] seconds) 14:39:49 checkbashisms: OK (2.88=setup[1.94]+cmd[0.02,0.05,0.87] seconds) 14:39:49 pre-commit: FAIL code 1 (37.12=setup[3.19]+cmd[0.00,0.01,33.92] seconds) 14:39:49 pylint: FAIL code 1 (27.44=setup[5.00]+cmd[22.44] seconds) 14:39:49 evaluation failed :( (1276.95 seconds) 14:39:49 + tox_status=255 14:39:49 + echo '---> Completed tox runs' 14:39:49 ---> Completed tox runs 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/build_karaf_tests121/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=build_karaf_tests121 14:39:49 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/build_karaf_tests221/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=build_karaf_tests221 14:39:49 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/build_karaf_tests71/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=build_karaf_tests71 14:39:49 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/build_karaf_tests_hybrid/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + 
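Editor's note, looking back at the tests_hybrid run: all nine failures are the same KeyError. At least one node returned by get_ietf_network_request('openroadm-topology', 'config') carries no 'org-openroadm-common-network:operational-state' leaf, so the direct node[...] subscript raises before any assertion is evaluated. A defensive variant of that loop is sketched below purely to illustrate the failure mode; the helper name is hypothetical, and whether the right fix is to skip such nodes or to have the topology writer populate the leaf cannot be decided from this log alone.

```python
# Illustrative sketch only; check_operational_states() is a hypothetical helper,
# not code from transportpce_tests/hybrid/test01_device_change_notifications.py.
OPER_STATE = 'org-openroadm-common-network:operational-state'

def check_operational_states(node_list, expected='inService'):
    """Return the node-ids whose operational-state differs from `expected`,
    ignoring nodes that do not carry the leaf at all (the condition that
    raised KeyError in tests 15/19/23/26/30/33/37/40/44 above)."""
    mismatched = []
    for node in node_list:
        state = node.get(OPER_STATE)
        if state is None:
            continue  # leaf absent -- this is what the original subscript tripped over
        if state != expected:
            mismatched.append(node.get('node-id'))
    return mismatched
```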
tox_env=build_karaf_tests_hybrid 14:39:49 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests_hybrid 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/buildcontroller/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=buildcontroller 14:39:49 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/buildlighty/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=buildlighty 14:39:49 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/checkbashisms/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=checkbashisms 14:39:49 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/docs-linkcheck/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=docs-linkcheck 14:39:49 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/docs/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=docs 14:39:49 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/pre-commit/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=pre-commit 14:39:49 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/pylint/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=pylint 14:39:49 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/sims/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=sims 14:39:49 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/tests121/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=tests121 14:39:49 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/tests221/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=tests221 14:39:49 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/tests71/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=tests71 14:39:49 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/testsPCE/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=testsPCE 14:39:49 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/tests_hybrid/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=tests_hybrid 14:39:49 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid 14:39:49 + for i in .tox/*/log 14:39:49 ++ echo .tox/tests_tapi/log 14:39:49 ++ awk -F/ '{print $2}' 14:39:49 + tox_env=tests_tapi 14:39:49 + cp -r .tox/tests_tapi/log 
/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi 14:39:49 + DOC_DIR=docs/_build/html 14:39:49 + [[ -d docs/_build/html ]] 14:39:49 + echo '---> Archiving generated docs' 14:39:49 ---> Archiving generated docs 14:39:49 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 14:39:49 + echo '---> tox-run.sh ends' 14:39:49 ---> tox-run.sh ends 14:39:49 + test 255 -eq 0 14:39:49 + exit 255 14:39:49 ++ '[' 1 = 1 ']' 14:39:49 ++ '[' -x /usr/bin/clear_console ']' 14:39:49 ++ /usr/bin/clear_console -q 14:39:49 Build step 'Execute shell' marked build as failure 14:39:49 $ ssh-agent -k 14:39:49 unset SSH_AUTH_SOCK; 14:39:49 unset SSH_AGENT_PID; 14:39:49 echo Agent pid 28784 killed; 14:39:49 [ssh-agent] Stopped. 14:39:49 [PostBuildScript] - [INFO] Executing post build scripts. 14:39:49 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12667497220407212178.sh 14:39:49 ---> sysstat.sh 14:39:50 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6320591312515871789.sh 14:39:50 ---> package-listing.sh 14:39:50 ++ tr '[:upper:]' '[:lower:]' 14:39:50 ++ facter osfamily 14:39:50 + OS_FAMILY=debian 14:39:50 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master 14:39:50 + START_PACKAGES=/tmp/packages_start.txt 14:39:50 + END_PACKAGES=/tmp/packages_end.txt 14:39:50 + DIFF_PACKAGES=/tmp/packages_diff.txt 14:39:50 + PACKAGES=/tmp/packages_start.txt 14:39:50 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 14:39:50 + PACKAGES=/tmp/packages_end.txt 14:39:50 + case "${OS_FAMILY}" in 14:39:50 + grep '^ii' 14:39:50 + dpkg -l 14:39:50 + '[' -f /tmp/packages_start.txt ']' 14:39:50 + '[' -f /tmp/packages_end.txt ']' 14:39:50 + diff /tmp/packages_start.txt /tmp/packages_end.txt 14:39:50 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 14:39:50 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 14:39:50 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 14:39:50 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6230340941119952969.sh 14:39:50 ---> capture-instance-metadata.sh 14:39:50 Setup pyenv: 14:39:50 system 14:39:50 3.8.13 14:39:50 3.9.13 14:39:50 3.10.13 14:39:50 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 14:39:50 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Voun from file:/tmp/.os_lf_venv 14:39:51 lf-activate-venv(): INFO: Installing: lftools 14:40:02 lf-activate-venv(): INFO: Adding /tmp/venv-Voun/bin to PATH 14:40:02 INFO: Running in OpenStack, capturing instance metadata 14:40:03 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins13028150574958352126.sh 14:40:03 provisioning config files... 14:40:03 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #2059 14:40:03 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config13223845434847637002tmp 14:40:03 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] 14:40:03 Run condition [Regular expression match] enabling perform for step [Provide Configuration files] 14:40:03 provisioning config files... 14:40:04 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials 14:40:04 [EnvInject] - Injecting environment variables from a build step. 
14:40:04 [EnvInject] - Injecting as environment variables the properties content 14:40:04 SERVER_ID=logs 14:40:04 14:40:04 [EnvInject] - Variables injected successfully. 14:40:04 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins5091343591726403524.sh 14:40:04 ---> create-netrc.sh 14:40:04 WARN: Log server credential not found. 14:40:04 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3255845244754077058.sh 14:40:04 ---> python-tools-install.sh 14:40:04 Setup pyenv: 14:40:04 system 14:40:04 3.8.13 14:40:04 3.9.13 14:40:04 3.10.13 14:40:04 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 14:40:04 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Voun from file:/tmp/.os_lf_venv 14:40:05 lf-activate-venv(): INFO: Installing: lftools 14:40:13 lf-activate-venv(): INFO: Adding /tmp/venv-Voun/bin to PATH 14:40:13 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins17969527319412276643.sh 14:40:13 ---> sudo-logs.sh 14:40:13 Archiving 'sudo' log.. 14:40:14 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6522375604810875731.sh 14:40:14 ---> job-cost.sh 14:40:14 Setup pyenv: 14:40:14 system 14:40:14 3.8.13 14:40:14 3.9.13 14:40:14 3.10.13 14:40:14 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 14:40:14 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Voun from file:/tmp/.os_lf_venv 14:40:15 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 14:40:19 lf-activate-venv(): INFO: Adding /tmp/venv-Voun/bin to PATH 14:40:19 INFO: No Stack... 14:40:20 INFO: Retrieving Pricing Info for: v3-standard-4 14:40:20 INFO: Archiving Costs 14:40:20 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins6914415108934792672.sh 14:40:20 ---> logs-deploy.sh 14:40:20 Setup pyenv: 14:40:20 system 14:40:20 3.8.13 14:40:20 3.9.13 14:40:20 3.10.13 14:40:20 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 14:40:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Voun from file:/tmp/.os_lf_venv 14:40:21 lf-activate-venv(): INFO: Installing: lftools 14:40:29 lf-activate-venv(): INFO: Adding /tmp/venv-Voun/bin to PATH 14:40:29 WARNING: Nexus logging server not set 14:40:29 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/2059/ 14:40:29 INFO: archiving logs to S3 14:40:31 ---> uname -a: 14:40:31 Linux prd-ubuntu2004-docker-4c-16g-41493 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 14:40:31 14:40:31 14:40:31 ---> lscpu: 14:40:31 Architecture: x86_64 14:40:31 CPU op-mode(s): 32-bit, 64-bit 14:40:31 Byte Order: Little Endian 14:40:31 Address sizes: 40 bits physical, 48 bits virtual 14:40:31 CPU(s): 4 14:40:31 On-line CPU(s) list: 0-3 14:40:31 Thread(s) per core: 1 14:40:31 Core(s) per socket: 1 14:40:31 Socket(s): 4 14:40:31 NUMA node(s): 1 14:40:31 Vendor ID: AuthenticAMD 14:40:31 CPU family: 23 14:40:31 Model: 49 14:40:31 Model name: AMD EPYC-Rome Processor 14:40:31 Stepping: 0 14:40:31 CPU MHz: 2800.000 14:40:31 BogoMIPS: 5600.00 14:40:31 Virtualization: AMD-V 14:40:31 Hypervisor vendor: KVM 14:40:31 Virtualization type: full 14:40:31 L1d cache: 128 KiB 14:40:31 L1i cache: 128 KiB 14:40:31 L2 cache: 2 MiB 14:40:31 L3 cache: 64 MiB 14:40:31 NUMA node0 CPU(s): 0-3 14:40:31 Vulnerability Gather data sampling: Not affected 14:40:31 Vulnerability Itlb multihit: Not affected 14:40:31 
Vulnerability L1tf: Not affected 14:40:31 Vulnerability Mds: Not affected 14:40:31 Vulnerability Meltdown: Not affected 14:40:31 Vulnerability Mmio stale data: Not affected 14:40:31 Vulnerability Retbleed: Vulnerable 14:40:31 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp 14:40:31 Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 14:40:31 Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected 14:40:31 Vulnerability Srbds: Not affected 14:40:31 Vulnerability Tsx async abort: Not affected 14:40:31 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities 14:40:31 14:40:31 14:40:31 ---> nproc: 14:40:31 4 14:40:31 14:40:31 14:40:31 ---> df -h: 14:40:31 Filesystem Size Used Avail Use% Mounted on 14:40:31 udev 7.8G 0 7.8G 0% /dev 14:40:31 tmpfs 1.6G 1.1M 1.6G 1% /run 14:40:31 /dev/vda1 78G 16G 62G 21% / 14:40:31 tmpfs 7.9G 0 7.9G 0% /dev/shm 14:40:31 tmpfs 5.0M 0 5.0M 0% /run/lock 14:40:31 tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup 14:40:31 /dev/loop0 62M 62M 0 100% /snap/core20/1405 14:40:31 /dev/loop1 44M 44M 0 100% /snap/snapd/15177 14:40:31 /dev/loop2 68M 68M 0 100% /snap/lxd/22753 14:40:31 /dev/vda15 105M 6.1M 99M 6% /boot/efi 14:40:31 tmpfs 1.6G 0 1.6G 0% /run/user/1001 14:40:31 /dev/loop3 92M 92M 0 100% /snap/lxd/29619 14:40:31 14:40:31 14:40:31 ---> free -m: 14:40:31 total used free shared buff/cache available 14:40:31 Mem: 15997 661 5769 1 9566 14996 14:40:31 Swap: 1023 0 1023 14:40:31 14:40:31 14:40:31 ---> ip addr: 14:40:31 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 14:40:31 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 14:40:31 inet 127.0.0.1/8 scope host lo 14:40:31 valid_lft forever preferred_lft forever 14:40:31 inet6 ::1/128 scope host 14:40:31 valid_lft forever preferred_lft forever 14:40:31 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 14:40:31 link/ether fa:16:3e:32:31:b8 brd ff:ff:ff:ff:ff:ff 14:40:31 inet 10.30.171.129/23 brd 10.30.171.255 scope global dynamic ens3 14:40:31 valid_lft 82474sec preferred_lft 82474sec 14:40:31 inet6 fe80::f816:3eff:fe32:31b8/64 scope link 14:40:31 valid_lft forever preferred_lft forever 14:40:31 3: docker0: mtu 1458 qdisc noqueue state DOWN group default 14:40:31 link/ether 02:42:c2:e9:b4:04 brd ff:ff:ff:ff:ff:ff 14:40:31 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 14:40:31 valid_lft forever preferred_lft forever 14:40:31 14:40:31 14:40:31 ---> sar -b -r -n DEV: 14:40:31 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-41493) 10/11/24 _x86_64_ (4 CPU) 14:40:31 14:40:31 13:35:09 LINUX RESTART (4 CPU) 14:40:31 14:40:31 13:36:01 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 14:40:31 13:37:01 63.06 46.43 16.63 0.00 500.10 865.59 0.00 14:40:31 13:38:01 1.52 0.02 1.50 0.00 0.93 18.66 0.00 14:40:31 13:39:01 0.55 0.00 0.55 0.00 0.00 7.33 0.00 14:40:31 
13:40:01 0.98 0.02 0.97 0.00 0.13 14.40 0.00 14:40:31 13:41:01 22.60 4.22 18.38 0.00 2582.50 3783.37 0.00 14:40:31 13:42:01 2.02 0.00 2.02 0.00 0.00 24.80 0.00 14:40:31 13:43:01 0.63 0.00 0.63 0.00 0.00 8.93 0.00 14:40:31 13:44:01 0.93 0.00 0.93 0.00 0.00 13.20 0.00 14:40:31 13:45:01 0.67 0.00 0.67 0.00 0.00 9.73 0.00 14:40:31 13:46:01 1.53 0.00 1.53 0.00 0.00 28.26 0.00 14:40:31 13:47:01 0.67 0.00 0.67 0.00 0.00 8.93 0.00 14:40:31 13:48:01 0.80 0.00 0.80 0.00 0.00 11.86 0.00 14:40:31 13:49:01 0.67 0.00 0.67 0.00 0.00 8.80 0.00 14:40:31 13:50:01 1.02 0.00 1.02 0.00 0.00 14.66 0.00 14:40:31 13:51:01 1.02 0.02 1.00 0.00 0.13 14.26 0.00 14:40:31 13:52:01 0.85 0.00 0.85 0.00 0.00 12.53 0.00 14:40:31 13:53:01 0.78 0.00 0.78 0.00 0.00 10.93 0.00 14:40:31 13:54:01 0.88 0.00 0.88 0.00 0.00 12.80 0.00 14:40:31 13:55:01 0.72 0.00 0.72 0.00 0.00 10.26 0.00 14:40:31 13:56:01 0.90 0.00 0.90 0.00 0.00 13.73 0.00 14:40:31 13:57:01 0.82 0.00 0.82 0.00 0.00 11.06 0.00 14:40:31 13:58:01 1.10 0.00 1.10 0.00 0.00 14.53 0.00 14:40:31 13:59:01 0.77 0.00 0.77 0.00 0.00 10.66 0.00 14:40:31 14:00:01 0.83 0.00 0.83 0.00 0.00 12.66 0.00 14:40:31 14:01:01 0.75 0.00 0.75 0.00 0.00 11.46 0.00 14:40:31 14:02:01 1.20 0.00 1.20 0.00 0.00 15.73 0.00 14:40:31 14:03:01 0.72 0.00 0.72 0.00 0.00 10.26 0.00 14:40:31 14:04:01 0.92 0.00 0.92 0.00 0.00 13.06 0.00 14:40:31 14:05:01 0.65 0.00 0.65 0.00 0.00 8.93 0.00 14:40:31 14:06:01 1.15 0.00 1.15 0.00 0.00 16.00 0.00 14:40:31 14:07:01 0.80 0.00 0.80 0.00 0.00 10.26 0.00 14:40:31 14:08:01 1.10 0.00 1.10 0.00 0.00 15.33 0.00 14:40:31 14:09:01 0.68 0.00 0.68 0.00 0.00 9.87 0.00 14:40:31 14:10:01 1.17 0.00 1.17 0.00 0.00 15.20 0.00 14:40:31 14:11:01 0.87 0.00 0.87 0.00 0.00 11.86 0.00 14:40:31 14:12:01 1.07 0.00 1.07 0.00 0.00 14.00 0.00 14:40:31 14:13:01 0.68 0.00 0.68 0.00 0.00 9.47 0.00 14:40:31 14:14:01 0.95 0.00 0.95 0.00 0.00 14.00 0.00 14:40:31 14:15:01 0.73 0.00 0.73 0.00 0.00 10.13 0.00 14:40:31 14:16:01 1.22 0.00 1.22 0.00 0.00 17.73 0.00 14:40:31 14:17:01 0.83 0.02 0.82 0.00 0.13 10.66 0.00 14:40:31 14:18:01 182.59 24.56 158.02 0.00 1985.27 6264.96 0.00 14:40:31 14:19:01 174.64 28.26 146.38 0.00 1979.47 22334.82 0.00 14:40:31 14:20:01 151.50 14.56 136.94 0.00 820.93 47712.63 0.00 14:40:31 14:21:01 126.11 0.43 125.67 0.00 35.05 81688.10 0.00 14:40:31 14:22:01 162.93 10.87 152.07 0.00 383.73 119156.80 0.00 14:40:31 14:23:01 196.62 1.12 195.50 0.00 49.58 29410.06 0.00 14:40:31 14:24:01 87.42 2.70 84.72 0.00 211.56 1582.94 0.00 14:40:31 14:25:01 99.72 0.53 99.18 0.00 97.70 1644.39 0.00 14:40:31 14:26:01 232.09 0.13 231.96 0.00 10.53 16110.25 0.00 14:40:31 14:27:01 73.28 0.07 73.21 0.00 2.13 6108.63 0.00 14:40:31 14:28:01 2.77 0.00 2.77 0.00 0.00 49.73 0.00 14:40:31 14:29:01 83.82 0.00 83.82 0.00 0.00 1205.93 0.00 14:40:31 14:30:01 2.97 0.00 2.97 0.00 0.00 66.26 0.00 14:40:31 14:31:01 26.91 0.00 26.91 0.00 0.00 416.73 0.00 14:40:31 14:32:01 56.66 0.00 56.66 0.00 0.00 917.18 0.00 14:40:31 14:33:01 96.72 0.00 96.72 0.00 0.00 3574.34 0.00 14:40:31 14:34:01 1.85 0.00 1.85 0.00 0.00 31.99 0.00 14:40:31 14:35:01 1.65 0.00 1.65 0.00 0.00 23.20 0.00 14:40:31 14:36:01 2.77 0.00 2.77 0.00 0.00 56.26 0.00 14:40:31 14:37:01 83.42 0.00 83.42 0.00 0.00 1202.87 0.00 14:40:31 14:38:01 102.78 0.02 102.77 0.00 0.13 9997.27 0.00 14:40:31 14:39:01 3.85 0.00 3.85 0.00 0.00 150.37 0.00 14:40:31 14:40:01 19.74 4.72 15.03 0.00 108.10 1334.89 0.00 14:40:31 Average: 32.73 2.17 30.56 0.00 137.01 5565.60 0.00 14:40:31 14:40:31 13:36:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit 
14:40:31 13:36:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
14:40:31 Average: 10262873 14381323 1593412 9.73 112649 4094517 2457752 14.10 2481333 3243450 62699
14:40:31 
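During the functional-test phase memory usage rose well above that average: kbmemused peaked at 7177524 kB (43.82 %memused, 47.13 %commit) in the 14:24:01 sample. A small illustrative sketch (again not part of the job) that scans sar -r style sample lines and flags intervals crossing a %memused threshold; the sample line is that 14:24:01 record:

    # Illustrative sketch only -- flags sar -r intervals whose %memused crosses
    # a threshold. Column order follows the header shown above.
    HEADER = ("kbmemfree kbavail kbmemused %memused kbbuffers kbcached "
              "kbcommit %commit kbactive kbinact kbdirty").split()
    LINES = ["14:24:01 400560 8765188 7177524 43.82 224436 7991740 8214072 47.13 9221384 6073252 460"]

    def high_memory_intervals(lines, threshold=40.0):
        idx = HEADER.index("%memused") + 1      # +1 skips the timestamp field
        for line in lines:
            fields = line.split()
            if float(fields[idx]) >= threshold:
                yield fields[0], float(fields[idx])

    if __name__ == "__main__":
        for ts, used in high_memory_intervals(LINES):
            print(f"{ts}: %memused={used}")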
14:40:31 13:36:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
14:40:31 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:40:31 Average: ens3 23.25 17.33 311.91 2.59 0.00 0.00 0.00 0.00
14:40:31 Average: lo 5.88 5.88 3.29 3.29 0.00 0.00 0.00 0.00
14:40:31 
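ens3 carried the bulk of the traffic in this run (311.91 rxkB/s on average, with peaks above 6000 rxkB/s around 14:19), while docker0 stayed idle. An illustrative sketch that ranks interfaces by combined average throughput from such sar network-device "Average:" lines:

    # Illustrative sketch only -- ranks interfaces by combined average
    # throughput (rxkB/s + txkB/s) using the "Average:" lines shown above.
    AVERAGES = """\
    Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    Average: ens3 23.25 17.33 311.91 2.59 0.00 0.00 0.00 0.00
    Average: lo 5.88 5.88 3.29 3.29 0.00 0.00 0.00 0.00
    """

    def rank_by_throughput(text):
        ranked = []
        for line in text.splitlines():
            if not line.strip():
                continue
            _, iface, *values = line.split()
            rxkb, txkb = float(values[2]), float(values[3])   # rxkB/s, txkB/s
            ranked.append((iface, rxkb + txkb))
        return sorted(ranked, key=lambda item: item[1], reverse=True)

    if __name__ == "__main__":
        for iface, kbps in rank_by_throughput(AVERAGES):
            print(f"{iface}: {kbps:.2f} kB/s")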
14:40:31 
14:40:31 ---> sar -P ALL:
14:40:31 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-41493) 10/11/24 _x86_64_ (4 CPU)
14:40:31 
14:40:31 13:35:09 LINUX RESTART (4 CPU)
14:40:31 
14:40:31 13:36:01 CPU %user %nice %system %iowait %steal %idle
14:40:31 Average: all 12.47 0.15 0.65 0.34 0.04 86.34
14:40:31 Average: 0 12.12 0.15 0.66 0.38 0.04 86.65
14:40:31 Average: 1 12.61 0.15 0.60 0.35 0.04 86.24
14:40:31 Average: 2 12.82 0.15 0.71 0.27 0.04 86.02
14:40:31 Average: 3 12.35 0.15 0.64 0.36 0.05 86.45
14:40:31 
14:40:31 
14:40:31 
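Overall the 4-vCPU executor stayed about 86 % idle; the heaviest interval was 14:22:01, when %idle on the "all" row dropped to 5.02 with 6.44 %iowait. An illustrative sketch that locates the busiest interval (lowest %idle) in sar -P ALL style output; the two sample rows are copied from this run's per-CPU report:

    # Illustrative sketch only -- finds the busiest sampling interval (lowest
    # %idle on the "all" row) in sar -P ALL style output.
    ROWS = """\
    14:21:01 all 73.26 0.00 3.85 1.90 0.11 20.88
    14:22:01 all 84.82 0.00 3.60 6.44 0.13 5.02
    """

    def busiest_interval(text):
        best = None
        for line in text.splitlines():
            if not line.strip():
                continue
            ts, cpu, *values = line.split()
            if cpu != "all":
                continue
            idle = float(values[-1])          # %idle is the last column
            if best is None or idle < best[1]:
                best = (ts, idle)
        return best

    if __name__ == "__main__":
        ts, idle = busiest_interval(ROWS)
        print(f"busiest interval: {ts} ({idle:.2f}% idle)")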