20:32:56 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/113593 20:32:56 Running as SYSTEM 20:32:57 [EnvInject] - Loading node environment variables. 20:32:57 Building remotely on prd-ubuntu2004-docker-4c-16g-24687 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-scandium 20:32:57 [ssh-agent] Looking for ssh-agent implementation... 20:32:57 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 20:32:57 $ ssh-agent 20:32:57 SSH_AUTH_SOCK=/tmp/ssh-myPly27TpxZx/agent.13748 20:32:57 SSH_AGENT_PID=13753 20:32:57 [ssh-agent] Started. 20:32:57 Running ssh-add (command line suppressed) 20:32:57 Identity added: /w/workspace/transportpce-tox-verify-scandium@tmp/private_key_1782884005649357238.key (/w/workspace/transportpce-tox-verify-scandium@tmp/private_key_1782884005649357238.key) 20:32:57 [ssh-agent] Using credentials jenkins (jenkins-ssh) 20:32:57 The recommended git tool is: NONE 20:33:03 using credential jenkins-ssh 20:33:03 Wiping out workspace first. 20:33:03 Cloning the remote Git repository 20:33:03 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce 20:33:03 > git init /w/workspace/transportpce-tox-verify-scandium # timeout=10 20:33:03 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 20:33:03 > git --version # timeout=10 20:33:03 > git --version # 'git version 2.25.1' 20:33:03 using GIT_SSH to set credentials jenkins-ssh 20:33:04 Verifying host key using known hosts file 20:33:04 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 20:33:04 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 20:33:07 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 20:33:07 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 20:33:08 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 20:33:08 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 20:33:08 using GIT_SSH to set credentials jenkins-ssh 20:33:08 Verifying host key using known hosts file 20:33:08 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
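The 'Known hosts file' warning above means the Git plugin's host-key verification strategy found no known_hosts entries on this agent. A minimal sketch of pre-seeding that file so the strategy can succeed is shown below; the host name and path are illustrative assumptions, not the job's actual remediation:

# Hedged sketch: pre-seed known_hosts on a build agent so the 'Known hosts file' strategy can verify hosts.
# The host name (git.opendaylight.org) and the ~/.ssh path are assumptions for illustration only.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keyscan -H git.opendaylight.org >> ~/.ssh/known_hosts
chmod 600 ~/.ssh/known_hosts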
20:33:08 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/93/113593/4 # timeout=10 20:33:08 > git rev-parse 25e11989b58d618d97414a24deb1347ba1abc808^{commit} # timeout=10 20:33:08 Checking out Revision 25e11989b58d618d97414a24deb1347ba1abc808 (refs/changes/93/113593/4) 20:33:08 > git config core.sparsecheckout # timeout=10 20:33:08 > git checkout -f 25e11989b58d618d97414a24deb1347ba1abc808 # timeout=10 20:33:12 Commit message: "Bump netconf to 8.0.2" 20:33:12 > git rev-parse FETCH_HEAD^{commit} # timeout=10 20:33:12 > git rev-list --no-walk 0369165863367431bc63630f29759e6299e46ef7 # timeout=10 20:33:12 > git remote # timeout=10 20:33:12 > git submodule init # timeout=10 20:33:12 > git submodule sync # timeout=10 20:33:12 > git config --get remote.origin.url # timeout=10 20:33:12 > git submodule init # timeout=10 20:33:12 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 20:33:12 ERROR: No submodules found. 20:33:13 provisioning config files... 20:33:13 copy managed file [npmrc] to file:/home/jenkins/.npmrc 20:33:13 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 20:33:13 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins12906811165339661670.sh 20:33:13 ---> python-tools-install.sh 20:33:13 Setup pyenv: 20:33:13 * system (set by /opt/pyenv/version) 20:33:13 * 3.8.13 (set by /opt/pyenv/version) 20:33:13 * 3.9.13 (set by /opt/pyenv/version) 20:33:13 * 3.10.13 (set by /opt/pyenv/version) 20:33:13 * 3.11.7 (set by /opt/pyenv/version) 20:33:18 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-LJis 20:33:18 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 20:33:21 lf-activate-venv(): INFO: Installing: lftools 20:33:58 lf-activate-venv(): INFO: Adding /tmp/venv-LJis/bin to PATH 20:33:58 Generating Requirements File 20:34:21 Python 3.11.7 20:34:21 pip 24.2 from /tmp/venv-LJis/lib/python3.11/site-packages/pip (python 3.11) 20:34:21 appdirs==1.4.4 20:34:21 argcomplete==3.5.0 20:34:21 aspy.yaml==1.3.0 20:34:21 attrs==24.2.0 20:34:21 autopage==0.5.2 20:34:21 beautifulsoup4==4.12.3 20:34:21 boto3==1.35.24 20:34:21 botocore==1.35.24 20:34:21 bs4==0.0.2 20:34:21 cachetools==5.5.0 20:34:21 certifi==2024.8.30 20:34:21 cffi==1.17.1 20:34:21 cfgv==3.4.0 20:34:21 chardet==5.2.0 20:34:21 charset-normalizer==3.3.2 20:34:21 click==8.1.7 20:34:21 cliff==4.7.0 20:34:21 cmd2==2.4.3 20:34:21 cryptography==3.3.2 20:34:21 debtcollector==3.0.0 20:34:21 decorator==5.1.1 20:34:21 defusedxml==0.7.1 20:34:21 Deprecated==1.2.14 20:34:21 distlib==0.3.8 20:34:21 dnspython==2.6.1 20:34:21 docker==4.2.2 20:34:21 dogpile.cache==1.3.3 20:34:21 durationpy==0.7 20:34:21 email_validator==2.2.0 20:34:21 filelock==3.16.1 20:34:21 future==1.0.0 20:34:21 gitdb==4.0.11 20:34:21 GitPython==3.1.43 20:34:21 google-auth==2.35.0 20:34:21 httplib2==0.22.0 20:34:21 identify==2.6.1 20:34:21 idna==3.10 20:34:21 importlib-resources==1.5.0 20:34:21 iso8601==2.1.0 20:34:21 Jinja2==3.1.4 20:34:21 jmespath==1.0.1 20:34:21 jsonpatch==1.33 20:34:21 jsonpointer==3.0.0 20:34:21 jsonschema==4.23.0 20:34:21 jsonschema-specifications==2023.12.1 20:34:21 keystoneauth1==5.8.0 20:34:21 kubernetes==31.0.0 20:34:21 lftools==0.37.10 20:34:21 lxml==5.3.0 20:34:21 MarkupSafe==2.1.5 20:34:21 msgpack==1.1.0 20:34:21 multi_key_dict==2.0.3 20:34:21 munch==4.0.0 20:34:21 netaddr==1.3.0 20:34:21 netifaces==0.11.0 20:34:21 niet==1.4.2 20:34:21 nodeenv==1.9.1 20:34:21 oauth2client==4.1.3 20:34:21 oauthlib==3.2.2 20:34:21 
openstacksdk==4.0.0 20:34:21 os-client-config==2.1.0 20:34:21 os-service-types==1.7.0 20:34:21 osc-lib==3.1.0 20:34:21 oslo.config==9.6.0 20:34:21 oslo.context==5.6.0 20:34:21 oslo.i18n==6.4.0 20:34:21 oslo.log==6.1.2 20:34:21 oslo.serialization==5.5.0 20:34:21 oslo.utils==7.3.0 20:34:21 packaging==24.1 20:34:21 pbr==6.1.0 20:34:21 platformdirs==4.3.6 20:34:21 prettytable==3.11.0 20:34:21 pyasn1==0.6.1 20:34:21 pyasn1_modules==0.4.1 20:34:21 pycparser==2.22 20:34:21 pygerrit2==2.0.15 20:34:21 PyGithub==2.4.0 20:34:21 PyJWT==2.9.0 20:34:21 PyNaCl==1.5.0 20:34:21 pyparsing==2.4.7 20:34:21 pyperclip==1.9.0 20:34:21 pyrsistent==0.20.0 20:34:21 python-cinderclient==9.6.0 20:34:21 python-dateutil==2.9.0.post0 20:34:21 python-heatclient==4.0.0 20:34:21 python-jenkins==1.8.2 20:34:21 python-keystoneclient==5.5.0 20:34:21 python-magnumclient==4.7.0 20:34:21 python-openstackclient==7.1.2 20:34:21 python-swiftclient==4.6.0 20:34:21 PyYAML==6.0.2 20:34:21 referencing==0.35.1 20:34:21 requests==2.32.3 20:34:21 requests-oauthlib==2.0.0 20:34:21 requestsexceptions==1.4.0 20:34:21 rfc3986==2.0.0 20:34:21 rpds-py==0.20.0 20:34:21 rsa==4.9 20:34:21 ruamel.yaml==0.18.6 20:34:21 ruamel.yaml.clib==0.2.8 20:34:21 s3transfer==0.10.2 20:34:21 simplejson==3.19.3 20:34:21 six==1.16.0 20:34:21 smmap==5.0.1 20:34:21 soupsieve==2.6 20:34:21 stevedore==5.3.0 20:34:21 tabulate==0.9.0 20:34:21 toml==0.10.2 20:34:21 tomlkit==0.13.2 20:34:21 tqdm==4.66.5 20:34:21 typing_extensions==4.12.2 20:34:21 tzdata==2024.1 20:34:21 urllib3==1.26.20 20:34:21 virtualenv==20.26.5 20:34:21 wcwidth==0.2.13 20:34:21 websocket-client==1.8.0 20:34:21 wrapt==1.16.0 20:34:21 xdg==6.0.0 20:34:21 xmltodict==0.13.0 20:34:21 yq==3.4.3 20:34:22 [EnvInject] - Injecting environment variables from a build step. 20:34:22 [EnvInject] - Injecting as environment variables the properties content 20:34:22 PYTHON=python3 20:34:22 20:34:22 [EnvInject] - Variables injected successfully. 
20:34:22 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins17097217420485241539.sh 20:34:22 ---> tox-install.sh 20:34:22 + source /home/jenkins/lf-env.sh 20:34:22 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 20:34:22 ++ mktemp -d /tmp/venv-XXXX 20:34:22 + lf_venv=/tmp/venv-1U55 20:34:22 + local venv_file=/tmp/.os_lf_venv 20:34:22 + local python=python3 20:34:22 + local options 20:34:22 + local set_path=true 20:34:22 + local install_args= 20:34:22 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 20:34:22 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 20:34:22 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 20:34:22 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 20:34:22 + true 20:34:22 + case $1 in 20:34:22 + venv_file=/tmp/.toxenv 20:34:22 + shift 2 20:34:22 + true 20:34:22 + case $1 in 20:34:22 + shift 20:34:22 + break 20:34:22 + case $python in 20:34:22 + local pkg_list= 20:34:22 + [[ -d /opt/pyenv ]] 20:34:22 + echo 'Setup pyenv:' 20:34:22 Setup pyenv: 20:34:22 + export PYENV_ROOT=/opt/pyenv 20:34:22 + PYENV_ROOT=/opt/pyenv 20:34:22 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:22 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:22 + pyenv versions 20:34:22 system 20:34:22 3.8.13 20:34:22 3.9.13 20:34:22 3.10.13 20:34:22 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 20:34:22 + command -v pyenv 20:34:22 ++ pyenv init - --no-rehash 20:34:22 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 20:34:22 for i in ${!paths[@]}; do 20:34:22 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 20:34:22 fi; done; 20:34:22 echo "${paths[*]}"'\'')" 20:34:22 export PATH="/opt/pyenv/shims:${PATH}" 20:34:22 export PYENV_SHELL=bash 20:34:22 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 20:34:22 pyenv() { 20:34:22 local command 20:34:22 command="${1:-}" 20:34:22 if [ "$#" -gt 0 ]; then 20:34:22 shift 20:34:22 fi 20:34:22 20:34:22 case "$command" in 20:34:22 rehash|shell) 20:34:22 eval "$(pyenv "sh-$command" "$@")" 20:34:22 ;; 20:34:22 *) 20:34:22 command pyenv "$command" "$@" 20:34:22 ;; 20:34:22 esac 20:34:22 }' 20:34:22 +++ bash --norc -ec 'IFS=:; paths=($PATH); 20:34:22 for i in ${!paths[@]}; do 20:34:22 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 20:34:22 fi; done; 20:34:22 echo "${paths[*]}"' 20:34:22 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:22 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:22 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:22 ++ export PYENV_SHELL=bash 20:34:22 ++ PYENV_SHELL=bash 20:34:22 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 20:34:22 +++ complete -F _pyenv pyenv 20:34:22 ++ lf-pyver python3 20:34:22 ++ 
local py_version_xy=python3 20:34:22 ++ local py_version_xyz= 20:34:22 ++ pyenv versions 20:34:22 ++ local command 20:34:22 ++ command=versions 20:34:22 ++ '[' 1 -gt 0 ']' 20:34:22 ++ shift 20:34:22 ++ case "$command" in 20:34:22 ++ command pyenv versions 20:34:22 ++ pyenv versions 20:34:22 ++ sed 's/^[ *]* //' 20:34:22 ++ awk '{ print $1 }' 20:34:22 ++ grep -E '^[0-9.]*[0-9]$' 20:34:22 ++ [[ ! -s /tmp/.pyenv_versions ]] 20:34:22 +++ grep '^3' /tmp/.pyenv_versions 20:34:22 +++ sort -V 20:34:22 +++ tail -n 1 20:34:22 ++ py_version_xyz=3.11.7 20:34:22 ++ [[ -z 3.11.7 ]] 20:34:22 ++ echo 3.11.7 20:34:22 ++ return 0 20:34:22 + pyenv local 3.11.7 20:34:22 + local command 20:34:22 + command=local 20:34:22 + '[' 2 -gt 0 ']' 20:34:22 + shift 20:34:22 + case "$command" in 20:34:22 + command pyenv local 3.11.7 20:34:22 + pyenv local 3.11.7 20:34:22 + for arg in "$@" 20:34:22 + case $arg in 20:34:22 + pkg_list+='tox ' 20:34:22 + for arg in "$@" 20:34:22 + case $arg in 20:34:22 + pkg_list+='virtualenv ' 20:34:22 + for arg in "$@" 20:34:22 + case $arg in 20:34:22 + pkg_list+='urllib3~=1.26.15 ' 20:34:22 + [[ -f /tmp/.toxenv ]] 20:34:22 + [[ ! -f /tmp/.toxenv ]] 20:34:22 + [[ -n '' ]] 20:34:22 + python3 -m venv /tmp/venv-1U55 20:34:26 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-1U55' 20:34:26 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-1U55 20:34:26 + echo /tmp/venv-1U55 20:34:26 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' 20:34:26 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv 20:34:26 + /tmp/venv-1U55/bin/python3 -m pip install --upgrade --quiet pip virtualenv 20:34:29 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 20:34:29 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 20:34:29 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 20:34:29 + /tmp/venv-1U55/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 20:34:31 + type python3 20:34:31 + true 20:34:31 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-1U55/bin to PATH' 20:34:31 lf-activate-venv(): INFO: Adding /tmp/venv-1U55/bin to PATH 20:34:31 + PATH=/tmp/venv-1U55/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:31 + return 0 20:34:31 + python3 --version 20:34:31 Python 3.11.7 20:34:31 + python3 -m pip --version 20:34:31 pip 24.2 from /tmp/venv-1U55/lib/python3.11/site-packages/pip (python 3.11) 20:34:31 + python3 -m pip freeze 20:34:32 cachetools==5.5.0 20:34:32 chardet==5.2.0 20:34:32 colorama==0.4.6 20:34:32 distlib==0.3.8 20:34:32 filelock==3.16.1 20:34:32 packaging==24.1 20:34:32 platformdirs==4.3.6 20:34:32 pluggy==1.5.0 20:34:32 pyproject-api==1.8.0 20:34:32 tox==4.20.0 20:34:32 urllib3==1.26.20 20:34:32 virtualenv==20.26.5 20:34:32 [transportpce-tox-verify-scandium] $ /bin/sh -xe /tmp/jenkins6810618720613523077.sh 20:34:32 [EnvInject] - Injecting environment variables from a build step. 20:34:32 [EnvInject] - Injecting as environment variables the properties content 20:34:32 PARALLEL=True 20:34:32 20:34:32 [EnvInject] - Variables injected successfully. 
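The tox-install step above relies on lf-activate-venv's venv-file mechanism: the path of the freshly created virtualenv (/tmp/venv-1U55) is written to /tmp/.toxenv so that later build steps can reuse the same interpreter instead of provisioning a new one. A minimal sketch of that caching pattern, with generic shell standing in for the real lf-env.sh implementation:

# Hedged sketch of the venv-file reuse pattern seen in lf-activate-venv (simplified; not the actual lf-env.sh code).
VENV_FILE=/tmp/.toxenv
if [[ -f "$VENV_FILE" ]]; then
    venv_dir=$(cat "$VENV_FILE")             # reuse the venv recorded by an earlier step
else
    venv_dir=$(mktemp -d /tmp/venv-XXXX)     # otherwise create a fresh one and record its path
    python3 -m venv "$venv_dir"
    echo "$venv_dir" > "$VENV_FILE"
fi
"$venv_dir"/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv 'urllib3~=1.26.15'
export PATH="$venv_dir/bin:$PATH"

The second build step (tox-run.sh, below) takes the reuse branch and picks up /tmp/venv-1U55 from /tmp/.toxenv rather than installing tox again.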
20:34:32 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins16994029898501692305.sh 20:34:32 ---> tox-run.sh 20:34:32 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:32 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/tox 20:34:32 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/docs 20:34:32 + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/tox 20:34:32 + cd /w/workspace/transportpce-tox-verify-scandium/. 20:34:32 + source /home/jenkins/lf-env.sh 20:34:32 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 20:34:32 ++ mktemp -d /tmp/venv-XXXX 20:34:32 + lf_venv=/tmp/venv-FSd0 20:34:32 + local venv_file=/tmp/.os_lf_venv 20:34:32 + local python=python3 20:34:32 + local options 20:34:32 + local set_path=true 20:34:32 + local install_args= 20:34:32 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 20:34:32 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 20:34:32 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 20:34:32 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 20:34:32 + true 20:34:32 + case $1 in 20:34:32 + venv_file=/tmp/.toxenv 20:34:32 + shift 2 20:34:32 + true 20:34:32 + case $1 in 20:34:32 + shift 20:34:32 + break 20:34:32 + case $python in 20:34:32 + local pkg_list= 20:34:32 + [[ -d /opt/pyenv ]] 20:34:32 + echo 'Setup pyenv:' 20:34:32 Setup pyenv: 20:34:32 + export PYENV_ROOT=/opt/pyenv 20:34:32 + PYENV_ROOT=/opt/pyenv 20:34:32 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:32 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:32 + pyenv versions 20:34:32 system 20:34:32 3.8.13 20:34:32 3.9.13 20:34:32 3.10.13 20:34:32 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 20:34:32 + command -v pyenv 20:34:32 ++ pyenv init - --no-rehash 20:34:32 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 20:34:32 for i in ${!paths[@]}; do 20:34:32 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 20:34:32 fi; done; 20:34:32 echo "${paths[*]}"'\'')" 20:34:32 export PATH="/opt/pyenv/shims:${PATH}" 20:34:32 export PYENV_SHELL=bash 20:34:32 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 20:34:32 pyenv() { 20:34:32 local command 20:34:32 command="${1:-}" 20:34:32 if [ "$#" -gt 0 ]; then 20:34:32 shift 20:34:32 fi 20:34:32 20:34:32 case "$command" in 20:34:32 rehash|shell) 20:34:32 eval "$(pyenv "sh-$command" "$@")" 20:34:32 ;; 20:34:32 *) 20:34:32 command pyenv "$command" "$@" 20:34:32 ;; 20:34:32 esac 20:34:32 }' 20:34:32 +++ bash --norc -ec 'IFS=:; paths=($PATH); 20:34:32 for i in ${!paths[@]}; do 20:34:32 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 20:34:32 fi; done; 20:34:32 echo "${paths[*]}"' 20:34:32 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 
20:34:32 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:32 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:32 ++ export PYENV_SHELL=bash 20:34:32 ++ PYENV_SHELL=bash 20:34:32 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 20:34:32 +++ complete -F _pyenv pyenv 20:34:32 ++ lf-pyver python3 20:34:32 ++ local py_version_xy=python3 20:34:32 ++ local py_version_xyz= 20:34:32 ++ pyenv versions 20:34:32 ++ local command 20:34:32 ++ command=versions 20:34:32 ++ '[' 1 -gt 0 ']' 20:34:32 ++ shift 20:34:32 ++ case "$command" in 20:34:32 ++ command pyenv versions 20:34:32 ++ pyenv versions 20:34:32 ++ awk '{ print $1 }' 20:34:32 ++ sed 's/^[ *]* //' 20:34:32 ++ grep -E '^[0-9.]*[0-9]$' 20:34:32 ++ [[ ! -s /tmp/.pyenv_versions ]] 20:34:32 +++ grep '^3' /tmp/.pyenv_versions 20:34:32 +++ sort -V 20:34:32 +++ tail -n 1 20:34:32 ++ py_version_xyz=3.11.7 20:34:32 ++ [[ -z 3.11.7 ]] 20:34:32 ++ echo 3.11.7 20:34:32 ++ return 0 20:34:32 + pyenv local 3.11.7 20:34:32 + local command 20:34:32 + command=local 20:34:32 + '[' 2 -gt 0 ']' 20:34:32 + shift 20:34:32 + case "$command" in 20:34:32 + command pyenv local 3.11.7 20:34:32 + pyenv local 3.11.7 20:34:32 + for arg in "$@" 20:34:32 + case $arg in 20:34:32 + pkg_list+='tox ' 20:34:32 + for arg in "$@" 20:34:32 + case $arg in 20:34:32 + pkg_list+='virtualenv ' 20:34:32 + for arg in "$@" 20:34:32 + case $arg in 20:34:32 + pkg_list+='urllib3~=1.26.15 ' 20:34:32 + [[ -f /tmp/.toxenv ]] 20:34:32 ++ cat /tmp/.toxenv 20:34:32 + lf_venv=/tmp/venv-1U55 20:34:32 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1U55 from' file:/tmp/.toxenv 20:34:32 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1U55 from file:/tmp/.toxenv 20:34:32 + /tmp/venv-1U55/bin/python3 -m pip install --upgrade --quiet pip virtualenv 20:34:33 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 20:34:33 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 20:34:33 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 20:34:33 + /tmp/venv-1U55/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 20:34:34 + type python3 20:34:34 + true 20:34:34 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-1U55/bin to PATH' 20:34:34 lf-activate-venv(): INFO: Adding /tmp/venv-1U55/bin to PATH 20:34:34 + PATH=/tmp/venv-1U55/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:34 + return 0 20:34:34 + [[ -d /opt/pyenv ]] 20:34:34 + echo '---> Setting up pyenv' 20:34:34 ---> Setting up pyenv 20:34:34 + export PYENV_ROOT=/opt/pyenv 20:34:34 + PYENV_ROOT=/opt/pyenv 20:34:34 + export PATH=/opt/pyenv/bin:/tmp/venv-1U55/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:34 + PATH=/opt/pyenv/bin:/tmp/venv-1U55/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 20:34:34 ++ pwd 20:34:34 + PYTHONPATH=/w/workspace/transportpce-tox-verify-scandium 20:34:34 + 
export PYTHONPATH 20:34:34 + export TOX_TESTENV_PASSENV=PYTHONPATH 20:34:34 + TOX_TESTENV_PASSENV=PYTHONPATH 20:34:34 + tox --version 20:34:35 4.20.0 from /tmp/venv-1U55/lib/python3.11/site-packages/tox/__init__.py 20:34:35 + PARALLEL=True 20:34:35 + TOX_OPTIONS_LIST= 20:34:35 + [[ -n '' ]] 20:34:35 + case ${PARALLEL,,} in 20:34:35 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' 20:34:35 + tox --parallel auto --parallel-live 20:34:35 + tee -a /w/workspace/transportpce-tox-verify-scandium/archives/tox/tox.log 20:34:36 docs: install_deps> python -I -m pip install -r docs/requirements.txt 20:34:36 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:34:36 checkbashisms: freeze> python -m pip freeze --all 20:34:36 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt 20:34:37 checkbashisms: pip==24.2,setuptools==75.1.0,wheel==0.44.0 20:34:37 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh 20:34:37 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' 20:34:37 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + 20:34:38 script ./reflectwarn.sh does not appear to have a #! interpreter line; 20:34:38 you may get strange results 20:34:38 checkbashisms: OK ✔ in 2.89 seconds 20:34:38 pre-commit: install_deps> python -I -m pip install pre-commit 20:34:41 pre-commit: freeze> python -m pip freeze --all 20:34:41 pre-commit: cfgv==3.4.0,distlib==0.3.8,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.2,platformdirs==4.3.6,pre-commit==3.8.0,PyYAML==6.0.2,setuptools==75.1.0,virtualenv==20.26.5,wheel==0.44.0 20:34:41 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh 20:34:41 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' 20:34:41 /usr/bin/cpan 20:34:41 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run --all-files --show-diff-on-failure 20:34:41 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. 20:34:42 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. 20:34:42 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. 20:34:43 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. 20:34:43 buildcontroller: freeze> python -m pip freeze --all 20:34:43 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. 
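The tox-run step derives its options from the PARALLEL property injected earlier: when PARALLEL is true it appends --parallel auto --parallel-live and tees the combined output into archives/tox/tox.log, which is why the environment outputs below are interleaved. A condensed sketch of that dispatch, with the accepted truthy values assumed:

# Hedged sketch of the PARALLEL handling traced above (the case patterns are an assumption).
TOX_OPTIONS_LIST=""
case ${PARALLEL,,} in
    true|yes|on|1)
        # run all tox environments concurrently and stream their output live
        TOX_OPTIONS_LIST=" --parallel auto --parallel-live"
        ;;
esac
# intentionally left unquoted so the options split into separate arguments
tox $TOX_OPTIONS_LIST | tee -a "$ARCHIVE_TOX_DIR/tox.log"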
20:34:43 buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:34:43 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_controller.sh 20:34:43 + update-java-alternatives -l 20:34:43 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 20:34:43 java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64 20:34:43 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 20:34:43 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 20:34:43 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 20:34:43 java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64 20:34:43 [INFO] Initializing environment for https://github.com/perltidy/perltidy. 20:34:44 + + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; 20:34:44 java -version 20:34:44 + JAVA_VER=21 20:34:44 + echo 21 20:34:44 21 20:34:44 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; 20:34:44 + javac -version 20:34:44 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 20:34:44 [INFO] Once installed this environment will be reused. 20:34:44 [INFO] This may take a few minutes... 20:34:44 + JAVAC_VER=21 20:34:44 + echo 21 20:34:44 21 20:34:44 ok, java is 21 or newer 20:34:44 + [ 21 -ge 21 ] 20:34:44 + [ 21 -ge 21 ] 20:34:44 + echo ok, java is 21 or newer 20:34:44 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp 20:34:46 2024-09-20 20:34:46 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1] 20:34:46 + sudo mkdir -p /opt 20:34:46 + sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt 20:34:46 + sudo ln -s /opt/apache-maven-3.9.8 /opt/maven 20:34:46 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn 20:34:46 + mvn --version 20:34:47 Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256) 20:34:47 Maven home: /opt/maven 20:34:47 Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 20:34:47 Default locale: en, platform encoding: UTF-8 20:34:47 OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix" 20:34:47 NOTE: Picked up JDK_JAVA_OPTIONS: 20:34:47 --add-opens=java.base/java.io=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.lang=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.net=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.nio=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.nio.file=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.util=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.util.jar=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.util.stream=ALL-UNNAMED 20:34:47 --add-opens=java.base/java.util.zip=ALL-UNNAMED 20:34:47 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 20:34:47 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 20:34:47 -Xlog:disable 20:34:48 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. 20:34:48 [INFO] Once installed this environment will be reused. 
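build_controller.sh begins by switching the agent to Java 21 and verifying that both java and javac report major version 21 or newer before downloading Apache Maven 3.9.8 into /opt. A trimmed sketch of that version gate, following the sed extraction visible in the trace (the stderr redirection is an assumption, since java -version writes to stderr):

# Hedged sketch of the Java version gate from build_controller.sh (condensed from the trace above).
JAVA_VER=$(java -version 2>&1 | sed -n ';s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p;')
JAVAC_VER=$(javac -version 2>&1 | sed -n ';s/javac \(.*\)\.\(.*\)\..*.*$/\1/p;')
if [ "$JAVA_VER" -ge 21 ] && [ "$JAVAC_VER" -ge 21 ]; then
    echo "ok, java is 21 or newer"
else
    echo "Java 21 or newer is required to build the controller" >&2
    exit 1
fi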
20:34:48 [INFO] This may take a few minutes... 20:34:53 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. 20:34:53 [INFO] Once installed this environment will be reused. 20:34:53 [INFO] This may take a few minutes... 20:34:56 [INFO] Installing environment for https://github.com/perltidy/perltidy. 20:34:56 [INFO] Once installed this environment will be reused. 20:34:56 [INFO] This may take a few minutes... 20:35:05 docs: freeze> python -m pip freeze --all 20:35:05 docs: alabaster==0.7.16,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.3.2,contourpy==1.3.0,cycler==0.12.1,docutils==0.20.1,fonttools==4.53.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==2.1.5,matplotlib==3.9.2,numpy==2.1.1,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==7.4.7,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==2.0.0,sphinx-tabs==3.4.5,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 20:35:05 docs: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/html 20:35:05 docs-linkcheck: freeze> python -m pip freeze --all 20:35:06 docs-linkcheck: alabaster==0.7.16,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.3.2,contourpy==1.3.0,cycler==0.12.1,docutils==0.20.1,fonttools==4.53.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==2.1.5,matplotlib==3.9.2,numpy==2.1.1,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==7.4.7,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==2.0.0,sphinx-tabs==3.4.5,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 20:35:06 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/linkcheck 20:35:07 
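The docs and docs-linkcheck environments reduce to two sphinx-build invocations against docs/requirements.txt. A local run mirroring them might look like this (run from the repository root; the virtualenv path is illustrative):

# Hedged sketch: reproduce the docs / docs-linkcheck tox environments locally (venv path is an assumption).
python3 -m venv /tmp/venv-docs
/tmp/venv-docs/bin/pip install -r docs/requirements.txt
# HTML build, warnings promoted to errors as in the 'docs' env
/tmp/venv-docs/bin/sphinx-build -q -W --keep-going -b html -n -d .tox/docs/tmp/doctrees docs/ docs/_build/html
# link checking, as in the 'docs-linkcheck' env
/tmp/venv-docs/bin/sphinx-build -q -b linkcheck -d .tox/docs-linkcheck/tmp/doctrees docs/ docs/_build/linkcheck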
/w/workspace/transportpce-tox-verify-scandium/.tox/docs-linkcheck/lib/python3.11/site-packages/sphinx/builders/linkcheck.py:86: RemovedInSphinx80Warning: The default value for 'linkcheck_report_timeouts_as_broken' will change to False in Sphinx 8, meaning that request timeouts will be reported with a new 'timeout' status, instead of as 'broken'. This is intended to provide more detail as to the failure mode. See https://github.com/sphinx-doc/sphinx/issues/11868 for details. 20:35:07 warnings.warn(deprecation_msg, RemovedInSphinx80Warning, stacklevel=1) 20:35:08 docs: OK ✔ in 33.07 seconds 20:35:08 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' 20:35:09 trim trailing whitespace.................................................Passed 20:35:09 Tabs remover.............................................................Passed 20:35:10 autopep8.................................................................docs-linkcheck: OK ✔ in 35.3 seconds 20:35:14 pylint: freeze> python -m pip freeze --all 20:35:14 pylint: astroid==3.3.3,dill==0.3.8,isort==5.13.2,mccabe==0.7.0,pip==24.2,platformdirs==4.3.6,pylint==3.3.0,setuptools==75.1.0,tomlkit==0.13.2,wheel==0.44.0 20:35:14 pylint: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 20:35:14 Passed 20:35:14 perltidy.................................................................Passed 20:35:15 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run gitlint-ci --hook-stage manual 20:35:15 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 20:35:15 [INFO] Once installed this environment will be reused. 20:35:15 [INFO] This may take a few minutes... 
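The pylint environment enforces a hard gate: every Python file under transportpce_tests/ must score 10.00/10 (--fail-under=10) with a 120-character line limit, a few checks disabled, and relaxed naming regexes for the numeric module names used by the test suites. The same invocation as above, wrapped over several lines for readability:

# The pylint gate run by the tox env above, unchanged except for line wrapping.
find transportpce_tests/ -name '*.py' -exec pylint \
    --fail-under=10 \
    --max-line-length=120 \
    --disable=missing-docstring,import-error \
    --disable=fixme \
    --disable=duplicate-code \
    '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' \
    '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' \
    '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' \
    '{}' +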
20:35:22 gitlint..................................................................Passed 20:35:34 20:35:34 ------------------------------------ 20:35:34 Your code has been rated at 10.00/10 20:35:34 20:36:23 pre-commit: OK ✔ in 44.69 seconds 20:36:23 pylint: OK ✔ in 27.53 seconds 20:36:23 buildcontroller: OK ✔ in 1 minute 47.52 seconds 20:36:23 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:36:23 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:36:23 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:36:23 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:36:29 sims: freeze> python -m pip freeze --all 20:36:30 build_karaf_tests221: freeze> python -m pip freeze --all 20:36:30 build_karaf_tests121: freeze> python -m pip freeze --all 20:36:30 sims: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:36:30 sims: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./install_lightynode.sh 20:36:30 Using lighynode version 20.1.0.2 20:36:30 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory 20:36:30 build_karaf_tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:36:30 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 20:36:30 NOTE: Picked up JDK_JAVA_OPTIONS: 20:36:30 --add-opens=java.base/java.io=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.lang=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.net=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.nio=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.nio.file=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util.jar=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util.stream=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util.zip=ALL-UNNAMED 20:36:30 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 20:36:30 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 20:36:30 -Xlog:disable 20:36:30 build_karaf_tests121: 
bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:36:30 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 20:36:30 NOTE: Picked up JDK_JAVA_OPTIONS: 20:36:30 --add-opens=java.base/java.io=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.lang=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.net=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.nio=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.nio.file=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util.jar=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util.stream=ALL-UNNAMED 20:36:30 --add-opens=java.base/java.util.zip=ALL-UNNAMED 20:36:30 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 20:36:30 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 20:36:30 -Xlog:disable 20:36:33 sims: OK ✔ in 9.91 seconds 20:36:33 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:36:43 build_karaf_tests71: freeze> python -m pip freeze --all 20:36:44 build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:36:44 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 20:36:44 NOTE: Picked up JDK_JAVA_OPTIONS: 20:36:44 --add-opens=java.base/java.io=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.lang=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.net=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.nio=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.nio.file=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.util=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.util.jar=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.util.stream=ALL-UNNAMED 20:36:44 --add-opens=java.base/java.util.zip=ALL-UNNAMED 20:36:44 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 20:36:44 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 20:36:44 -Xlog:disable 20:37:17 build_karaf_tests221: OK ✔ in 54.67 seconds 20:37:17 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:37:18 build_karaf_tests121: OK ✔ in 55.75 seconds 20:37:18 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r 
/w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:37:25 build_karaf_tests71: OK ✔ in 51.96 seconds 20:37:25 build_karaf_tests_hybrid: freeze> python -m pip freeze --all 20:37:25 tests_tapi: freeze> python -m pip freeze --all 20:37:25 build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:37:25 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 20:37:25 NOTE: Picked up JDK_JAVA_OPTIONS: 20:37:25 --add-opens=java.base/java.io=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.lang=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.net=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.nio=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.nio.file=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.util=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.util.jar=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.util.stream=ALL-UNNAMED 20:37:25 --add-opens=java.base/java.util.zip=ALL-UNNAMED 20:37:25 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 20:37:25 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 20:37:25 -Xlog:disable 20:37:25 tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:37:25 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi 20:37:25 using environment variables from ./karaf221.env 20:37:25 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py 20:37:34 testsPCE: freeze> python -m pip freeze --all 20:37:35 testsPCE: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.53.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==2.1.5,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==10.4.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.1.4,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0 20:37:35 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh pce 20:37:35 pytest -q transportpce_tests/pce/test01_pce.py 20:38:31 ........................................ [100%] 20:39:36 20 passed in 120.29s (0:02:00) 20:39:36 pytest -q transportpce_tests/pce/test02_pce_400G.py 20:39:36 ..................... [100%] 20:40:19 9 passed in 42.88s 20:40:19 pytest -q transportpce_tests/pce/test03_gnpy.py 20:40:25 .............. 
[100%] 20:40:58 8 passed in 38.65s 20:40:58 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py 20:41:09 ............. [100%] 20:41:34 3 passed in 35.50s 20:41:34 build_karaf_tests_hybrid: OK ✔ in 53.8 seconds 20:41:34 testsPCE: OK ✔ in 5 minutes 11.76 seconds 20:41:34 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:41:40 tests121: freeze> python -m pip freeze --all 20:41:40 tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:41:40 tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1 20:41:40 using environment variables from ./karaf121.env 20:41:40 pytest -q transportpce_tests/1.2.1/test01_portmapping.py 20:42:16 ..................... [100%] 20:43:06 21 passed in 85.08s (0:01:25) 20:43:06 pytest -q transportpce_tests/1.2.1/test02_topo_portmapping.py 20:43:36 ...... [100%] 20:43:50 6 passed in 44.07s 20:43:50 pytest -q transportpce_tests/1.2.1/test03_topology.py 20:44:10 ............................................. [100%] 20:46:07 44 passed in 136.99s (0:02:16) 20:46:07 pytest -q transportpce_tests/1.2.1/test04_renderer_service_path_nominal.py 20:46:37 .................. [100%] 20:47:14 50 passed in 588.61s (0:09:48) 20:47:14 pytest -q transportpce_tests/tapi/test02_full_topology.py 20:47:14 ....... [100%] 20:47:29 24 passed in 81.26s (0:01:21) 20:47:29 pytest -q transportpce_tests/1.2.1/test05_olm.py 20:48:33 ...........F.FFFFFFFFFFF...... 
[100%] 20:51:00 =================================== FAILURES =================================== 20:51:00 _____________ TransportPCEtesting.test_12_check_openroadm_topology _____________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_12_check_openroadm_topology(self): 20:51:00 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 20:51:00 self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 > self.assertEqual(len(response['network'][0]['node']), 13, 'There should be 13 openroadm nodes') 20:51:00 E AssertionError: 14 != 13 : There should be 13 openroadm nodes 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:272: AssertionError 20:51:00 ________________ TransportPCEtesting.test_14_check_sip_details _________________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_14_check_sip_details(self): 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-common', 'get-service-interface-point-list', None) 20:51:00 > self.assertEqual(len(response['output']['sip']), 72, 'There should be 72 service interface point') 20:51:00 E AssertionError: 36 != 72 : There should be 72 service interface point 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:291: AssertionError 20:51:00 ____ TransportPCEtesting.test_15_create_connectivity_service_PhotonicMedia _____ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_15_create_connectivity_service_PhotonicMedia(self): 20:51:00 self.cr_serv_input_data["end-point"][0]["service-interface-point"]["service-interface-point-uuid"] = self.sAOTS 20:51:00 self.cr_serv_input_data["end-point"][1]["service-interface-point"]["service-interface-point-uuid"] = self.sZOTS 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-connectivity', 'create-connectivity-service', self.cr_serv_input_data) 20:51:00 time.sleep(self.WAITING) 20:51:00 > self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 E AssertionError: 500 != 200 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:300: AssertionError 20:51:00 ____________ TransportPCEtesting.test_16_get_service_PhotonicMedia _____________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_16_get_service_PhotonicMedia(self): 20:51:00 response = test_utils.get_ordm_serv_list_attr_request("services", str(self.uuid_services.pm)) 20:51:00 > self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 E AssertionError: 409 != 200 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:330: AssertionError 20:51:00 _________ TransportPCEtesting.test_17_create_connectivity_service_ODU __________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_17_create_connectivity_service_ODU(self): 20:51:00 # pylint: disable=line-too-long 20:51:00 self.cr_serv_input_data["layer-protocol-name"] = "ODU" 20:51:00 self.cr_serv_input_data["end-point"][0]["layer-protocol-name"] = "ODU" 20:51:00 self.cr_serv_input_data["end-point"][0]["service-interface-point"]["service-interface-point-uuid"] = self.sAeODU 20:51:00 self.cr_serv_input_data["end-point"][1]["layer-protocol-name"] = "ODU" 20:51:00 self.cr_serv_input_data["end-point"][1]["service-interface-point"]["service-interface-point-uuid"] = self.sZeODU 20:51:00 # self.cr_serv_input_data["connectivity-constraint"]["service-layer"] = "ODU" 20:51:00 self.cr_serv_input_data["connectivity-constraint"]["service-level"] = self.uuid_services.pm 20:51:00 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-connectivity', 
'create-connectivity-service', self.cr_serv_input_data) 20:51:00 time.sleep(self.WAITING) 20:51:00 > self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 E AssertionError: 500 != 200 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:351: AssertionError 20:51:00 _________________ TransportPCEtesting.test_18_get_service_ODU __________________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_18_get_service_ODU(self): 20:51:00 response = test_utils.get_ordm_serv_list_attr_request("services", str(self.uuid_services.odu)) 20:51:00 > self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 E AssertionError: 409 != 200 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:379: AssertionError 20:51:00 _________ TransportPCEtesting.test_19_create_connectivity_service_DSR __________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_19_create_connectivity_service_DSR(self): 20:51:00 # pylint: disable=line-too-long 20:51:00 self.cr_serv_input_data["layer-protocol-name"] = "DSR" 20:51:00 self.cr_serv_input_data["end-point"][0]["layer-protocol-name"] = "DSR" 20:51:00 self.cr_serv_input_data["end-point"][0]["service-interface-point"]["service-interface-point-uuid"] = self.sADSR 20:51:00 self.cr_serv_input_data["end-point"][1]["layer-protocol-name"] = "DSR" 20:51:00 self.cr_serv_input_data["end-point"][1]["service-interface-point"]["service-interface-point-uuid"] = self.sZDSR 20:51:00 # self.cr_serv_input_data["connectivity-constraint"]["service-layer"] = "DSR" 20:51:00 self.cr_serv_input_data["connectivity-constraint"]["requested-capacity"]["total-size"]["value"] = "10" 20:51:00 self.cr_serv_input_data["connectivity-constraint"]["service-level"] = self.uuid_services.odu 20:51:00 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-connectivity', 'create-connectivity-service', self.cr_serv_input_data) 20:51:00 time.sleep(self.WAITING) 20:51:00 > self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 E AssertionError: 500 != 200 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:401: AssertionError 20:51:00 _________________ TransportPCEtesting.test_20_get_service_DSR __________________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_20_get_service_DSR(self): 20:51:00 response = test_utils.get_ordm_serv_list_attr_request("services", str(self.uuid_services.dsr)) 20:51:00 > self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 E AssertionError: 409 != 200 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:432: AssertionError 20:51:00 __________ TransportPCEtesting.test_21_get_connectivity_service_list ___________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_21_get_connectivity_service_list(self): 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-connectivity', 'get-connectivity-service-list', None) 20:51:00 > self.assertEqual(response['status_code'], requests.codes.ok) 20:51:00 E AssertionError: 500 != 200 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:442: AssertionError 20:51:00 _________ TransportPCEtesting.test_22_delete_connectivity_service_DSR __________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_22_delete_connectivity_service_DSR(self): 20:51:00 self.del_serv_input_data["uuid"] = str(self.uuid_services.dsr) 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-connectivity', 'delete-connectivity-service', self.del_serv_input_data) 20:51:00 > 
self.assertIn(response["status_code"], (requests.codes.ok, requests.codes.no_content)) 20:51:00 E AssertionError: 500 not found in (200, 204) 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:471: AssertionError 20:51:00 _________ TransportPCEtesting.test_23_delete_connectivity_service_ODU __________ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_23_delete_connectivity_service_ODU(self): 20:51:00 self.del_serv_input_data["uuid"] = str(self.uuid_services.odu) 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-connectivity', 'delete-connectivity-service', self.del_serv_input_data) 20:51:00 > self.assertIn(response["status_code"], (requests.codes.ok, requests.codes.no_content)) 20:51:00 E AssertionError: 500 not found in (200, 204) 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:478: AssertionError 20:51:00 ____ TransportPCEtesting.test_24_delete_connectivity_service_PhotonicMedia _____ 20:51:00 20:51:00 self = 20:51:00 20:51:00 def test_24_delete_connectivity_service_PhotonicMedia(self): 20:51:00 self.del_serv_input_data["uuid"] = str(self.uuid_services.pm) 20:51:00 response = test_utils.transportpce_api_rpc_request( 20:51:00 'tapi-connectivity', 'delete-connectivity-service', self.del_serv_input_data) 20:51:00 > self.assertIn(response["status_code"], (requests.codes.ok, requests.codes.no_content)) 20:51:00 E AssertionError: 500 not found in (200, 204) 20:51:00 20:51:00 transportpce_tests/tapi/test02_full_topology.py:485: AssertionError 20:51:00 =========================== short test summary info ============================ 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_12_check_openroadm_topology 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_14_check_sip_details 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_15_create_connectivity_service_PhotonicMedia 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_16_get_service_PhotonicMedia 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_17_create_connectivity_service_ODU 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_18_get_service_ODU 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_19_create_connectivity_service_DSR 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_20_get_service_DSR 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_21_get_connectivity_service_list 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_22_delete_connectivity_service_DSR 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_23_delete_connectivity_service_ODU 20:51:00 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_24_delete_connectivity_service_PhotonicMedia 20:51:00 12 failed, 18 passed in 222.18s (0:03:42) 20:51:00 tests_tapi: exit 1 (811.40 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi pid=30666 20:51:00 tests_tapi: FAIL ✖ in 13 minutes 38.36 seconds 20:51:00 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:51:03 
tests71: freeze> python -m pip freeze --all 20:51:03 tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:51:03 tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 7.1 20:51:03 using environment variables from ./karaf71.env 20:51:03 pytest -q transportpce_tests/7.1/test01_portmapping.py 20:51:29 ............. [100%] 20:51:49 12 passed in 45.42s 20:51:49 pytest -q transportpce_tests/7.1/test02_otn_renderer.py 20:52:17 .............................................................. [100%] 20:54:28 62 passed in 158.44s (0:02:38) 20:54:28 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 20:54:32 .FFFFFFFFFFFFFFFFFFFFFFF.FFFF.FF.F.FFF............................................ [100%] 20:56:44 48 passed in 136.03s (0:02:16) 20:56:44 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 20:57:09 ...................... [100%] 20:57:57 22 passed in 72.41s (0:01:12) 20:57:57 tests71: OK ✔ in 7 minutes 0.2 seconds 20:57:57 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 20:58:03 tests221: freeze> python -m pip freeze --all 20:58:03 tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 20:58:03 tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 2.2.1 20:58:03 using environment variables from ./karaf221.env 20:58:03 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 20:58:10 FFFFF [100%] 20:58:18 =================================== FAILURES =================================== 20:58:18 ______________ TransportOlmTesting.test_03_rdmA_device_connected _______________ 20:58:18 20:58:18 self = 20:58:18 20:58:18 def _new_conn(self) -> socket.socket: 20:58:18 """Establish a socket connection and set nodelay settings on it. 20:58:18 20:58:18 :return: New socket connection. 
20:58:18 """ 20:58:18 try: 20:58:18 > sock = connection.create_connection( 20:58:18 (self._dns_host, self.port), 20:58:18 self.timeout, 20:58:18 source_address=self.source_address, 20:58:18 socket_options=self.socket_options, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:18 raise err 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:18 socket_options = [(6, 1, 1)] 20:58:18 20:58:18 def create_connection( 20:58:18 address: tuple[str, int], 20:58:18 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:18 source_address: tuple[str, int] | None = None, 20:58:18 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:18 ) -> socket.socket: 20:58:18 """Connect to *address* and return the socket object. 20:58:18 20:58:18 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:18 port)``) and return the socket object. Passing the optional 20:58:18 *timeout* parameter will set the timeout on the socket instance 20:58:18 before attempting to connect. If no *timeout* is supplied, the 20:58:18 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:18 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:18 for the socket to bind as a source address before making the connection. 20:58:18 An host of '' or port 0 tells the OS to use the default. 20:58:18 """ 20:58:18 20:58:18 host, port = address 20:58:18 if host.startswith("["): 20:58:18 host = host.strip("[]") 20:58:18 err = None 20:58:18 20:58:18 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:18 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:18 # The original create_connection function always returns all records. 20:58:18 family = allowed_gai_family() 20:58:18 20:58:18 try: 20:58:18 host.encode("idna") 20:58:18 except UnicodeError: 20:58:18 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:18 20:58:18 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:18 af, socktype, proto, canonname, sa = res 20:58:18 sock = None 20:58:18 try: 20:58:18 sock = socket.socket(af, socktype, proto) 20:58:18 20:58:18 # If provided, set socket level options before connecting. 
20:58:18 _set_socket_options(sock, socket_options) 20:58:18 20:58:18 if timeout is not _DEFAULT_TIMEOUT: 20:58:18 sock.settimeout(timeout) 20:58:18 if source_address: 20:58:18 sock.bind(source_address) 20:58:18 > sock.connect(sa) 20:58:18 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:18 20:58:18 The above exception was the direct cause of the following exception: 20:58:18 20:58:18 self = 20:58:18 method = 'PUT' 20:58:18 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 20:58:18 body = '{"node": [{"node-id": "ROADMA01", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...is": "60000", "netconf-node-topology:max-connection-attempts": "0", "netconf-node-topology:keepalive-delay": "120"}}]}' 20:58:18 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '629', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:18 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:18 redirect = False, assert_same_host = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:18 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:18 decode_content = False, response_kw = {} 20:58:18 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) 20:58:18 destination_scheme = None, conn = None, release_this_conn = True 20:58:18 http_tunnel_required = False, err = None, clean_exit = False 20:58:18 20:58:18 def urlopen( # type: ignore[override] 20:58:18 self, 20:58:18 method: str, 20:58:18 url: str, 20:58:18 body: _TYPE_BODY | None = None, 20:58:18 headers: typing.Mapping[str, str] | None = None, 20:58:18 retries: Retry | bool | int | None = None, 20:58:18 redirect: bool = True, 20:58:18 assert_same_host: bool = True, 20:58:18 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:18 pool_timeout: int | None = None, 20:58:18 release_conn: bool | None = None, 20:58:18 chunked: bool = False, 20:58:18 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:18 preload_content: bool = True, 20:58:18 decode_content: bool = True, 20:58:18 **response_kw: typing.Any, 20:58:18 ) -> BaseHTTPResponse: 20:58:18 """ 20:58:18 Get a connection from the pool and perform an HTTP request. This is the 20:58:18 lowest level call for making a request, so you'll need to specify all 20:58:18 the raw details. 20:58:18 20:58:18 .. note:: 20:58:18 20:58:18 More commonly, it's appropriate to use a convenience method 20:58:18 such as :meth:`request`. 20:58:18 20:58:18 .. note:: 20:58:18 20:58:18 `release_conn` will only behave as expected if 20:58:18 `preload_content=False` because we want to make 20:58:18 `preload_content=False` the default behaviour someday soon without 20:58:18 breaking backwards compatibility. 20:58:18 20:58:18 :param method: 20:58:18 HTTP request method (such as GET, POST, PUT, etc.) 20:58:18 20:58:18 :param url: 20:58:18 The URL to perform the request on. 20:58:18 20:58:18 :param body: 20:58:18 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:18 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
20:58:18 20:58:18 :param headers: 20:58:18 Dictionary of custom headers to send, such as User-Agent, 20:58:18 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:18 these headers completely replace any pool-specific headers. 20:58:18 20:58:18 :param retries: 20:58:18 Configure the number of retries to allow before raising a 20:58:18 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:18 20:58:18 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:18 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:18 over different types of retries. 20:58:18 Pass an integer number to retry connection errors that many times, 20:58:18 but no other types of errors. Pass zero to never retry. 20:58:18 20:58:18 If ``False``, then retries are disabled and any exception is raised 20:58:18 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:18 the redirect response will be returned. 20:58:18 20:58:18 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:18 20:58:18 :param redirect: 20:58:18 If True, automatically handle redirects (status codes 301, 302, 20:58:18 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:18 will disable redirect, too. 20:58:18 20:58:18 :param assert_same_host: 20:58:18 If ``True``, will make sure that the host of the pool requests is 20:58:18 consistent else will raise HostChangedError. When ``False``, you can 20:58:18 use the pool on an HTTP proxy and request foreign hosts. 20:58:18 20:58:18 :param timeout: 20:58:18 If specified, overrides the default timeout for this one 20:58:18 request. It may be a float (in seconds) or an instance of 20:58:18 :class:`urllib3.util.Timeout`. 20:58:18 20:58:18 :param pool_timeout: 20:58:18 If set and the pool is set to block=True, then this method will 20:58:18 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:18 connection is available within the time period. 20:58:18 20:58:18 :param bool preload_content: 20:58:18 If True, the response's body will be preloaded into memory. 20:58:18 20:58:18 :param bool decode_content: 20:58:18 If True, will attempt to decode the body based on the 20:58:18 'content-encoding' header. 20:58:18 20:58:18 :param release_conn: 20:58:18 If False, then the urlopen call will not release the connection 20:58:18 back into the pool once a response is received (but will release if 20:58:18 you read the entire contents of the response such as when 20:58:18 `preload_content=True`). This is useful if you're not preloading 20:58:18 the response's content immediately. You will need to call 20:58:18 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:18 back into the pool. If None, it takes the value of ``preload_content`` 20:58:18 which defaults to ``True``. 20:58:18 20:58:18 :param bool chunked: 20:58:18 If True, urllib3 will send the body using chunked transfer 20:58:18 encoding. Otherwise, urllib3 will send the body using the standard 20:58:18 content-length form. Defaults to False. 20:58:18 20:58:18 :param int body_pos: 20:58:18 Position to seek to in file-like body in the event of a retry or 20:58:18 redirect. Typically this won't need to be set because urllib3 will 20:58:18 auto-populate the value when needed. 
20:58:18 """ 20:58:18 parsed_url = parse_url(url) 20:58:18 destination_scheme = parsed_url.scheme 20:58:18 20:58:18 if headers is None: 20:58:18 headers = self.headers 20:58:18 20:58:18 if not isinstance(retries, Retry): 20:58:18 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:18 20:58:18 if release_conn is None: 20:58:18 release_conn = preload_content 20:58:18 20:58:18 # Check host 20:58:18 if assert_same_host and not self.is_same_host(url): 20:58:18 raise HostChangedError(self, url, retries) 20:58:18 20:58:18 # Ensure that the URL we're connecting to is properly encoded 20:58:18 if url.startswith("/"): 20:58:18 url = to_str(_encode_target(url)) 20:58:18 else: 20:58:18 url = to_str(parsed_url.url) 20:58:18 20:58:18 conn = None 20:58:18 20:58:18 # Track whether `conn` needs to be released before 20:58:18 # returning/raising/recursing. Update this variable if necessary, and 20:58:18 # leave `release_conn` constant throughout the function. That way, if 20:58:18 # the function recurses, the original value of `release_conn` will be 20:58:18 # passed down into the recursive call, and its value will be respected. 20:58:18 # 20:58:18 # See issue #651 [1] for details. 20:58:18 # 20:58:18 # [1] 20:58:18 release_this_conn = release_conn 20:58:18 20:58:18 http_tunnel_required = connection_requires_http_tunnel( 20:58:18 self.proxy, self.proxy_config, destination_scheme 20:58:18 ) 20:58:18 20:58:18 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:18 # have to copy the headers dict so we can safely change it without those 20:58:18 # changes being reflected in anyone else's copy. 20:58:18 if not http_tunnel_required: 20:58:18 headers = headers.copy() # type: ignore[attr-defined] 20:58:18 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:18 20:58:18 # Must keep the exception bound to a separate variable or else Python 3 20:58:18 # complains about UnboundLocalError. 20:58:18 err = None 20:58:18 20:58:18 # Keep track of whether we cleanly exited the except block. This 20:58:18 # ensures we do proper cleanup in finally. 20:58:18 clean_exit = False 20:58:18 20:58:18 # Rewind body position, if needed. Record current position 20:58:18 # for future rewinds in the event of a redirect/retry. 20:58:18 body_pos = set_file_position(body, body_pos) 20:58:18 20:58:18 try: 20:58:18 # Request a connection from the queue. 20:58:18 timeout_obj = self._get_timeout(timeout) 20:58:18 conn = self._get_conn(timeout=pool_timeout) 20:58:18 20:58:18 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:18 20:58:18 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:18 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:18 try: 20:58:18 self._prepare_proxy(conn) 20:58:18 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:18 self._raise_timeout( 20:58:18 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:18 ) 20:58:18 raise 20:58:18 20:58:18 # If we're going to release the connection in ``finally:``, then 20:58:18 # the response doesn't need to know about the connection. Otherwise 20:58:18 # it will also try to release it and we'll have a double-release 20:58:18 # mess. 
20:58:18 response_conn = conn if not release_conn else None 20:58:18 20:58:18 # Make the request on the HTTPConnection object 20:58:18 > response = self._make_request( 20:58:18 conn, 20:58:18 method, 20:58:18 url, 20:58:18 timeout=timeout_obj, 20:58:18 body=body, 20:58:18 headers=headers, 20:58:18 chunked=chunked, 20:58:18 retries=retries, 20:58:18 response_conn=response_conn, 20:58:18 preload_content=preload_content, 20:58:18 decode_content=decode_content, 20:58:18 **response_kw, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:18 conn.request( 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:18 self.endheaders() 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:18 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:18 self.send(msg) 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:18 self.connect() 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:18 self.sock = self._new_conn() 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = 20:58:18 20:58:18 def _new_conn(self) -> socket.socket: 20:58:18 """Establish a socket connection and set nodelay settings on it. 20:58:18 20:58:18 :return: New socket connection. 20:58:18 """ 20:58:18 try: 20:58:18 sock = connection.create_connection( 20:58:18 (self._dns_host, self.port), 20:58:18 self.timeout, 20:58:18 source_address=self.source_address, 20:58:18 socket_options=self.socket_options, 20:58:18 ) 20:58:18 except socket.gaierror as e: 20:58:18 raise NameResolutionError(self.host, self, e) from e 20:58:18 except SocketTimeout as e: 20:58:18 raise ConnectTimeoutError( 20:58:18 self, 20:58:18 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 20:58:18 ) from e 20:58:18 20:58:18 except OSError as e: 20:58:18 > raise NewConnectionError( 20:58:18 self, f"Failed to establish a new connection: {e}" 20:58:18 ) from e 20:58:18 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:18 20:58:18 The above exception was the direct cause of the following exception: 20:58:18 20:58:18 self = 20:58:18 request = , stream = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:18 proxies = OrderedDict() 20:58:18 20:58:18 def send( 20:58:18 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:18 ): 20:58:18 """Sends PreparedRequest object. Returns Response object. 20:58:18 20:58:18 :param request: The :class:`PreparedRequest ` being sent. 20:58:18 :param stream: (optional) Whether to stream the request content. 20:58:18 :param timeout: (optional) How long to wait for the server to send 20:58:18 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:18 read timeout) ` tuple. 
20:58:18 :type timeout: float or tuple or urllib3 Timeout object 20:58:18 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:18 we verify the server's TLS certificate, or a string, in which case it 20:58:18 must be a path to a CA bundle to use 20:58:18 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:18 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:18 :rtype: requests.Response 20:58:18 """ 20:58:18 20:58:18 try: 20:58:18 conn = self.get_connection_with_tls_context( 20:58:18 request, verify, proxies=proxies, cert=cert 20:58:18 ) 20:58:18 except LocationValueError as e: 20:58:18 raise InvalidURL(e, request=request) 20:58:18 20:58:18 self.cert_verify(conn, request.url, verify, cert) 20:58:18 url = self.request_url(request, proxies) 20:58:18 self.add_headers( 20:58:18 request, 20:58:18 stream=stream, 20:58:18 timeout=timeout, 20:58:18 verify=verify, 20:58:18 cert=cert, 20:58:18 proxies=proxies, 20:58:18 ) 20:58:18 20:58:18 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:18 20:58:18 if isinstance(timeout, tuple): 20:58:18 try: 20:58:18 connect, read = timeout 20:58:18 timeout = TimeoutSauce(connect=connect, read=read) 20:58:18 except ValueError: 20:58:18 raise ValueError( 20:58:18 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:18 f"or a single float to set both timeouts to the same value." 20:58:18 ) 20:58:18 elif isinstance(timeout, TimeoutSauce): 20:58:18 pass 20:58:18 else: 20:58:18 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:18 20:58:18 try: 20:58:18 > resp = conn.urlopen( 20:58:18 method=request.method, 20:58:18 url=url, 20:58:18 body=request.body, 20:58:18 headers=request.headers, 20:58:18 redirect=False, 20:58:18 assert_same_host=False, 20:58:18 preload_content=False, 20:58:18 decode_content=False, 20:58:18 retries=self.max_retries, 20:58:18 timeout=timeout, 20:58:18 chunked=chunked, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:18 retries = retries.increment( 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:18 method = 'PUT' 20:58:18 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 20:58:18 response = None 20:58:18 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:18 _pool = 20:58:18 _stacktrace = 20:58:18 20:58:18 def increment( 20:58:18 self, 20:58:18 method: str | None = None, 20:58:18 url: str | None = None, 20:58:18 response: BaseHTTPResponse | None = None, 20:58:18 error: Exception | None = None, 20:58:18 _pool: ConnectionPool | None = None, 20:58:18 _stacktrace: TracebackType | None = None, 20:58:18 ) -> Self: 20:58:18 """Return a new Retry object with incremented retry counters. 20:58:18 20:58:18 :param response: A response object, or None, if the server did not 20:58:18 return a response. 20:58:18 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:18 :param Exception error: An error encountered during the request, or 20:58:18 None if the response was received successfully. 
20:58:18 20:58:18 :return: A new ``Retry`` object. 20:58:18 """ 20:58:18 if self.total is False and error: 20:58:18 # Disabled, indicate to re-raise the error. 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 20:58:18 total = self.total 20:58:18 if total is not None: 20:58:18 total -= 1 20:58:18 20:58:18 connect = self.connect 20:58:18 read = self.read 20:58:18 redirect = self.redirect 20:58:18 status_count = self.status 20:58:18 other = self.other 20:58:18 cause = "unknown" 20:58:18 status = None 20:58:18 redirect_location = None 20:58:18 20:58:18 if error and self._is_connection_error(error): 20:58:18 # Connect retry? 20:58:18 if connect is False: 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 elif connect is not None: 20:58:18 connect -= 1 20:58:18 20:58:18 elif error and self._is_read_error(error): 20:58:18 # Read retry? 20:58:18 if read is False or method is None or not self._is_method_retryable(method): 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 elif read is not None: 20:58:18 read -= 1 20:58:18 20:58:18 elif error: 20:58:18 # Other retry? 20:58:18 if other is not None: 20:58:18 other -= 1 20:58:18 20:58:18 elif response and response.get_redirect_location(): 20:58:18 # Redirect retry? 20:58:18 if redirect is not None: 20:58:18 redirect -= 1 20:58:18 cause = "too many redirects" 20:58:18 response_redirect_location = response.get_redirect_location() 20:58:18 if response_redirect_location: 20:58:18 redirect_location = response_redirect_location 20:58:18 status = response.status 20:58:18 20:58:18 else: 20:58:18 # Incrementing because of a server error like a 500 in 20:58:18 # status_forcelist and the given method is in the allowed_methods 20:58:18 cause = ResponseError.GENERIC_ERROR 20:58:18 if response and response.status: 20:58:18 if status_count is not None: 20:58:18 status_count -= 1 20:58:18 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:18 status = response.status 20:58:18 20:58:18 history = self.history + ( 20:58:18 RequestHistory(method, url, error, status, redirect_location), 20:58:18 ) 20:58:18 20:58:18 new_retry = self.new( 20:58:18 total=total, 20:58:18 connect=connect, 20:58:18 read=read, 20:58:18 redirect=redirect, 20:58:18 status=status_count, 20:58:18 other=other, 20:58:18 history=history, 20:58:18 ) 20:58:18 20:58:18 if new_retry.is_exhausted(): 20:58:18 reason = error or ResponseError(cause) 20:58:18 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:18 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:18 20:58:18 During handling of the above exception, another exception occurred: 20:58:18 20:58:18 self = 20:58:18 20:58:18 def test_03_rdmA_device_connected(self): 20:58:18 > response = test_utils.mount_device("ROADMA01", ('roadma-full', self.NODE_VERSION)) 20:58:18 20:58:18 transportpce_tests/1.2.1/test05_olm.py:60: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 transportpce_tests/common/test_utils.py:341: in mount_device 20:58:18 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 20:58:18 
transportpce_tests/common/test_utils.py:124: in put_request 20:58:18 return requests.request( 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:18 return session.request(method=method, url=url, **kwargs) 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:18 resp = self.send(prep, **send_kwargs) 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:18 r = adapter.send(request, **kwargs) 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = 20:58:18 request = , stream = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:18 proxies = OrderedDict() 20:58:18 20:58:18 def send( 20:58:18 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:18 ): 20:58:18 """Sends PreparedRequest object. Returns Response object. 20:58:18 20:58:18 :param request: The :class:`PreparedRequest ` being sent. 20:58:18 :param stream: (optional) Whether to stream the request content. 20:58:18 :param timeout: (optional) How long to wait for the server to send 20:58:18 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:18 read timeout) ` tuple. 20:58:18 :type timeout: float or tuple or urllib3 Timeout object 20:58:18 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:18 we verify the server's TLS certificate, or a string, in which case it 20:58:18 must be a path to a CA bundle to use 20:58:18 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:18 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:18 :rtype: requests.Response 20:58:18 """ 20:58:18 20:58:18 try: 20:58:18 conn = self.get_connection_with_tls_context( 20:58:18 request, verify, proxies=proxies, cert=cert 20:58:18 ) 20:58:18 except LocationValueError as e: 20:58:18 raise InvalidURL(e, request=request) 20:58:18 20:58:18 self.cert_verify(conn, request.url, verify, cert) 20:58:18 url = self.request_url(request, proxies) 20:58:18 self.add_headers( 20:58:18 request, 20:58:18 stream=stream, 20:58:18 timeout=timeout, 20:58:18 verify=verify, 20:58:18 cert=cert, 20:58:18 proxies=proxies, 20:58:18 ) 20:58:18 20:58:18 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:18 20:58:18 if isinstance(timeout, tuple): 20:58:18 try: 20:58:18 connect, read = timeout 20:58:18 timeout = TimeoutSauce(connect=connect, read=read) 20:58:18 except ValueError: 20:58:18 raise ValueError( 20:58:18 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:18 f"or a single float to set both timeouts to the same value." 
20:58:18 ) 20:58:18 elif isinstance(timeout, TimeoutSauce): 20:58:18 pass 20:58:18 else: 20:58:18 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:18 20:58:18 try: 20:58:18 resp = conn.urlopen( 20:58:18 method=request.method, 20:58:18 url=url, 20:58:18 body=request.body, 20:58:18 headers=request.headers, 20:58:18 redirect=False, 20:58:18 assert_same_host=False, 20:58:18 preload_content=False, 20:58:18 decode_content=False, 20:58:18 retries=self.max_retries, 20:58:18 timeout=timeout, 20:58:18 chunked=chunked, 20:58:18 ) 20:58:18 20:58:18 except (ProtocolError, OSError) as err: 20:58:18 raise ConnectionError(err, request=request) 20:58:18 20:58:18 except MaxRetryError as e: 20:58:18 if isinstance(e.reason, ConnectTimeoutError): 20:58:18 # TODO: Remove this in 3.0.0: see #2811 20:58:18 if not isinstance(e.reason, NewConnectionError): 20:58:18 raise ConnectTimeout(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, ResponseError): 20:58:18 raise RetryError(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, _ProxyError): 20:58:18 raise ProxyError(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, _SSLError): 20:58:18 # This branch is for urllib3 v1.22 and later. 20:58:18 raise SSLError(e, request=request) 20:58:18 20:58:18 > raise ConnectionError(e, request=request) 20:58:18 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:18 ----------------------------- Captured stdout call ----------------------------- 20:58:18 execution of test_03_rdmA_device_connected 20:58:18 ______________ TransportOlmTesting.test_04_rdmC_device_connected _______________ 20:58:18 20:58:18 self = 20:58:18 20:58:18 def _new_conn(self) -> socket.socket: 20:58:18 """Establish a socket connection and set nodelay settings on it. 20:58:18 20:58:18 :return: New socket connection. 20:58:18 """ 20:58:18 try: 20:58:18 > sock = connection.create_connection( 20:58:18 (self._dns_host, self.port), 20:58:18 self.timeout, 20:58:18 source_address=self.source_address, 20:58:18 socket_options=self.socket_options, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:18 raise err 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:18 socket_options = [(6, 1, 1)] 20:58:18 20:58:18 def create_connection( 20:58:18 address: tuple[str, int], 20:58:18 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:18 source_address: tuple[str, int] | None = None, 20:58:18 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:18 ) -> socket.socket: 20:58:18 """Connect to *address* and return the socket object. 20:58:18 20:58:18 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:18 port)``) and return the socket object. Passing the optional 20:58:18 *timeout* parameter will set the timeout on the socket instance 20:58:18 before attempting to connect. 
If no *timeout* is supplied, the 20:58:18 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:18 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:18 for the socket to bind as a source address before making the connection. 20:58:18 An host of '' or port 0 tells the OS to use the default. 20:58:18 """ 20:58:18 20:58:18 host, port = address 20:58:18 if host.startswith("["): 20:58:18 host = host.strip("[]") 20:58:18 err = None 20:58:18 20:58:18 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:18 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:18 # The original create_connection function always returns all records. 20:58:18 family = allowed_gai_family() 20:58:18 20:58:18 try: 20:58:18 host.encode("idna") 20:58:18 except UnicodeError: 20:58:18 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:18 20:58:18 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:18 af, socktype, proto, canonname, sa = res 20:58:18 sock = None 20:58:18 try: 20:58:18 sock = socket.socket(af, socktype, proto) 20:58:18 20:58:18 # If provided, set socket level options before connecting. 20:58:18 _set_socket_options(sock, socket_options) 20:58:18 20:58:18 if timeout is not _DEFAULT_TIMEOUT: 20:58:18 sock.settimeout(timeout) 20:58:18 if source_address: 20:58:18 sock.bind(source_address) 20:58:18 > sock.connect(sa) 20:58:18 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:18 20:58:18 The above exception was the direct cause of the following exception: 20:58:18 20:58:18 self = 20:58:18 method = 'PUT' 20:58:18 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01' 20:58:18 body = '{"node": [{"node-id": "ROADMC01", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...is": "60000", "netconf-node-topology:max-connection-attempts": "0", "netconf-node-topology:keepalive-delay": "120"}}]}' 20:58:18 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '629', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:18 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:18 redirect = False, assert_same_host = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:18 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:18 decode_content = False, response_kw = {} 20:58:18 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01', query=None, fragment=None) 20:58:18 destination_scheme = None, conn = None, release_this_conn = True 20:58:18 http_tunnel_required = False, err = None, clean_exit = False 20:58:18 20:58:18 def urlopen( # type: ignore[override] 20:58:18 self, 20:58:18 method: str, 20:58:18 url: str, 20:58:18 body: _TYPE_BODY | None = None, 20:58:18 headers: typing.Mapping[str, str] | None = None, 20:58:18 retries: Retry | bool | int | None = None, 20:58:18 redirect: bool = True, 20:58:18 assert_same_host: bool = True, 20:58:18 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:18 pool_timeout: int | None = None, 20:58:18 
release_conn: bool | None = None, 20:58:18 chunked: bool = False, 20:58:18 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:18 preload_content: bool = True, 20:58:18 decode_content: bool = True, 20:58:18 **response_kw: typing.Any, 20:58:18 ) -> BaseHTTPResponse: 20:58:18 """ 20:58:18 Get a connection from the pool and perform an HTTP request. This is the 20:58:18 lowest level call for making a request, so you'll need to specify all 20:58:18 the raw details. 20:58:18 20:58:18 .. note:: 20:58:18 20:58:18 More commonly, it's appropriate to use a convenience method 20:58:18 such as :meth:`request`. 20:58:18 20:58:18 .. note:: 20:58:18 20:58:18 `release_conn` will only behave as expected if 20:58:18 `preload_content=False` because we want to make 20:58:18 `preload_content=False` the default behaviour someday soon without 20:58:18 breaking backwards compatibility. 20:58:18 20:58:18 :param method: 20:58:18 HTTP request method (such as GET, POST, PUT, etc.) 20:58:18 20:58:18 :param url: 20:58:18 The URL to perform the request on. 20:58:18 20:58:18 :param body: 20:58:18 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:18 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:18 20:58:18 :param headers: 20:58:18 Dictionary of custom headers to send, such as User-Agent, 20:58:18 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:18 these headers completely replace any pool-specific headers. 20:58:18 20:58:18 :param retries: 20:58:18 Configure the number of retries to allow before raising a 20:58:18 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:18 20:58:18 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:18 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:18 over different types of retries. 20:58:18 Pass an integer number to retry connection errors that many times, 20:58:18 but no other types of errors. Pass zero to never retry. 20:58:18 20:58:18 If ``False``, then retries are disabled and any exception is raised 20:58:18 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:18 the redirect response will be returned. 20:58:18 20:58:18 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:18 20:58:18 :param redirect: 20:58:18 If True, automatically handle redirects (status codes 301, 302, 20:58:18 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:18 will disable redirect, too. 20:58:18 20:58:18 :param assert_same_host: 20:58:18 If ``True``, will make sure that the host of the pool requests is 20:58:18 consistent else will raise HostChangedError. When ``False``, you can 20:58:18 use the pool on an HTTP proxy and request foreign hosts. 20:58:18 20:58:18 :param timeout: 20:58:18 If specified, overrides the default timeout for this one 20:58:18 request. It may be a float (in seconds) or an instance of 20:58:18 :class:`urllib3.util.Timeout`. 20:58:18 20:58:18 :param pool_timeout: 20:58:18 If set and the pool is set to block=True, then this method will 20:58:18 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:18 connection is available within the time period. 20:58:18 20:58:18 :param bool preload_content: 20:58:18 If True, the response's body will be preloaded into memory. 20:58:18 20:58:18 :param bool decode_content: 20:58:18 If True, will attempt to decode the body based on the 20:58:18 'content-encoding' header. 
20:58:18 20:58:18 :param release_conn: 20:58:18 If False, then the urlopen call will not release the connection 20:58:18 back into the pool once a response is received (but will release if 20:58:18 you read the entire contents of the response such as when 20:58:18 `preload_content=True`). This is useful if you're not preloading 20:58:18 the response's content immediately. You will need to call 20:58:18 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:18 back into the pool. If None, it takes the value of ``preload_content`` 20:58:18 which defaults to ``True``. 20:58:18 20:58:18 :param bool chunked: 20:58:18 If True, urllib3 will send the body using chunked transfer 20:58:18 encoding. Otherwise, urllib3 will send the body using the standard 20:58:18 content-length form. Defaults to False. 20:58:18 20:58:18 :param int body_pos: 20:58:18 Position to seek to in file-like body in the event of a retry or 20:58:18 redirect. Typically this won't need to be set because urllib3 will 20:58:18 auto-populate the value when needed. 20:58:18 """ 20:58:18 parsed_url = parse_url(url) 20:58:18 destination_scheme = parsed_url.scheme 20:58:18 20:58:18 if headers is None: 20:58:18 headers = self.headers 20:58:18 20:58:18 if not isinstance(retries, Retry): 20:58:18 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:18 20:58:18 if release_conn is None: 20:58:18 release_conn = preload_content 20:58:18 20:58:18 # Check host 20:58:18 if assert_same_host and not self.is_same_host(url): 20:58:18 raise HostChangedError(self, url, retries) 20:58:18 20:58:18 # Ensure that the URL we're connecting to is properly encoded 20:58:18 if url.startswith("/"): 20:58:18 url = to_str(_encode_target(url)) 20:58:18 else: 20:58:18 url = to_str(parsed_url.url) 20:58:18 20:58:18 conn = None 20:58:18 20:58:18 # Track whether `conn` needs to be released before 20:58:18 # returning/raising/recursing. Update this variable if necessary, and 20:58:18 # leave `release_conn` constant throughout the function. That way, if 20:58:18 # the function recurses, the original value of `release_conn` will be 20:58:18 # passed down into the recursive call, and its value will be respected. 20:58:18 # 20:58:18 # See issue #651 [1] for details. 20:58:18 # 20:58:18 # [1] 20:58:18 release_this_conn = release_conn 20:58:18 20:58:18 http_tunnel_required = connection_requires_http_tunnel( 20:58:18 self.proxy, self.proxy_config, destination_scheme 20:58:18 ) 20:58:18 20:58:18 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:18 # have to copy the headers dict so we can safely change it without those 20:58:18 # changes being reflected in anyone else's copy. 20:58:18 if not http_tunnel_required: 20:58:18 headers = headers.copy() # type: ignore[attr-defined] 20:58:18 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:18 20:58:18 # Must keep the exception bound to a separate variable or else Python 3 20:58:18 # complains about UnboundLocalError. 20:58:18 err = None 20:58:18 20:58:18 # Keep track of whether we cleanly exited the except block. This 20:58:18 # ensures we do proper cleanup in finally. 20:58:18 clean_exit = False 20:58:18 20:58:18 # Rewind body position, if needed. Record current position 20:58:18 # for future rewinds in the event of a redirect/retry. 20:58:18 body_pos = set_file_position(body, body_pos) 20:58:18 20:58:18 try: 20:58:18 # Request a connection from the queue. 
20:58:18 timeout_obj = self._get_timeout(timeout) 20:58:18 conn = self._get_conn(timeout=pool_timeout) 20:58:18 20:58:18 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:18 20:58:18 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:18 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:18 try: 20:58:18 self._prepare_proxy(conn) 20:58:18 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:18 self._raise_timeout( 20:58:18 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:18 ) 20:58:18 raise 20:58:18 20:58:18 # If we're going to release the connection in ``finally:``, then 20:58:18 # the response doesn't need to know about the connection. Otherwise 20:58:18 # it will also try to release it and we'll have a double-release 20:58:18 # mess. 20:58:18 response_conn = conn if not release_conn else None 20:58:18 20:58:18 # Make the request on the HTTPConnection object 20:58:18 > response = self._make_request( 20:58:18 conn, 20:58:18 method, 20:58:18 url, 20:58:18 timeout=timeout_obj, 20:58:18 body=body, 20:58:18 headers=headers, 20:58:18 chunked=chunked, 20:58:18 retries=retries, 20:58:18 response_conn=response_conn, 20:58:18 preload_content=preload_content, 20:58:18 decode_content=decode_content, 20:58:18 **response_kw, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:18 conn.request( 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:18 self.endheaders() 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:18 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:18 self.send(msg) 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:18 self.connect() 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:18 self.sock = self._new_conn() 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = 20:58:18 20:58:18 def _new_conn(self) -> socket.socket: 20:58:18 """Establish a socket connection and set nodelay settings on it. 20:58:18 20:58:18 :return: New socket connection. 20:58:18 """ 20:58:18 try: 20:58:18 sock = connection.create_connection( 20:58:18 (self._dns_host, self.port), 20:58:18 self.timeout, 20:58:18 source_address=self.source_address, 20:58:18 socket_options=self.socket_options, 20:58:18 ) 20:58:18 except socket.gaierror as e: 20:58:18 raise NameResolutionError(self.host, self, e) from e 20:58:18 except SocketTimeout as e: 20:58:18 raise ConnectTimeoutError( 20:58:18 self, 20:58:18 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 20:58:18 ) from e 20:58:18 20:58:18 except OSError as e: 20:58:18 > raise NewConnectionError( 20:58:18 self, f"Failed to establish a new connection: {e}" 20:58:18 ) from e 20:58:18 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:18 20:58:18 The above exception was the direct cause of the following exception: 20:58:18 20:58:18 self = 20:58:18 request = , stream = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:18 proxies = OrderedDict() 20:58:18 20:58:18 def send( 20:58:18 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:18 ): 20:58:18 """Sends PreparedRequest object. Returns Response object. 20:58:18 20:58:18 :param request: The :class:`PreparedRequest ` being sent. 20:58:18 :param stream: (optional) Whether to stream the request content. 20:58:18 :param timeout: (optional) How long to wait for the server to send 20:58:18 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:18 read timeout) ` tuple. 20:58:18 :type timeout: float or tuple or urllib3 Timeout object 20:58:18 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:18 we verify the server's TLS certificate, or a string, in which case it 20:58:18 must be a path to a CA bundle to use 20:58:18 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:18 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:18 :rtype: requests.Response 20:58:18 """ 20:58:18 20:58:18 try: 20:58:18 conn = self.get_connection_with_tls_context( 20:58:18 request, verify, proxies=proxies, cert=cert 20:58:18 ) 20:58:18 except LocationValueError as e: 20:58:18 raise InvalidURL(e, request=request) 20:58:18 20:58:18 self.cert_verify(conn, request.url, verify, cert) 20:58:18 url = self.request_url(request, proxies) 20:58:18 self.add_headers( 20:58:18 request, 20:58:18 stream=stream, 20:58:18 timeout=timeout, 20:58:18 verify=verify, 20:58:18 cert=cert, 20:58:18 proxies=proxies, 20:58:18 ) 20:58:18 20:58:18 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:18 20:58:18 if isinstance(timeout, tuple): 20:58:18 try: 20:58:18 connect, read = timeout 20:58:18 timeout = TimeoutSauce(connect=connect, read=read) 20:58:18 except ValueError: 20:58:18 raise ValueError( 20:58:18 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:18 f"or a single float to set both timeouts to the same value." 
20:58:18 ) 20:58:18 elif isinstance(timeout, TimeoutSauce): 20:58:18 pass 20:58:18 else: 20:58:18 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:18 20:58:18 try: 20:58:18 > resp = conn.urlopen( 20:58:18 method=request.method, 20:58:18 url=url, 20:58:18 body=request.body, 20:58:18 headers=request.headers, 20:58:18 redirect=False, 20:58:18 assert_same_host=False, 20:58:18 preload_content=False, 20:58:18 decode_content=False, 20:58:18 retries=self.max_retries, 20:58:18 timeout=timeout, 20:58:18 chunked=chunked, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:18 retries = retries.increment( 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:18 method = 'PUT' 20:58:18 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01' 20:58:18 response = None 20:58:18 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:18 _pool = 20:58:18 _stacktrace = 20:58:18 20:58:18 def increment( 20:58:18 self, 20:58:18 method: str | None = None, 20:58:18 url: str | None = None, 20:58:18 response: BaseHTTPResponse | None = None, 20:58:18 error: Exception | None = None, 20:58:18 _pool: ConnectionPool | None = None, 20:58:18 _stacktrace: TracebackType | None = None, 20:58:18 ) -> Self: 20:58:18 """Return a new Retry object with incremented retry counters. 20:58:18 20:58:18 :param response: A response object, or None, if the server did not 20:58:18 return a response. 20:58:18 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:18 :param Exception error: An error encountered during the request, or 20:58:18 None if the response was received successfully. 20:58:18 20:58:18 :return: A new ``Retry`` object. 20:58:18 """ 20:58:18 if self.total is False and error: 20:58:18 # Disabled, indicate to re-raise the error. 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 20:58:18 total = self.total 20:58:18 if total is not None: 20:58:18 total -= 1 20:58:18 20:58:18 connect = self.connect 20:58:18 read = self.read 20:58:18 redirect = self.redirect 20:58:18 status_count = self.status 20:58:18 other = self.other 20:58:18 cause = "unknown" 20:58:18 status = None 20:58:18 redirect_location = None 20:58:18 20:58:18 if error and self._is_connection_error(error): 20:58:18 # Connect retry? 20:58:18 if connect is False: 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 elif connect is not None: 20:58:18 connect -= 1 20:58:18 20:58:18 elif error and self._is_read_error(error): 20:58:18 # Read retry? 20:58:18 if read is False or method is None or not self._is_method_retryable(method): 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 elif read is not None: 20:58:18 read -= 1 20:58:18 20:58:18 elif error: 20:58:18 # Other retry? 20:58:18 if other is not None: 20:58:18 other -= 1 20:58:18 20:58:18 elif response and response.get_redirect_location(): 20:58:18 # Redirect retry? 
20:58:18 if redirect is not None: 20:58:18 redirect -= 1 20:58:18 cause = "too many redirects" 20:58:18 response_redirect_location = response.get_redirect_location() 20:58:18 if response_redirect_location: 20:58:18 redirect_location = response_redirect_location 20:58:18 status = response.status 20:58:18 20:58:18 else: 20:58:18 # Incrementing because of a server error like a 500 in 20:58:18 # status_forcelist and the given method is in the allowed_methods 20:58:18 cause = ResponseError.GENERIC_ERROR 20:58:18 if response and response.status: 20:58:18 if status_count is not None: 20:58:18 status_count -= 1 20:58:18 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:18 status = response.status 20:58:18 20:58:18 history = self.history + ( 20:58:18 RequestHistory(method, url, error, status, redirect_location), 20:58:18 ) 20:58:18 20:58:18 new_retry = self.new( 20:58:18 total=total, 20:58:18 connect=connect, 20:58:18 read=read, 20:58:18 redirect=redirect, 20:58:18 status=status_count, 20:58:18 other=other, 20:58:18 history=history, 20:58:18 ) 20:58:18 20:58:18 if new_retry.is_exhausted(): 20:58:18 reason = error or ResponseError(cause) 20:58:18 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:18 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:18 20:58:18 During handling of the above exception, another exception occurred: 20:58:18 20:58:18 self = 20:58:18 20:58:18 def test_04_rdmC_device_connected(self): 20:58:18 > response = test_utils.mount_device("ROADMC01", ('roadmc-full', self.NODE_VERSION)) 20:58:18 20:58:18 transportpce_tests/1.2.1/test05_olm.py:64: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 transportpce_tests/common/test_utils.py:341: in mount_device 20:58:18 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 20:58:18 transportpce_tests/common/test_utils.py:124: in put_request 20:58:18 return requests.request( 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:18 return session.request(method=method, url=url, **kwargs) 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:18 resp = self.send(prep, **send_kwargs) 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:18 r = adapter.send(request, **kwargs) 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = 20:58:18 request = , stream = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:18 proxies = OrderedDict() 20:58:18 20:58:18 def send( 20:58:18 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:18 ): 20:58:18 """Sends PreparedRequest object. Returns Response object. 20:58:18 20:58:18 :param request: The :class:`PreparedRequest ` being sent. 20:58:18 :param stream: (optional) Whether to stream the request content. 
20:58:18 :param timeout: (optional) How long to wait for the server to send 20:58:18 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:18 read timeout) ` tuple. 20:58:18 :type timeout: float or tuple or urllib3 Timeout object 20:58:18 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:18 we verify the server's TLS certificate, or a string, in which case it 20:58:18 must be a path to a CA bundle to use 20:58:18 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:18 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:18 :rtype: requests.Response 20:58:18 """ 20:58:18 20:58:18 try: 20:58:18 conn = self.get_connection_with_tls_context( 20:58:18 request, verify, proxies=proxies, cert=cert 20:58:18 ) 20:58:18 except LocationValueError as e: 20:58:18 raise InvalidURL(e, request=request) 20:58:18 20:58:18 self.cert_verify(conn, request.url, verify, cert) 20:58:18 url = self.request_url(request, proxies) 20:58:18 self.add_headers( 20:58:18 request, 20:58:18 stream=stream, 20:58:18 timeout=timeout, 20:58:18 verify=verify, 20:58:18 cert=cert, 20:58:18 proxies=proxies, 20:58:18 ) 20:58:18 20:58:18 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:18 20:58:18 if isinstance(timeout, tuple): 20:58:18 try: 20:58:18 connect, read = timeout 20:58:18 timeout = TimeoutSauce(connect=connect, read=read) 20:58:18 except ValueError: 20:58:18 raise ValueError( 20:58:18 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:18 f"or a single float to set both timeouts to the same value." 20:58:18 ) 20:58:18 elif isinstance(timeout, TimeoutSauce): 20:58:18 pass 20:58:18 else: 20:58:18 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:18 20:58:18 try: 20:58:18 resp = conn.urlopen( 20:58:18 method=request.method, 20:58:18 url=url, 20:58:18 body=request.body, 20:58:18 headers=request.headers, 20:58:18 redirect=False, 20:58:18 assert_same_host=False, 20:58:18 preload_content=False, 20:58:18 decode_content=False, 20:58:18 retries=self.max_retries, 20:58:18 timeout=timeout, 20:58:18 chunked=chunked, 20:58:18 ) 20:58:18 20:58:18 except (ProtocolError, OSError) as err: 20:58:18 raise ConnectionError(err, request=request) 20:58:18 20:58:18 except MaxRetryError as e: 20:58:18 if isinstance(e.reason, ConnectTimeoutError): 20:58:18 # TODO: Remove this in 3.0.0: see #2811 20:58:18 if not isinstance(e.reason, NewConnectionError): 20:58:18 raise ConnectTimeout(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, ResponseError): 20:58:18 raise RetryError(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, _ProxyError): 20:58:18 raise ProxyError(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, _SSLError): 20:58:18 # This branch is for urllib3 v1.22 and later. 
20:58:18 raise SSLError(e, request=request) 20:58:18 20:58:18 > raise ConnectionError(e, request=request) 20:58:18 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:18 ----------------------------- Captured stdout call ----------------------------- 20:58:18 execution of test_04_rdmC_device_connected 20:58:18 _____________ TransportOlmTesting.test_05_connect_xpdrA_to_roadmA ______________ 20:58:18 20:58:18 self = 20:58:18 20:58:18 def _new_conn(self) -> socket.socket: 20:58:18 """Establish a socket connection and set nodelay settings on it. 20:58:18 20:58:18 :return: New socket connection. 20:58:18 """ 20:58:18 try: 20:58:18 > sock = connection.create_connection( 20:58:18 (self._dns_host, self.port), 20:58:18 self.timeout, 20:58:18 source_address=self.source_address, 20:58:18 socket_options=self.socket_options, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:18 raise err 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:18 socket_options = [(6, 1, 1)] 20:58:18 20:58:18 def create_connection( 20:58:18 address: tuple[str, int], 20:58:18 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:18 source_address: tuple[str, int] | None = None, 20:58:18 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:18 ) -> socket.socket: 20:58:18 """Connect to *address* and return the socket object. 20:58:18 20:58:18 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:18 port)``) and return the socket object. Passing the optional 20:58:18 *timeout* parameter will set the timeout on the socket instance 20:58:18 before attempting to connect. If no *timeout* is supplied, the 20:58:18 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:18 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:18 for the socket to bind as a source address before making the connection. 20:58:18 An host of '' or port 0 tells the OS to use the default. 20:58:18 """ 20:58:18 20:58:18 host, port = address 20:58:18 if host.startswith("["): 20:58:18 host = host.strip("[]") 20:58:18 err = None 20:58:18 20:58:18 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:18 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:18 # The original create_connection function always returns all records. 
20:58:18 family = allowed_gai_family() 20:58:18 20:58:18 try: 20:58:18 host.encode("idna") 20:58:18 except UnicodeError: 20:58:18 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:18 20:58:18 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:18 af, socktype, proto, canonname, sa = res 20:58:18 sock = None 20:58:18 try: 20:58:18 sock = socket.socket(af, socktype, proto) 20:58:18 20:58:18 # If provided, set socket level options before connecting. 20:58:18 _set_socket_options(sock, socket_options) 20:58:18 20:58:18 if timeout is not _DEFAULT_TIMEOUT: 20:58:18 sock.settimeout(timeout) 20:58:18 if source_address: 20:58:18 sock.bind(source_address) 20:58:18 > sock.connect(sa) 20:58:18 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:18 20:58:18 The above exception was the direct cause of the following exception: 20:58:18 20:58:18 self = 20:58:18 method = 'POST' 20:58:18 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 20:58:18 body = '{"input": {"links-input": {"xpdr-node": "XPDRA01", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADMA01", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 20:58:18 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:18 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:18 redirect = False, assert_same_host = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:18 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:18 decode_content = False, response_kw = {} 20:58:18 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-xpdr-rdm-links', query=None, fragment=None) 20:58:18 destination_scheme = None, conn = None, release_this_conn = True 20:58:18 http_tunnel_required = False, err = None, clean_exit = False 20:58:18 20:58:18 def urlopen( # type: ignore[override] 20:58:18 self, 20:58:18 method: str, 20:58:18 url: str, 20:58:18 body: _TYPE_BODY | None = None, 20:58:18 headers: typing.Mapping[str, str] | None = None, 20:58:18 retries: Retry | bool | int | None = None, 20:58:18 redirect: bool = True, 20:58:18 assert_same_host: bool = True, 20:58:18 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:18 pool_timeout: int | None = None, 20:58:18 release_conn: bool | None = None, 20:58:18 chunked: bool = False, 20:58:18 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:18 preload_content: bool = True, 20:58:18 decode_content: bool = True, 20:58:18 **response_kw: typing.Any, 20:58:18 ) -> BaseHTTPResponse: 20:58:18 """ 20:58:18 Get a connection from the pool and perform an HTTP request. This is the 20:58:18 lowest level call for making a request, so you'll need to specify all 20:58:18 the raw details. 20:58:18 20:58:18 .. note:: 20:58:18 20:58:18 More commonly, it's appropriate to use a convenience method 20:58:18 such as :meth:`request`. 20:58:18 20:58:18 .. 
note:: 20:58:18 20:58:18 `release_conn` will only behave as expected if 20:58:18 `preload_content=False` because we want to make 20:58:18 `preload_content=False` the default behaviour someday soon without 20:58:18 breaking backwards compatibility. 20:58:18 20:58:18 :param method: 20:58:18 HTTP request method (such as GET, POST, PUT, etc.) 20:58:18 20:58:18 :param url: 20:58:18 The URL to perform the request on. 20:58:18 20:58:18 :param body: 20:58:18 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:18 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:18 20:58:18 :param headers: 20:58:18 Dictionary of custom headers to send, such as User-Agent, 20:58:18 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:18 these headers completely replace any pool-specific headers. 20:58:18 20:58:18 :param retries: 20:58:18 Configure the number of retries to allow before raising a 20:58:18 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:18 20:58:18 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:18 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:18 over different types of retries. 20:58:18 Pass an integer number to retry connection errors that many times, 20:58:18 but no other types of errors. Pass zero to never retry. 20:58:18 20:58:18 If ``False``, then retries are disabled and any exception is raised 20:58:18 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:18 the redirect response will be returned. 20:58:18 20:58:18 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:18 20:58:18 :param redirect: 20:58:18 If True, automatically handle redirects (status codes 301, 302, 20:58:18 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:18 will disable redirect, too. 20:58:18 20:58:18 :param assert_same_host: 20:58:18 If ``True``, will make sure that the host of the pool requests is 20:58:18 consistent else will raise HostChangedError. When ``False``, you can 20:58:18 use the pool on an HTTP proxy and request foreign hosts. 20:58:18 20:58:18 :param timeout: 20:58:18 If specified, overrides the default timeout for this one 20:58:18 request. It may be a float (in seconds) or an instance of 20:58:18 :class:`urllib3.util.Timeout`. 20:58:18 20:58:18 :param pool_timeout: 20:58:18 If set and the pool is set to block=True, then this method will 20:58:18 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:18 connection is available within the time period. 20:58:18 20:58:18 :param bool preload_content: 20:58:18 If True, the response's body will be preloaded into memory. 20:58:18 20:58:18 :param bool decode_content: 20:58:18 If True, will attempt to decode the body based on the 20:58:18 'content-encoding' header. 20:58:18 20:58:18 :param release_conn: 20:58:18 If False, then the urlopen call will not release the connection 20:58:18 back into the pool once a response is received (but will release if 20:58:18 you read the entire contents of the response such as when 20:58:18 `preload_content=True`). This is useful if you're not preloading 20:58:18 the response's content immediately. You will need to call 20:58:18 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:18 back into the pool. If None, it takes the value of ``preload_content`` 20:58:18 which defaults to ``True``. 
20:58:18 20:58:18 :param bool chunked: 20:58:18 If True, urllib3 will send the body using chunked transfer 20:58:18 encoding. Otherwise, urllib3 will send the body using the standard 20:58:18 content-length form. Defaults to False. 20:58:18 20:58:18 :param int body_pos: 20:58:18 Position to seek to in file-like body in the event of a retry or 20:58:18 redirect. Typically this won't need to be set because urllib3 will 20:58:18 auto-populate the value when needed. 20:58:18 """ 20:58:18 parsed_url = parse_url(url) 20:58:18 destination_scheme = parsed_url.scheme 20:58:18 20:58:18 if headers is None: 20:58:18 headers = self.headers 20:58:18 20:58:18 if not isinstance(retries, Retry): 20:58:18 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:18 20:58:18 if release_conn is None: 20:58:18 release_conn = preload_content 20:58:18 20:58:18 # Check host 20:58:18 if assert_same_host and not self.is_same_host(url): 20:58:18 raise HostChangedError(self, url, retries) 20:58:18 20:58:18 # Ensure that the URL we're connecting to is properly encoded 20:58:18 if url.startswith("/"): 20:58:18 url = to_str(_encode_target(url)) 20:58:18 else: 20:58:18 url = to_str(parsed_url.url) 20:58:18 20:58:18 conn = None 20:58:18 20:58:18 # Track whether `conn` needs to be released before 20:58:18 # returning/raising/recursing. Update this variable if necessary, and 20:58:18 # leave `release_conn` constant throughout the function. That way, if 20:58:18 # the function recurses, the original value of `release_conn` will be 20:58:18 # passed down into the recursive call, and its value will be respected. 20:58:18 # 20:58:18 # See issue #651 [1] for details. 20:58:18 # 20:58:18 # [1] 20:58:18 release_this_conn = release_conn 20:58:18 20:58:18 http_tunnel_required = connection_requires_http_tunnel( 20:58:18 self.proxy, self.proxy_config, destination_scheme 20:58:18 ) 20:58:18 20:58:18 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:18 # have to copy the headers dict so we can safely change it without those 20:58:18 # changes being reflected in anyone else's copy. 20:58:18 if not http_tunnel_required: 20:58:18 headers = headers.copy() # type: ignore[attr-defined] 20:58:18 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:18 20:58:18 # Must keep the exception bound to a separate variable or else Python 3 20:58:18 # complains about UnboundLocalError. 20:58:18 err = None 20:58:18 20:58:18 # Keep track of whether we cleanly exited the except block. This 20:58:18 # ensures we do proper cleanup in finally. 20:58:18 clean_exit = False 20:58:18 20:58:18 # Rewind body position, if needed. Record current position 20:58:18 # for future rewinds in the event of a redirect/retry. 20:58:18 body_pos = set_file_position(body, body_pos) 20:58:18 20:58:18 try: 20:58:18 # Request a connection from the queue. 20:58:18 timeout_obj = self._get_timeout(timeout) 20:58:18 conn = self._get_conn(timeout=pool_timeout) 20:58:18 20:58:18 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:18 20:58:18 # Is this a closed/new connection that requires CONNECT tunnelling? 
20:58:18 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:18 try: 20:58:18 self._prepare_proxy(conn) 20:58:18 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:18 self._raise_timeout( 20:58:18 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:18 ) 20:58:18 raise 20:58:18 20:58:18 # If we're going to release the connection in ``finally:``, then 20:58:18 # the response doesn't need to know about the connection. Otherwise 20:58:18 # it will also try to release it and we'll have a double-release 20:58:18 # mess. 20:58:18 response_conn = conn if not release_conn else None 20:58:18 20:58:18 # Make the request on the HTTPConnection object 20:58:18 > response = self._make_request( 20:58:18 conn, 20:58:18 method, 20:58:18 url, 20:58:18 timeout=timeout_obj, 20:58:18 body=body, 20:58:18 headers=headers, 20:58:18 chunked=chunked, 20:58:18 retries=retries, 20:58:18 response_conn=response_conn, 20:58:18 preload_content=preload_content, 20:58:18 decode_content=decode_content, 20:58:18 **response_kw, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:18 conn.request( 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:18 self.endheaders() 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:18 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:18 self.send(msg) 20:58:18 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:18 self.connect() 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:18 self.sock = self._new_conn() 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = 20:58:18 20:58:18 def _new_conn(self) -> socket.socket: 20:58:18 """Establish a socket connection and set nodelay settings on it. 20:58:18 20:58:18 :return: New socket connection. 20:58:18 """ 20:58:18 try: 20:58:18 sock = connection.create_connection( 20:58:18 (self._dns_host, self.port), 20:58:18 self.timeout, 20:58:18 source_address=self.source_address, 20:58:18 socket_options=self.socket_options, 20:58:18 ) 20:58:18 except socket.gaierror as e: 20:58:18 raise NameResolutionError(self.host, self, e) from e 20:58:18 except SocketTimeout as e: 20:58:18 raise ConnectTimeoutError( 20:58:18 self, 20:58:18 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 20:58:18 ) from e 20:58:18 20:58:18 except OSError as e: 20:58:18 > raise NewConnectionError( 20:58:18 self, f"Failed to establish a new connection: {e}" 20:58:18 ) from e 20:58:18 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:18 20:58:18 The above exception was the direct cause of the following exception: 20:58:18 20:58:18 self = 20:58:18 request = , stream = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:18 proxies = OrderedDict() 20:58:18 20:58:18 def send( 20:58:18 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:18 ): 20:58:18 """Sends PreparedRequest object. Returns Response object. 20:58:18 20:58:18 :param request: The :class:`PreparedRequest ` being sent. 20:58:18 :param stream: (optional) Whether to stream the request content. 20:58:18 :param timeout: (optional) How long to wait for the server to send 20:58:18 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:18 read timeout) ` tuple. 20:58:18 :type timeout: float or tuple or urllib3 Timeout object 20:58:18 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:18 we verify the server's TLS certificate, or a string, in which case it 20:58:18 must be a path to a CA bundle to use 20:58:18 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:18 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:18 :rtype: requests.Response 20:58:18 """ 20:58:18 20:58:18 try: 20:58:18 conn = self.get_connection_with_tls_context( 20:58:18 request, verify, proxies=proxies, cert=cert 20:58:18 ) 20:58:18 except LocationValueError as e: 20:58:18 raise InvalidURL(e, request=request) 20:58:18 20:58:18 self.cert_verify(conn, request.url, verify, cert) 20:58:18 url = self.request_url(request, proxies) 20:58:18 self.add_headers( 20:58:18 request, 20:58:18 stream=stream, 20:58:18 timeout=timeout, 20:58:18 verify=verify, 20:58:18 cert=cert, 20:58:18 proxies=proxies, 20:58:18 ) 20:58:18 20:58:18 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:18 20:58:18 if isinstance(timeout, tuple): 20:58:18 try: 20:58:18 connect, read = timeout 20:58:18 timeout = TimeoutSauce(connect=connect, read=read) 20:58:18 except ValueError: 20:58:18 raise ValueError( 20:58:18 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:18 f"or a single float to set both timeouts to the same value." 
20:58:18 ) 20:58:18 elif isinstance(timeout, TimeoutSauce): 20:58:18 pass 20:58:18 else: 20:58:18 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:18 20:58:18 try: 20:58:18 > resp = conn.urlopen( 20:58:18 method=request.method, 20:58:18 url=url, 20:58:18 body=request.body, 20:58:18 headers=request.headers, 20:58:18 redirect=False, 20:58:18 assert_same_host=False, 20:58:18 preload_content=False, 20:58:18 decode_content=False, 20:58:18 retries=self.max_retries, 20:58:18 timeout=timeout, 20:58:18 chunked=chunked, 20:58:18 ) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:18 retries = retries.increment( 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:18 method = 'POST' 20:58:18 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 20:58:18 response = None 20:58:18 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:18 _pool = 20:58:18 _stacktrace = 20:58:18 20:58:18 def increment( 20:58:18 self, 20:58:18 method: str | None = None, 20:58:18 url: str | None = None, 20:58:18 response: BaseHTTPResponse | None = None, 20:58:18 error: Exception | None = None, 20:58:18 _pool: ConnectionPool | None = None, 20:58:18 _stacktrace: TracebackType | None = None, 20:58:18 ) -> Self: 20:58:18 """Return a new Retry object with incremented retry counters. 20:58:18 20:58:18 :param response: A response object, or None, if the server did not 20:58:18 return a response. 20:58:18 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:18 :param Exception error: An error encountered during the request, or 20:58:18 None if the response was received successfully. 20:58:18 20:58:18 :return: A new ``Retry`` object. 20:58:18 """ 20:58:18 if self.total is False and error: 20:58:18 # Disabled, indicate to re-raise the error. 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 20:58:18 total = self.total 20:58:18 if total is not None: 20:58:18 total -= 1 20:58:18 20:58:18 connect = self.connect 20:58:18 read = self.read 20:58:18 redirect = self.redirect 20:58:18 status_count = self.status 20:58:18 other = self.other 20:58:18 cause = "unknown" 20:58:18 status = None 20:58:18 redirect_location = None 20:58:18 20:58:18 if error and self._is_connection_error(error): 20:58:18 # Connect retry? 20:58:18 if connect is False: 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 elif connect is not None: 20:58:18 connect -= 1 20:58:18 20:58:18 elif error and self._is_read_error(error): 20:58:18 # Read retry? 20:58:18 if read is False or method is None or not self._is_method_retryable(method): 20:58:18 raise reraise(type(error), error, _stacktrace) 20:58:18 elif read is not None: 20:58:18 read -= 1 20:58:18 20:58:18 elif error: 20:58:18 # Other retry? 20:58:18 if other is not None: 20:58:18 other -= 1 20:58:18 20:58:18 elif response and response.get_redirect_location(): 20:58:18 # Redirect retry? 
20:58:18 if redirect is not None: 20:58:18 redirect -= 1 20:58:18 cause = "too many redirects" 20:58:18 response_redirect_location = response.get_redirect_location() 20:58:18 if response_redirect_location: 20:58:18 redirect_location = response_redirect_location 20:58:18 status = response.status 20:58:18 20:58:18 else: 20:58:18 # Incrementing because of a server error like a 500 in 20:58:18 # status_forcelist and the given method is in the allowed_methods 20:58:18 cause = ResponseError.GENERIC_ERROR 20:58:18 if response and response.status: 20:58:18 if status_count is not None: 20:58:18 status_count -= 1 20:58:18 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:18 status = response.status 20:58:18 20:58:18 history = self.history + ( 20:58:18 RequestHistory(method, url, error, status, redirect_location), 20:58:18 ) 20:58:18 20:58:18 new_retry = self.new( 20:58:18 total=total, 20:58:18 connect=connect, 20:58:18 read=read, 20:58:18 redirect=redirect, 20:58:18 status=status_count, 20:58:18 other=other, 20:58:18 history=history, 20:58:18 ) 20:58:18 20:58:18 if new_retry.is_exhausted(): 20:58:18 reason = error or ResponseError(cause) 20:58:18 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:18 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:18 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:18 20:58:18 During handling of the above exception, another exception occurred: 20:58:18 20:58:18 self = 20:58:18 20:58:18 def test_05_connect_xpdrA_to_roadmA(self): 20:58:18 > response = test_utils.transportpce_api_rpc_request( 20:58:18 'transportpce-networkutils', 'init-xpdr-rdm-links', 20:58:18 {'links-input': {'xpdr-node': 'XPDRA01', 'xpdr-num': '1', 'network-num': '1', 20:58:18 'rdm-node': 'ROADMA01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 20:58:18 20:58:18 transportpce_tests/1.2.1/test05_olm.py:68: 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 transportpce_tests/common/test_utils.py:685: in transportpce_api_rpc_request 20:58:18 response = post_request(url, data) 20:58:18 transportpce_tests/common/test_utils.py:142: in post_request 20:58:18 return requests.request( 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:18 return session.request(method=method, url=url, **kwargs) 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:18 resp = self.send(prep, **send_kwargs) 20:58:18 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:18 r = adapter.send(request, **kwargs) 20:58:18 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:18 20:58:18 self = 20:58:18 request = , stream = False 20:58:18 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:18 proxies = OrderedDict() 20:58:18 20:58:18 def send( 20:58:18 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:18 ): 20:58:18 """Sends PreparedRequest object. Returns Response object. 20:58:18 20:58:18 :param request: The :class:`PreparedRequest ` being sent. 
20:58:18 :param stream: (optional) Whether to stream the request content. 20:58:18 :param timeout: (optional) How long to wait for the server to send 20:58:18 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:18 read timeout) ` tuple. 20:58:18 :type timeout: float or tuple or urllib3 Timeout object 20:58:18 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:18 we verify the server's TLS certificate, or a string, in which case it 20:58:18 must be a path to a CA bundle to use 20:58:18 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:18 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:18 :rtype: requests.Response 20:58:18 """ 20:58:18 20:58:18 try: 20:58:18 conn = self.get_connection_with_tls_context( 20:58:18 request, verify, proxies=proxies, cert=cert 20:58:18 ) 20:58:18 except LocationValueError as e: 20:58:18 raise InvalidURL(e, request=request) 20:58:18 20:58:18 self.cert_verify(conn, request.url, verify, cert) 20:58:18 url = self.request_url(request, proxies) 20:58:18 self.add_headers( 20:58:18 request, 20:58:18 stream=stream, 20:58:18 timeout=timeout, 20:58:18 verify=verify, 20:58:18 cert=cert, 20:58:18 proxies=proxies, 20:58:18 ) 20:58:18 20:58:18 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:18 20:58:18 if isinstance(timeout, tuple): 20:58:18 try: 20:58:18 connect, read = timeout 20:58:18 timeout = TimeoutSauce(connect=connect, read=read) 20:58:18 except ValueError: 20:58:18 raise ValueError( 20:58:18 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:18 f"or a single float to set both timeouts to the same value." 20:58:18 ) 20:58:18 elif isinstance(timeout, TimeoutSauce): 20:58:18 pass 20:58:18 else: 20:58:18 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:18 20:58:18 try: 20:58:18 resp = conn.urlopen( 20:58:18 method=request.method, 20:58:18 url=url, 20:58:18 body=request.body, 20:58:18 headers=request.headers, 20:58:18 redirect=False, 20:58:18 assert_same_host=False, 20:58:18 preload_content=False, 20:58:18 decode_content=False, 20:58:18 retries=self.max_retries, 20:58:18 timeout=timeout, 20:58:18 chunked=chunked, 20:58:18 ) 20:58:18 20:58:18 except (ProtocolError, OSError) as err: 20:58:18 raise ConnectionError(err, request=request) 20:58:18 20:58:18 except MaxRetryError as e: 20:58:18 if isinstance(e.reason, ConnectTimeoutError): 20:58:18 # TODO: Remove this in 3.0.0: see #2811 20:58:18 if not isinstance(e.reason, NewConnectionError): 20:58:18 raise ConnectTimeout(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, ResponseError): 20:58:18 raise RetryError(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, _ProxyError): 20:58:18 raise ProxyError(e, request=request) 20:58:18 20:58:18 if isinstance(e.reason, _SSLError): 20:58:18 # This branch is for urllib3 v1.22 and later. 
20:58:18 raise SSLError(e, request=request) 20:58:18 20:58:18 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_05_connect_xpdrA_to_roadmA 20:58:19 _____________ TransportOlmTesting.test_06_connect_roadmA_to_xpdrA ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 
20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 20:58:19 body = '{"input": {"links-input": {"xpdr-node": "XPDRA01", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADMA01", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-rdm-xpdr-links', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. 
If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 
20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 
20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 
20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 20:58:19 20:58:19 :return: A new ``Retry`` object. 
20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_06_connect_roadmA_to_xpdrA(self): 20:58:19 > response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-networkutils', 'init-rdm-xpdr-links', 20:58:19 {'links-input': {'xpdr-node': 'XPDRA01', 'xpdr-num': '1', 'network-num': '1', 20:58:19 'rdm-node': 'ROADMA01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:75: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 
transportpce_tests/common/test_utils.py:685: in transportpce_api_rpc_request 20:58:19 response = post_request(url, data) 20:58:19 transportpce_tests/common/test_utils.py:142: in post_request 20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_06_connect_roadmA_to_xpdrA 20:58:19 _____________ TransportOlmTesting.test_07_connect_xpdrC_to_roadmC ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. 
If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 20:58:19 body = '{"input": {"links-input": {"xpdr-node": "XPDRC01", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADMC01", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-xpdr-rdm-links', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None 
= None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 
20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 
20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 20:58:19 20:58:19 :return: A new ``Retry`` object. 20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 
20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_07_connect_xpdrC_to_roadmC(self): 20:58:19 > response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-networkutils', 'init-xpdr-rdm-links', 20:58:19 {'links-input': {'xpdr-node': 'XPDRC01', 'xpdr-num': '1', 'network-num': '1', 20:58:19 'rdm-node': 'ROADMC01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:82: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 transportpce_tests/common/test_utils.py:685: in transportpce_api_rpc_request 20:58:19 response = post_request(url, data) 20:58:19 transportpce_tests/common/test_utils.py:142: in post_request 20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 
20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 
20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_07_connect_xpdrC_to_roadmC 20:58:19 _____________ TransportOlmTesting.test_08_connect_roadmC_to_xpdrC ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 
20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 20:58:19 body = '{"input": {"links-input": {"xpdr-node": "XPDRC01", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADMC01", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-rdm-xpdr-links', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. 
If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 
20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 
20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 
20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 20:58:19 20:58:19 :return: A new ``Retry`` object. 
20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_08_connect_roadmC_to_xpdrC(self): 20:58:19 > response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-networkutils', 'init-rdm-xpdr-links', 20:58:19 {'links-input': {'xpdr-node': 'XPDRC01', 'xpdr-num': '1', 'network-num': '1', 20:58:19 'rdm-node': 'ROADMC01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:89: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 
transportpce_tests/common/test_utils.py:685: in transportpce_api_rpc_request 20:58:19 response = post_request(url, data) 20:58:19 transportpce_tests/common/test_utils.py:142: in post_request 20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_08_connect_roadmC_to_xpdrC 20:58:19 ________________ TransportOlmTesting.test_09_create_OTS_ROADMA _________________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. 
If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-device-renderer:create-ots-oms' 20:58:19 body = '{"input": {"node-id": "ROADMA01", "logical-connection-point": "DEG1-TTP-TXRX"}}' 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '79', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-device-renderer:create-ots-oms', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 
**response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 
20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 
20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-device-renderer:create-ots-oms' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 20:58:19 20:58:19 :return: A new ``Retry`` object. 20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 
20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-device-renderer:create-ots-oms (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_09_create_OTS_ROADMA(self): 20:58:19 > response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'create-ots-oms', 20:58:19 { 20:58:19 'node-id': 'ROADMA01', 20:58:19 'logical-connection-point': 'DEG1-TTP-TXRX' 20:58:19 }) 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:96: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 transportpce_tests/common/test_utils.py:685: in transportpce_api_rpc_request 20:58:19 response = post_request(url, data) 20:58:19 transportpce_tests/common/test_utils.py:142: in post_request 20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 
20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 
20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-device-renderer:create-ots-oms (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_09_create_OTS_ROADMA 20:58:19 ________________ TransportOlmTesting.test_10_create_OTS_ROADMC _________________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 
20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-device-renderer:create-ots-oms' 20:58:19 body = '{"input": {"node-id": "ROADMC01", "logical-connection-point": "DEG2-TTP-TXRX"}}' 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '79', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-device-renderer:create-ots-oms', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. If None, pool headers are used. 
If provided, 20:58:19 these headers completely replace any pool-specific headers. 20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 
20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 
20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 
20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-device-renderer:create-ots-oms' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 20:58:19 20:58:19 :return: A new ``Retry`` object. 
20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-device-renderer:create-ots-oms (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_10_create_OTS_ROADMC(self): 20:58:19 > response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'create-ots-oms', 20:58:19 { 20:58:19 'node-id': 'ROADMC01', 20:58:19 'logical-connection-point': 'DEG2-TTP-TXRX' 20:58:19 }) 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:105: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 transportpce_tests/common/test_utils.py:685: in transportpce_api_rpc_request 20:58:19 response = post_request(url, data) 
20:58:19 transportpce_tests/common/test_utils.py:142: in post_request 20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-device-renderer:create-ots-oms (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_10_create_OTS_ROADMC 20:58:19 __________________ TransportOlmTesting.test_11_get_PM_ROADMA ___________________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_11_get_PM_ROADMA(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'get-pm', 20:58:19 { 20:58:19 'node-id': 'ROADMA01', 20:58:19 'resource-type': 'interface', 20:58:19 'granularity': '15min', 20:58:19 'resource-identifier': { 20:58:19 'resource-name': 'OTS-DEG1-TTP-TXRX' 20:58:19 } 20:58:19 }) 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 204 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:124: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_11_get_PM_ROADMA 20:58:19 __________________ TransportOlmTesting.test_12_get_PM_ROADMC ___________________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_12_get_PM_ROADMC(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'get-pm', 20:58:19 { 20:58:19 'node-id': 'ROADMC01', 20:58:19 'resource-type': 'interface', 20:58:19 'granularity': '15min', 20:58:19 'resource-identifier': { 20:58:19 'resource-name': 'OTS-DEG2-TTP-TXRX' 20:58:19 } 20:58:19 }) 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 204 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:147: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_12_get_PM_ROADMC 20:58:19 ______ 
TransportOlmTesting.test_13_calculate_span_loss_base_ROADMA_ROADMC ______ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_13_calculate_span_loss_base_ROADMA_ROADMC(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'calculate-spanloss-base', 20:58:19 { 20:58:19 'src-type': 'link', 20:58:19 'link-id': 'ROADMA01-DEG1-DEG1-TTP-TXRXtoROADMC01-DEG2-DEG2-TTP-TXRX' 20:58:19 }) 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 500 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:166: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_13_calculate_span_loss_base_ROADMA_ROADMC 20:58:19 ___________ TransportOlmTesting.test_14_calculate_span_loss_base_all ___________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_14_calculate_span_loss_base_all(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'calculate-spanloss-base', 20:58:19 { 20:58:19 'src-type': 'all' 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Success', 20:58:19 response['output']['result']) 20:58:19 E AssertionError: 'Success' not found in 'Failed' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:182: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_14_calculate_span_loss_base_all 20:58:19 ___________ TransportOlmTesting.test_15_get_OTS_DEG1_TTP_TXRX_ROADMA ___________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_15_get_OTS_DEG1_TTP_TXRX_ROADMA(self): 20:58:19 response = test_utils.check_node_attribute2_request( 20:58:19 'ROADMA01', 'interface', 'OTS-DEG1-TTP-TXRX', 'org-openroadm-optical-transport-interfaces:ots') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:197: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_15_get_OTS_DEG1_TTP_TXRX_ROADMA 20:58:19 ___________ TransportOlmTesting.test_16_get_OTS_DEG2_TTP_TXRX_ROADMC ___________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_16_get_OTS_DEG2_TTP_TXRX_ROADMC(self): 20:58:19 response = test_utils.check_node_attribute2_request( 20:58:19 'ROADMC01', 'interface', 'OTS-DEG2-TTP-TXRX', 'org-openroadm-optical-transport-interfaces:ots') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:208: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_16_get_OTS_DEG2_TTP_TXRX_ROADMC 20:58:19 _____________ TransportOlmTesting.test_17_servicePath_create_AToZ ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_17_servicePath_create_AToZ(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'service-path', 20:58:19 { 20:58:19 'service-name': 'test', 20:58:19 'wave-number': '1', 20:58:19 'modulation-format': 'dp-qpsk', 20:58:19 'operation': 'create', 20:58:19 'nodes': 20:58:19 [{'node-id': 'XPDRA01', 20:58:19 'dest-tp': 'XPDR1-NETWORK1', 'src-tp': 'XPDR1-CLIENT1'}, 20:58:19 {'node-id': 'ROADMA01', 20:58:19 'dest-tp': 'DEG1-TTP-TXRX', 'src-tp': 
'SRG1-PP1-TXRX'}, 20:58:19 {'node-id': 'ROADMC01', 20:58:19 'dest-tp': 'SRG1-PP1-TXRX', 'src-tp': 'DEG2-TTP-TXRX'}, 20:58:19 {'node-id': 'XPDRC01', 20:58:19 'dest-tp': 'XPDR1-CLIENT1', 'src-tp': 'XPDR1-NETWORK1'}], 20:58:19 'center-freq': 196.1, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.075, 20:58:19 'max-freq': 196.125, 20:58:19 'lower-spectral-slot-number': 761, 20:58:19 'higher-spectral-slot-number': 768 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Interfaces created successfully for nodes: ', response['output']['result']) 20:58:19 E AssertionError: 'Interfaces created successfully for nodes: ' not found in 'ROADMC01 is not mounted on the controller\nXPDRC01 is not mounted on the controller\nROADMA01 is not mounted on the controller\nXPDRA01 is not mounted on the controller' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:237: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_17_servicePath_create_AToZ 20:58:19 _____________ TransportOlmTesting.test_18_servicePath_create_ZToA ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_18_servicePath_create_ZToA(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'service-path', 20:58:19 { 20:58:19 'service-name': 'test', 20:58:19 'wave-number': '1', 20:58:19 'modulation-format': 'dp-qpsk', 20:58:19 'operation': 'create', 20:58:19 'nodes': 20:58:19 [{'node-id': 'XPDRC01', 20:58:19 'dest-tp': 'XPDR1-NETWORK1', 'src-tp': 'XPDR1-CLIENT1'}, 20:58:19 {'node-id': 'ROADMC01', 20:58:19 'dest-tp': 'DEG2-TTP-TXRX', 'src-tp': 'SRG1-PP1-TXRX'}, 20:58:19 {'node-id': 'ROADMA01', 20:58:19 'src-tp': 'DEG1-TTP-TXRX', 'dest-tp': 'SRG1-PP1-TXRX'}, 20:58:19 {'node-id': 'XPDRA01', 20:58:19 'src-tp': 'XPDR1-NETWORK1', 'dest-tp': 'XPDR1-CLIENT1'}], 20:58:19 'center-freq': 196.1, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.075, 20:58:19 'max-freq': 196.125, 20:58:19 'lower-spectral-slot-number': 761, 20:58:19 'higher-spectral-slot-number': 768 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Interfaces created successfully for nodes: ', response['output']['result']) 20:58:19 E AssertionError: 'Interfaces created successfully for nodes: ' not found in 'ROADMA01 is not mounted on the controller\nXPDRC01 is not mounted on the controller\nXPDRA01 is not mounted on the controller\nROADMC01 is not mounted on the controller' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:265: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_18_servicePath_create_ZToA 20:58:19 _________ TransportOlmTesting.test_19_service_power_setup_XPDRA_XPDRC __________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_19_service_power_setup_XPDRA_XPDRC(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'service-power-setup', 20:58:19 { 20:58:19 'service-name': 'test', 20:58:19 'wave-number': 1, 20:58:19 'nodes': [ 20:58:19 { 20:58:19 'dest-tp': 'XPDR1-NETWORK1', 20:58:19 'src-tp': 'XPDR1-CLIENT1', 20:58:19 'node-id': 'XPDRA01' 20:58:19 }, 20:58:19 { 20:58:19 'dest-tp': 'DEG1-TTP-TXRX', 20:58:19 'src-tp': 'SRG1-PP1-TXRX', 20:58:19 'node-id': 'ROADMA01' 20:58:19 }, 20:58:19 { 20:58:19 'dest-tp': 'SRG1-PP1-TXRX', 20:58:19 'src-tp': 'DEG2-TTP-TXRX', 20:58:19 'node-id': 
'ROADMC01' 20:58:19 }, 20:58:19 { 20:58:19 'dest-tp': 'XPDR1-CLIENT1', 20:58:19 'src-tp': 'XPDR1-NETWORK1', 20:58:19 'node-id': 'XPDRC01' 20:58:19 } 20:58:19 ], 20:58:19 'center-freq': 196.1, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.075, 20:58:19 'max-freq': 196.125, 20:58:19 'lower-spectral-slot-number': 761, 20:58:19 'higher-spectral-slot-number': 768 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Success', response['output']['result']) 20:58:19 E AssertionError: 'Success' not found in 'Failed' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:304: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_19_service_power_setup_XPDRA_XPDRC 20:58:19 ________ TransportOlmTesting.test_20_get_interface_XPDRA_XPDR1_NETWORK1 ________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_20_get_interface_XPDRA_XPDR1_NETWORK1(self): 20:58:19 response = test_utils.check_node_attribute2_request( 20:58:19 'XPDRA01', 'interface', 'XPDR1-NETWORK1-761:768', 'org-openroadm-optical-channel-interfaces:och') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:309: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_20_get_interface_XPDRA_XPDR1_NETWORK1 20:58:19 ____________ TransportOlmTesting.test_21_get_roadmconnection_ROADMA ____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_21_get_roadmconnection_ROADMA(self): 20:58:19 response = test_utils.check_node_attribute_request( 20:58:19 'ROADMA01', 'roadm-connections', 'SRG1-PP1-TXRX-DEG1-TTP-TXRX-761:768') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:316: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_21_get_roadmconnection_ROADMA 20:58:19 ____________ TransportOlmTesting.test_22_get_roadmconnection_ROADMC ____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_22_get_roadmconnection_ROADMC(self): 20:58:19 response = test_utils.check_node_attribute_request( 20:58:19 'ROADMC01', 'roadm-connections', 'DEG2-TTP-TXRX-SRG1-PP1-TXRX-761:768') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:323: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_22_get_roadmconnection_ROADMC 20:58:19 _________ TransportOlmTesting.test_23_service_power_setup_XPDRC_XPDRA __________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_23_service_power_setup_XPDRC_XPDRA(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'service-power-setup', 20:58:19 { 20:58:19 'service-name': 'test', 20:58:19 'wave-number': 1, 20:58:19 'nodes': [ 20:58:19 { 20:58:19 'dest-tp': 'XPDR1-NETWORK1', 20:58:19 'src-tp': 'XPDR1-CLIENT1', 20:58:19 'node-id': 'XPDRC01' 20:58:19 }, 20:58:19 { 20:58:19 'dest-tp': 'DEG2-TTP-TXRX', 20:58:19 'src-tp': 'SRG1-PP1-TXRX', 20:58:19 'node-id': 'ROADMC01' 20:58:19 }, 20:58:19 { 20:58:19 'src-tp': 'DEG1-TTP-TXRX', 20:58:19 'dest-tp': 'SRG1-PP1-TXRX', 20:58:19 
'node-id': 'ROADMA01' 20:58:19 }, 20:58:19 { 20:58:19 'src-tp': 'XPDR1-NETWORK1', 20:58:19 'dest-tp': 'XPDR1-CLIENT1', 20:58:19 'node-id': 'XPDRA01' 20:58:19 } 20:58:19 ], 20:58:19 'center-freq': 196.1, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.075, 20:58:19 'max-freq': 196.125, 20:58:19 'lower-spectral-slot-number': 761, 20:58:19 'higher-spectral-slot-number': 768 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Success', response['output']['result']) 20:58:19 E AssertionError: 'Success' not found in 'Failed' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:362: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_23_service_power_setup_XPDRC_XPDRA 20:58:19 ________ TransportOlmTesting.test_24_get_interface_XPDRC_XPDR1_NETWORK1 ________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_24_get_interface_XPDRC_XPDR1_NETWORK1(self): 20:58:19 response = test_utils.check_node_attribute2_request( 20:58:19 'XPDRC01', 'interface', 'XPDR1-NETWORK1-761:768', 'org-openroadm-optical-channel-interfaces:och') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:367: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_24_get_interface_XPDRC_XPDR1_NETWORK1 20:58:19 ____________ TransportOlmTesting.test_25_get_roadmconnection_ROADMC ____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_25_get_roadmconnection_ROADMC(self): 20:58:19 response = test_utils.check_node_attribute_request( 20:58:19 'ROADMC01', 'roadm-connections', 'SRG1-PP1-TXRX-DEG2-TTP-TXRX-761:768') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:374: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_25_get_roadmconnection_ROADMC 20:58:19 ________ TransportOlmTesting.test_26_service_power_turndown_XPDRA_XPDRC ________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_26_service_power_turndown_XPDRA_XPDRC(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'service-power-turndown', 20:58:19 { 20:58:19 'service-name': 'test', 20:58:19 'wave-number': 1, 20:58:19 'nodes': [ 20:58:19 { 20:58:19 'dest-tp': 'XPDR1-NETWORK1', 20:58:19 'src-tp': 'XPDR1-CLIENT1', 20:58:19 'node-id': 'XPDRA01' 20:58:19 }, 20:58:19 { 20:58:19 'dest-tp': 'DEG1-TTP-TXRX', 20:58:19 'src-tp': 'SRG1-PP1-TXRX', 20:58:19 'node-id': 'ROADMA01' 20:58:19 }, 20:58:19 { 20:58:19 'dest-tp': 'SRG1-PP1-TXRX', 20:58:19 'src-tp': 'DEG2-TTP-TXRX', 20:58:19 'node-id': 'ROADMC01' 20:58:19 }, 20:58:19 { 20:58:19 'dest-tp': 'XPDR1-CLIENT1', 20:58:19 'src-tp': 'XPDR1-NETWORK1', 20:58:19 'node-id': 'XPDRC01' 20:58:19 } 20:58:19 ], 20:58:19 'center-freq': 196.1, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.075, 20:58:19 'max-freq': 196.125, 20:58:19 'lower-spectral-slot-number': 761, 20:58:19 'higher-spectral-slot-number': 768 20:58:19 }) 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 500 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:413: AssertionError 20:58:19 ----------------------------- Captured stdout call 
----------------------------- 20:58:19 execution of test_26_service_power_turndown_XPDRA_XPDRC 20:58:19 ____________ TransportOlmTesting.test_27_get_roadmconnection_ROADMA ____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_27_get_roadmconnection_ROADMA(self): 20:58:19 response = test_utils.check_node_attribute_request( 20:58:19 'ROADMA01', 'roadm-connections', 'SRG1-PP1-TXRX-DEG1-TTP-TXRX-761:768') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:419: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_27_get_roadmconnection_ROADMA 20:58:19 ____________ TransportOlmTesting.test_28_get_roadmconnection_ROADMC ____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_28_get_roadmconnection_ROADMC(self): 20:58:19 response = test_utils.check_node_attribute_request( 20:58:19 'ROADMC01', 'roadm-connections', 'DEG2-TTP-TXRX-SRG1-PP1-TXRX-761:768') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:426: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_28_get_roadmconnection_ROADMC 20:58:19 _____________ TransportOlmTesting.test_29_servicePath_delete_AToZ ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_29_servicePath_delete_AToZ(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'service-path', 20:58:19 { 20:58:19 'service-name': 'test', 20:58:19 'wave-number': '1', 20:58:19 'modulation-format': 'dp-qpsk', 20:58:19 'operation': 'delete', 20:58:19 'nodes': 20:58:19 [{'node-id': 'XPDRA01', 20:58:19 'dest-tp': 'XPDR1-NETWORK1', 'src-tp': 'XPDR1-CLIENT1'}, 20:58:19 {'node-id': 'ROADMA01', 20:58:19 'dest-tp': 'DEG1-TTP-TXRX', 'src-tp': 'SRG1-PP1-TXRX'}, 20:58:19 {'node-id': 'ROADMC01', 20:58:19 'dest-tp': 'SRG1-PP1-TXRX', 'src-tp': 'DEG2-TTP-TXRX'}, 20:58:19 {'node-id': 'XPDRC01', 20:58:19 'dest-tp': 'XPDR1-CLIENT1', 'src-tp': 'XPDR1-NETWORK1'}], 20:58:19 'center-freq': 196.1, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.075, 20:58:19 'max-freq': 196.125, 20:58:19 'lower-spectral-slot-number': 761, 20:58:19 'higher-spectral-slot-number': 768 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Request processed', response['output']['result']) 20:58:19 E AssertionError: 'Request processed' not found in 'ROADMC01 is not mounted on the controller\nXPDRC01 is not mounted on the controller\nXPDRA01 is not mounted on the controller\nROADMA01 is not mounted on the controller' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:454: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_29_servicePath_delete_AToZ 20:58:19 _____________ TransportOlmTesting.test_30_servicePath_delete_ZToA ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_30_servicePath_delete_ZToA(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'service-path', 20:58:19 { 20:58:19 'service-name': 'test', 20:58:19 'wave-number': '1', 20:58:19 'modulation-format': 'dp-qpsk', 20:58:19 'operation': 'delete', 20:58:19 'nodes': 20:58:19 [{'node-id': 
'XPDRC01', 20:58:19 'dest-tp': 'XPDR1-NETWORK1', 'src-tp': 'XPDR1-CLIENT1'}, 20:58:19 {'node-id': 'ROADMC01', 20:58:19 'dest-tp': 'DEG2-TTP-TXRX', 'src-tp': 'SRG1-PP1-TXRX'}, 20:58:19 {'node-id': 'ROADMA01', 20:58:19 'src-tp': 'DEG1-TTP-TXRX', 'dest-tp': 'SRG1-PP1-TXRX'}, 20:58:19 {'node-id': 'XPDRA01', 20:58:19 'src-tp': 'XPDR1-NETWORK1', 'dest-tp': 'XPDR1-CLIENT1'}], 20:58:19 'center-freq': 196.1, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.075, 20:58:19 'max-freq': 196.125, 20:58:19 'lower-spectral-slot-number': 761, 20:58:19 'higher-spectral-slot-number': 768 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Request processed', response['output']['result']) 20:58:19 E AssertionError: 'Request processed' not found in 'XPDRA01 is not mounted on the controller\nXPDRC01 is not mounted on the controller\nROADMA01 is not mounted on the controller\nROADMC01 is not mounted on the controller' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:482: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_30_servicePath_delete_ZToA 20:58:19 _____________ TransportOlmTesting.test_31_connect_xpdrA_to_roadmA ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_31_connect_xpdrA_to_roadmA(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-networkutils', 'init-xpdr-rdm-links', 20:58:19 {'links-input': {'xpdr-node': 'XPDRA01', 'xpdr-num': '1', 'network-num': '2', 20:58:19 'rdm-node': 'ROADMA01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP2-TXRX'}}) 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 204 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:492: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_31_connect_xpdrA_to_roadmA 20:58:19 _____________ TransportOlmTesting.test_32_connect_roadmA_to_xpdrA ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_32_connect_roadmA_to_xpdrA(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-networkutils', 'init-rdm-xpdr-links', 20:58:19 {'links-input': {'xpdr-node': 'XPDRA01', 'xpdr-num': '1', 'network-num': '2', 20:58:19 'rdm-node': 'ROADMA01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP2-TXRX'}}) 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 204 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:499: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_32_connect_roadmA_to_xpdrA 20:58:19 _____________ TransportOlmTesting.test_33_servicePath_create_AToZ ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_33_servicePath_create_AToZ(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'service-path', 20:58:19 { 20:58:19 'service-name': 'test2', 20:58:19 'wave-number': '2', 20:58:19 'modulation-format': 'dp-qpsk', 20:58:19 'operation': 'create', 20:58:19 'nodes': 20:58:19 [{'node-id': 'XPDRA01', 20:58:19 'dest-tp': 'XPDR1-NETWORK2', 'src-tp': 'XPDR1-CLIENT2'}, 20:58:19 {'node-id': 'ROADMA01', 20:58:19 'dest-tp': 'DEG1-TTP-TXRX', 'src-tp': 'SRG1-PP2-TXRX'}], 20:58:19 'center-freq': 196.05, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.025, 
20:58:19 'max-freq': 196.075, 20:58:19 'lower-spectral-slot-number': 753, 20:58:19 'higher-spectral-slot-number': 760 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Interfaces created successfully for nodes', response['output']['result']) 20:58:19 E AssertionError: 'Interfaces created successfully for nodes' not found in 'ROADMA01 is not mounted on the controller\nXPDRA01 is not mounted on the controller' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:522: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_33_servicePath_create_AToZ 20:58:19 ________ TransportOlmTesting.test_34_get_interface_XPDRA_XPDR1_NETWORK2 ________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_34_get_interface_XPDRA_XPDR1_NETWORK2(self): 20:58:19 response = test_utils.check_node_attribute2_request( 20:58:19 'XPDRA01', 'interface', 'XPDR1-NETWORK2-753:760', 'org-openroadm-optical-channel-interfaces:och') 20:58:19 > self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 E AssertionError: 503 != 200 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:528: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_34_get_interface_XPDRA_XPDR1_NETWORK2 20:58:19 _____________ TransportOlmTesting.test_35_servicePath_delete_AToZ ______________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_35_servicePath_delete_AToZ(self): 20:58:19 response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-device-renderer', 'service-path', 20:58:19 { 20:58:19 'service-name': 'test2', 20:58:19 'wave-number': '2', 20:58:19 'modulation-format': 'dp-qpsk', 20:58:19 'operation': 'delete', 20:58:19 'nodes': 20:58:19 [{'node-id': 'XPDRA01', 20:58:19 'dest-tp': 'XPDR1-NETWORK2', 'src-tp': 'XPDR1-CLIENT2'}, 20:58:19 {'node-id': 'ROADMA01', 20:58:19 'dest-tp': 'DEG1-TTP-TXRX', 'src-tp': 'SRG1-PP2-TXRX'}], 20:58:19 'center-freq': 196.05, 20:58:19 'nmc-width': 40, 20:58:19 'min-freq': 196.025, 20:58:19 'max-freq': 196.075, 20:58:19 'lower-spectral-slot-number': 753, 20:58:19 'higher-spectral-slot-number': 760 20:58:19 }) 20:58:19 self.assertEqual(response['status_code'], requests.codes.ok) 20:58:19 > self.assertIn('Request processed', response['output']['result']) 20:58:19 E AssertionError: 'Request processed' not found in 'ROADMA01 is not mounted on the controller\nXPDRA01 is not mounted on the controller' 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:553: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_35_servicePath_delete_AToZ 20:58:19 ____________ TransportOlmTesting.test_36_xpdrA_device_disconnected _____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_36_xpdrA_device_disconnected(self): 20:58:19 response = test_utils.unmount_device("XPDRA01") 20:58:19 > self.assertIn(response.status_code, (requests.codes.ok, requests.codes.no_content)) 20:58:19 E AssertionError: 409 not found in (200, 204) 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:558: AssertionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_36_xpdrA_device_disconnected 20:58:19 Searching for patterns in karaf.log... Pattern not found after 180 seconds! Node XPDRA01 still not deleted from tpce topology... 
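[Editor's note, not part of the captured console output] From test_36 onward every request is answered with 409/503 or refused outright, and the traceback for test_37 below shows the RESTCONF endpoint on localhost:8182 no longer accepting connections, so the remaining failures look like a controller outage cascading through the suite rather than separate OLM regressions. A minimal reachability probe along the following lines could help separate the two cases before asserting on unmount_device responses. This is a sketch only: the helper name and the probe URL are illustrative, while the host, port, and admin/admin basic-auth credentials are taken from the request headers captured in the traceback.

# Editor's sketch (hypothetical helper, not part of test_utils):
# probe the TransportPCE RESTCONF endpoint before running teardown assertions.
import requests

RESTCONF_BASE = "http://localhost:8182/rests"   # host/port as shown in the traceback below
AUTH = ("admin", "admin")                       # matches 'Basic YWRtaW46YWRtaW4=' in the captured headers

def controller_reachable(timeout: float = 10.0) -> bool:
    """Return False if the controller refuses connections, True if RESTCONF answers at all."""
    # Probe URL is illustrative; any lightweight RESTCONF resource would do.
    url = f"{RESTCONF_BASE}/data/network-topology:network-topology"
    try:
        response = requests.get(url, auth=AUTH, timeout=timeout)
    except requests.exceptions.ConnectionError:
        # Same failure mode as the 'Connection refused' errors in test_37/test_38 below.
        return False
    return response.status_code in (requests.codes.ok, requests.codes.no_content)

Tests 37 and 38 below both fail with "Connection refused" on port 8182 before any assertion runs, which would be consistent with the controller process no longer listening; a guard like the one above would let the suite report that condition once instead of as a chain of unrelated assertion errors.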
20:58:19 ____________ TransportOlmTesting.test_37_xpdrC_device_disconnected _____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 
20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'DELETE' 20:58:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRC01' 20:58:19 body = None 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRC01', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 
20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 
20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 
20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 
20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'DELETE' 20:58:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRC01' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 
20:58:19 20:58:19 :return: A new ``Retry`` object. 20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_37_xpdrC_device_disconnected(self): 20:58:19 > response = test_utils.unmount_device("XPDRC01") 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:561: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 transportpce_tests/common/test_utils.py:358: in unmount_device 20:58:19 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 20:58:19 transportpce_tests/common/test_utils.py:133: in delete_request 
20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_37_xpdrC_device_disconnected 20:58:19 ___________ TransportOlmTesting.test_38_calculate_span_loss_current ____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. 
If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-olm:calculate-spanloss-current' 20:58:19 body = None 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-olm:calculate-spanloss-current', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 
Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). 
This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 
20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'POST' 20:58:19 url = '/rests/operations/transportpce-olm:calculate-spanloss-current' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 20:58:19 20:58:19 :return: A new ``Retry`` object. 20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 
20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-olm:calculate-spanloss-current (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_38_calculate_span_loss_current(self): 20:58:19 > response = test_utils.transportpce_api_rpc_request( 20:58:19 'transportpce-olm', 'calculate-spanloss-current', 20:58:19 None) 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:565: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 transportpce_tests/common/test_utils.py:685: in transportpce_api_rpc_request 20:58:19 response = post_request(url, data) 20:58:19 transportpce_tests/common/test_utils.py:148: in post_request 20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 
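[Editor's sketch, not part of the job output] On the Retry arithmetic in increment() above: with Retry(total=0, connect=None, read=False), which is what the requests adapter passes here via retries=self.max_retries, a single connection error exhausts the budget and increment() raises MaxRetryError wrapping the original NewConnectionError. A standalone illustration, assuming urllib3 v2; conn=None is only a stand-in for the demo:

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError, NewConnectionError

retry = Retry(total=0, connect=None, read=False)
try:
    # Simulate the connect failure recorded above.
    retry.increment(
        method="POST",
        url="/rests/operations/transportpce-olm:calculate-spanloss-current",
        error=NewConnectionError(None, "Failed to establish a new connection"),
    )
except MaxRetryError as exc:
    print(exc.reason)  # the NewConnectionError that triggered the exhaustion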
20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 
20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/operations/transportpce-olm:calculate-spanloss-current (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_38_calculate_span_loss_current 20:58:19 _____________ TransportOlmTesting.test_39_rdmA_device_disconnected _____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 
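[Editor's sketch, not part of the job output] At the requests level, the MaxRetryError above is re-raised as requests.exceptions.ConnectionError, which is what test_38 ultimately reports. A minimal reproduction of that path, illustrative only; it assumes the RESTCONF endpoint on localhost:8182 is down, as in this run, and uses the admin/admin credentials implied by the logged Authorization header:

import requests

try:
    requests.request(
        "POST",
        "http://localhost:8182/rests/operations/transportpce-olm:calculate-spanloss-current",
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),
        timeout=(10, 10),  # (connect, read), matching Timeout(connect=10, read=10) in the log
    )
except requests.exceptions.ConnectionError as exc:
    print("RESTCONF endpoint unreachable:", exc)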
20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'DELETE' 20:58:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 20:58:19 body = None 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 
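[Editor's sketch, not part of the job output] The root cause is visible at the socket layer in create_connection() above: sock.connect() to ('localhost', 8182) fails with ECONNREFUSED because no controller is listening, and the helper re-raises that error after exhausting the getaddrinfo() results. A direct probe using the same urllib3 helper shown in the traceback (illustration only):

from urllib3.util.connection import create_connection

try:
    sock = create_connection(("localhost", 8182), timeout=10)
    sock.close()
    print("controller is listening")
except ConnectionRefusedError as exc:  # subclass of OSError, errno 111 on Linux
    print("controller not reachable:", exc)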
20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 
20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 
20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 
20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'DELETE' 20:58:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 
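[Editor's sketch, not part of the job output] On the timeout handling in send() above: a (connect, read) tuple is converted into a urllib3 Timeout object (imported by requests as TimeoutSauce). A small sketch of the equivalent object, matching the Timeout(connect=10, read=10, total=None) printed throughout these tracebacks:

from urllib3.util.timeout import Timeout

timeout = Timeout(connect=10, read=10)  # what requests builds from timeout=(10, 10)
print(timeout.connect_timeout, timeout.read_timeout, timeout.total)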
20:58:19 20:58:19 :return: A new ``Retry`` object. 20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_39_rdmA_device_disconnected(self): 20:58:19 > response = test_utils.unmount_device("ROADMA01") 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:574: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 transportpce_tests/common/test_utils.py:358: in unmount_device 20:58:19 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 20:58:19 transportpce_tests/common/test_utils.py:133: in delete_request 
20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_39_rdmA_device_disconnected 20:58:19 _____________ TransportOlmTesting.test_40_rdmC_device_disconnected _____________ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 > sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 20:58:19 raise err 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 address = ('localhost', 8182), timeout = 10, source_address = None 20:58:19 socket_options = [(6, 1, 1)] 20:58:19 20:58:19 def create_connection( 20:58:19 address: tuple[str, int], 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 source_address: tuple[str, int] | None = None, 20:58:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 20:58:19 ) -> socket.socket: 20:58:19 """Connect to *address* and return the socket object. 20:58:19 20:58:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 20:58:19 port)``) and return the socket object. Passing the optional 20:58:19 *timeout* parameter will set the timeout on the socket instance 20:58:19 before attempting to connect. 
If no *timeout* is supplied, the 20:58:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 20:58:19 is used. If *source_address* is set it must be a tuple of (host, port) 20:58:19 for the socket to bind as a source address before making the connection. 20:58:19 An host of '' or port 0 tells the OS to use the default. 20:58:19 """ 20:58:19 20:58:19 host, port = address 20:58:19 if host.startswith("["): 20:58:19 host = host.strip("[]") 20:58:19 err = None 20:58:19 20:58:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 20:58:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 20:58:19 # The original create_connection function always returns all records. 20:58:19 family = allowed_gai_family() 20:58:19 20:58:19 try: 20:58:19 host.encode("idna") 20:58:19 except UnicodeError: 20:58:19 raise LocationParseError(f"'{host}', label empty or too long") from None 20:58:19 20:58:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 20:58:19 af, socktype, proto, canonname, sa = res 20:58:19 sock = None 20:58:19 try: 20:58:19 sock = socket.socket(af, socktype, proto) 20:58:19 20:58:19 # If provided, set socket level options before connecting. 20:58:19 _set_socket_options(sock, socket_options) 20:58:19 20:58:19 if timeout is not _DEFAULT_TIMEOUT: 20:58:19 sock.settimeout(timeout) 20:58:19 if source_address: 20:58:19 sock.bind(source_address) 20:58:19 > sock.connect(sa) 20:58:19 E ConnectionRefusedError: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 method = 'DELETE' 20:58:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01' 20:58:19 body = None 20:58:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 20:58:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 redirect = False, assert_same_host = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 20:58:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 20:58:19 decode_content = False, response_kw = {} 20:58:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01', query=None, fragment=None) 20:58:19 destination_scheme = None, conn = None, release_this_conn = True 20:58:19 http_tunnel_required = False, err = None, clean_exit = False 20:58:19 20:58:19 def urlopen( # type: ignore[override] 20:58:19 self, 20:58:19 method: str, 20:58:19 url: str, 20:58:19 body: _TYPE_BODY | None = None, 20:58:19 headers: typing.Mapping[str, str] | None = None, 20:58:19 retries: Retry | bool | int | None = None, 20:58:19 redirect: bool = True, 20:58:19 assert_same_host: bool = True, 20:58:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 20:58:19 pool_timeout: int | None = None, 20:58:19 release_conn: bool | None = None, 20:58:19 chunked: bool = False, 20:58:19 body_pos: _TYPE_BODY_POSITION | None = None, 20:58:19 preload_content: bool = True, 20:58:19 decode_content: bool = True, 20:58:19 **response_kw: typing.Any, 
20:58:19 ) -> BaseHTTPResponse: 20:58:19 """ 20:58:19 Get a connection from the pool and perform an HTTP request. This is the 20:58:19 lowest level call for making a request, so you'll need to specify all 20:58:19 the raw details. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 More commonly, it's appropriate to use a convenience method 20:58:19 such as :meth:`request`. 20:58:19 20:58:19 .. note:: 20:58:19 20:58:19 `release_conn` will only behave as expected if 20:58:19 `preload_content=False` because we want to make 20:58:19 `preload_content=False` the default behaviour someday soon without 20:58:19 breaking backwards compatibility. 20:58:19 20:58:19 :param method: 20:58:19 HTTP request method (such as GET, POST, PUT, etc.) 20:58:19 20:58:19 :param url: 20:58:19 The URL to perform the request on. 20:58:19 20:58:19 :param body: 20:58:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 20:58:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 20:58:19 20:58:19 :param headers: 20:58:19 Dictionary of custom headers to send, such as User-Agent, 20:58:19 If-None-Match, etc. If None, pool headers are used. If provided, 20:58:19 these headers completely replace any pool-specific headers. 20:58:19 20:58:19 :param retries: 20:58:19 Configure the number of retries to allow before raising a 20:58:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 20:58:19 20:58:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 20:58:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 20:58:19 over different types of retries. 20:58:19 Pass an integer number to retry connection errors that many times, 20:58:19 but no other types of errors. Pass zero to never retry. 20:58:19 20:58:19 If ``False``, then retries are disabled and any exception is raised 20:58:19 immediately. Also, instead of raising a MaxRetryError on redirects, 20:58:19 the redirect response will be returned. 20:58:19 20:58:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 20:58:19 20:58:19 :param redirect: 20:58:19 If True, automatically handle redirects (status codes 301, 302, 20:58:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 20:58:19 will disable redirect, too. 20:58:19 20:58:19 :param assert_same_host: 20:58:19 If ``True``, will make sure that the host of the pool requests is 20:58:19 consistent else will raise HostChangedError. When ``False``, you can 20:58:19 use the pool on an HTTP proxy and request foreign hosts. 20:58:19 20:58:19 :param timeout: 20:58:19 If specified, overrides the default timeout for this one 20:58:19 request. It may be a float (in seconds) or an instance of 20:58:19 :class:`urllib3.util.Timeout`. 20:58:19 20:58:19 :param pool_timeout: 20:58:19 If set and the pool is set to block=True, then this method will 20:58:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 20:58:19 connection is available within the time period. 20:58:19 20:58:19 :param bool preload_content: 20:58:19 If True, the response's body will be preloaded into memory. 20:58:19 20:58:19 :param bool decode_content: 20:58:19 If True, will attempt to decode the body based on the 20:58:19 'content-encoding' header. 20:58:19 20:58:19 :param release_conn: 20:58:19 If False, then the urlopen call will not release the connection 20:58:19 back into the pool once a response is received (but will release if 20:58:19 you read the entire contents of the response such as when 20:58:19 `preload_content=True`). 
This is useful if you're not preloading 20:58:19 the response's content immediately. You will need to call 20:58:19 ``r.release_conn()`` on the response ``r`` to return the connection 20:58:19 back into the pool. If None, it takes the value of ``preload_content`` 20:58:19 which defaults to ``True``. 20:58:19 20:58:19 :param bool chunked: 20:58:19 If True, urllib3 will send the body using chunked transfer 20:58:19 encoding. Otherwise, urllib3 will send the body using the standard 20:58:19 content-length form. Defaults to False. 20:58:19 20:58:19 :param int body_pos: 20:58:19 Position to seek to in file-like body in the event of a retry or 20:58:19 redirect. Typically this won't need to be set because urllib3 will 20:58:19 auto-populate the value when needed. 20:58:19 """ 20:58:19 parsed_url = parse_url(url) 20:58:19 destination_scheme = parsed_url.scheme 20:58:19 20:58:19 if headers is None: 20:58:19 headers = self.headers 20:58:19 20:58:19 if not isinstance(retries, Retry): 20:58:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 20:58:19 20:58:19 if release_conn is None: 20:58:19 release_conn = preload_content 20:58:19 20:58:19 # Check host 20:58:19 if assert_same_host and not self.is_same_host(url): 20:58:19 raise HostChangedError(self, url, retries) 20:58:19 20:58:19 # Ensure that the URL we're connecting to is properly encoded 20:58:19 if url.startswith("/"): 20:58:19 url = to_str(_encode_target(url)) 20:58:19 else: 20:58:19 url = to_str(parsed_url.url) 20:58:19 20:58:19 conn = None 20:58:19 20:58:19 # Track whether `conn` needs to be released before 20:58:19 # returning/raising/recursing. Update this variable if necessary, and 20:58:19 # leave `release_conn` constant throughout the function. That way, if 20:58:19 # the function recurses, the original value of `release_conn` will be 20:58:19 # passed down into the recursive call, and its value will be respected. 20:58:19 # 20:58:19 # See issue #651 [1] for details. 20:58:19 # 20:58:19 # [1] 20:58:19 release_this_conn = release_conn 20:58:19 20:58:19 http_tunnel_required = connection_requires_http_tunnel( 20:58:19 self.proxy, self.proxy_config, destination_scheme 20:58:19 ) 20:58:19 20:58:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 20:58:19 # have to copy the headers dict so we can safely change it without those 20:58:19 # changes being reflected in anyone else's copy. 20:58:19 if not http_tunnel_required: 20:58:19 headers = headers.copy() # type: ignore[attr-defined] 20:58:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 20:58:19 20:58:19 # Must keep the exception bound to a separate variable or else Python 3 20:58:19 # complains about UnboundLocalError. 20:58:19 err = None 20:58:19 20:58:19 # Keep track of whether we cleanly exited the except block. This 20:58:19 # ensures we do proper cleanup in finally. 20:58:19 clean_exit = False 20:58:19 20:58:19 # Rewind body position, if needed. Record current position 20:58:19 # for future rewinds in the event of a redirect/retry. 20:58:19 body_pos = set_file_position(body, body_pos) 20:58:19 20:58:19 try: 20:58:19 # Request a connection from the queue. 20:58:19 timeout_obj = self._get_timeout(timeout) 20:58:19 conn = self._get_conn(timeout=pool_timeout) 20:58:19 20:58:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 20:58:19 20:58:19 # Is this a closed/new connection that requires CONNECT tunnelling? 
20:58:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 20:58:19 try: 20:58:19 self._prepare_proxy(conn) 20:58:19 except (BaseSSLError, OSError, SocketTimeout) as e: 20:58:19 self._raise_timeout( 20:58:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 20:58:19 ) 20:58:19 raise 20:58:19 20:58:19 # If we're going to release the connection in ``finally:``, then 20:58:19 # the response doesn't need to know about the connection. Otherwise 20:58:19 # it will also try to release it and we'll have a double-release 20:58:19 # mess. 20:58:19 response_conn = conn if not release_conn else None 20:58:19 20:58:19 # Make the request on the HTTPConnection object 20:58:19 > response = self._make_request( 20:58:19 conn, 20:58:19 method, 20:58:19 url, 20:58:19 timeout=timeout_obj, 20:58:19 body=body, 20:58:19 headers=headers, 20:58:19 chunked=chunked, 20:58:19 retries=retries, 20:58:19 response_conn=response_conn, 20:58:19 preload_content=preload_content, 20:58:19 decode_content=decode_content, 20:58:19 **response_kw, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 20:58:19 conn.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 20:58:19 self.endheaders() 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 20:58:19 self._send_output(message_body, encode_chunked=encode_chunked) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 20:58:19 self.send(msg) 20:58:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 20:58:19 self.connect() 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 20:58:19 self.sock = self._new_conn() 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 20:58:19 def _new_conn(self) -> socket.socket: 20:58:19 """Establish a socket connection and set nodelay settings on it. 20:58:19 20:58:19 :return: New socket connection. 20:58:19 """ 20:58:19 try: 20:58:19 sock = connection.create_connection( 20:58:19 (self._dns_host, self.port), 20:58:19 self.timeout, 20:58:19 source_address=self.source_address, 20:58:19 socket_options=self.socket_options, 20:58:19 ) 20:58:19 except socket.gaierror as e: 20:58:19 raise NameResolutionError(self.host, self, e) from e 20:58:19 except SocketTimeout as e: 20:58:19 raise ConnectTimeoutError( 20:58:19 self, 20:58:19 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 20:58:19 ) from e 20:58:19 20:58:19 except OSError as e: 20:58:19 > raise NewConnectionError( 20:58:19 self, f"Failed to establish a new connection: {e}" 20:58:19 ) from e 20:58:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 20:58:19 20:58:19 The above exception was the direct cause of the following exception: 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 
20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 > resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 20:58:19 retries = retries.increment( 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 20:58:19 method = 'DELETE' 20:58:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01' 20:58:19 response = None 20:58:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 20:58:19 _pool = 20:58:19 _stacktrace = 20:58:19 20:58:19 def increment( 20:58:19 self, 20:58:19 method: str | None = None, 20:58:19 url: str | None = None, 20:58:19 response: BaseHTTPResponse | None = None, 20:58:19 error: Exception | None = None, 20:58:19 _pool: ConnectionPool | None = None, 20:58:19 _stacktrace: TracebackType | None = None, 20:58:19 ) -> Self: 20:58:19 """Return a new Retry object with incremented retry counters. 20:58:19 20:58:19 :param response: A response object, or None, if the server did not 20:58:19 return a response. 20:58:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 20:58:19 :param Exception error: An error encountered during the request, or 20:58:19 None if the response was received successfully. 20:58:19 20:58:19 :return: A new ``Retry`` object. 20:58:19 """ 20:58:19 if self.total is False and error: 20:58:19 # Disabled, indicate to re-raise the error. 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 20:58:19 total = self.total 20:58:19 if total is not None: 20:58:19 total -= 1 20:58:19 20:58:19 connect = self.connect 20:58:19 read = self.read 20:58:19 redirect = self.redirect 20:58:19 status_count = self.status 20:58:19 other = self.other 20:58:19 cause = "unknown" 20:58:19 status = None 20:58:19 redirect_location = None 20:58:19 20:58:19 if error and self._is_connection_error(error): 20:58:19 # Connect retry? 20:58:19 if connect is False: 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif connect is not None: 20:58:19 connect -= 1 20:58:19 20:58:19 elif error and self._is_read_error(error): 20:58:19 # Read retry? 20:58:19 if read is False or method is None or not self._is_method_retryable(method): 20:58:19 raise reraise(type(error), error, _stacktrace) 20:58:19 elif read is not None: 20:58:19 read -= 1 20:58:19 20:58:19 elif error: 20:58:19 # Other retry? 20:58:19 if other is not None: 20:58:19 other -= 1 20:58:19 20:58:19 elif response and response.get_redirect_location(): 20:58:19 # Redirect retry? 
20:58:19 if redirect is not None: 20:58:19 redirect -= 1 20:58:19 cause = "too many redirects" 20:58:19 response_redirect_location = response.get_redirect_location() 20:58:19 if response_redirect_location: 20:58:19 redirect_location = response_redirect_location 20:58:19 status = response.status 20:58:19 20:58:19 else: 20:58:19 # Incrementing because of a server error like a 500 in 20:58:19 # status_forcelist and the given method is in the allowed_methods 20:58:19 cause = ResponseError.GENERIC_ERROR 20:58:19 if response and response.status: 20:58:19 if status_count is not None: 20:58:19 status_count -= 1 20:58:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 20:58:19 status = response.status 20:58:19 20:58:19 history = self.history + ( 20:58:19 RequestHistory(method, url, error, status, redirect_location), 20:58:19 ) 20:58:19 20:58:19 new_retry = self.new( 20:58:19 total=total, 20:58:19 connect=connect, 20:58:19 read=read, 20:58:19 redirect=redirect, 20:58:19 status=status_count, 20:58:19 other=other, 20:58:19 history=history, 20:58:19 ) 20:58:19 20:58:19 if new_retry.is_exhausted(): 20:58:19 reason = error or ResponseError(cause) 20:58:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 20:58:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 20:58:19 20:58:19 During handling of the above exception, another exception occurred: 20:58:19 20:58:19 self = 20:58:19 20:58:19 def test_40_rdmC_device_disconnected(self): 20:58:19 > response = test_utils.unmount_device("ROADMC01") 20:58:19 20:58:19 transportpce_tests/1.2.1/test05_olm.py:578: 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 transportpce_tests/common/test_utils.py:358: in unmount_device 20:58:19 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 20:58:19 transportpce_tests/common/test_utils.py:133: in delete_request 20:58:19 return requests.request( 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 20:58:19 return session.request(method=method, url=url, **kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 20:58:19 resp = self.send(prep, **send_kwargs) 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 20:58:19 r = adapter.send(request, **kwargs) 20:58:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 20:58:19 20:58:19 self = 20:58:19 request = , stream = False 20:58:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 20:58:19 proxies = OrderedDict() 20:58:19 20:58:19 def send( 20:58:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 20:58:19 ): 20:58:19 """Sends PreparedRequest object. Returns Response object. 20:58:19 20:58:19 :param request: The :class:`PreparedRequest ` being sent. 20:58:19 :param stream: (optional) Whether to stream the request content. 20:58:19 :param timeout: (optional) How long to wait for the server to send 20:58:19 data before giving up, as a float, or a :ref:`(connect timeout, 20:58:19 read timeout) ` tuple. 
20:58:19 :type timeout: float or tuple or urllib3 Timeout object 20:58:19 :param verify: (optional) Either a boolean, in which case it controls whether 20:58:19 we verify the server's TLS certificate, or a string, in which case it 20:58:19 must be a path to a CA bundle to use 20:58:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 20:58:19 :param proxies: (optional) The proxies dictionary to apply to the request. 20:58:19 :rtype: requests.Response 20:58:19 """ 20:58:19 20:58:19 try: 20:58:19 conn = self.get_connection_with_tls_context( 20:58:19 request, verify, proxies=proxies, cert=cert 20:58:19 ) 20:58:19 except LocationValueError as e: 20:58:19 raise InvalidURL(e, request=request) 20:58:19 20:58:19 self.cert_verify(conn, request.url, verify, cert) 20:58:19 url = self.request_url(request, proxies) 20:58:19 self.add_headers( 20:58:19 request, 20:58:19 stream=stream, 20:58:19 timeout=timeout, 20:58:19 verify=verify, 20:58:19 cert=cert, 20:58:19 proxies=proxies, 20:58:19 ) 20:58:19 20:58:19 chunked = not (request.body is None or "Content-Length" in request.headers) 20:58:19 20:58:19 if isinstance(timeout, tuple): 20:58:19 try: 20:58:19 connect, read = timeout 20:58:19 timeout = TimeoutSauce(connect=connect, read=read) 20:58:19 except ValueError: 20:58:19 raise ValueError( 20:58:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 20:58:19 f"or a single float to set both timeouts to the same value." 20:58:19 ) 20:58:19 elif isinstance(timeout, TimeoutSauce): 20:58:19 pass 20:58:19 else: 20:58:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 20:58:19 20:58:19 try: 20:58:19 resp = conn.urlopen( 20:58:19 method=request.method, 20:58:19 url=url, 20:58:19 body=request.body, 20:58:19 headers=request.headers, 20:58:19 redirect=False, 20:58:19 assert_same_host=False, 20:58:19 preload_content=False, 20:58:19 decode_content=False, 20:58:19 retries=self.max_retries, 20:58:19 timeout=timeout, 20:58:19 chunked=chunked, 20:58:19 ) 20:58:19 20:58:19 except (ProtocolError, OSError) as err: 20:58:19 raise ConnectionError(err, request=request) 20:58:19 20:58:19 except MaxRetryError as e: 20:58:19 if isinstance(e.reason, ConnectTimeoutError): 20:58:19 # TODO: Remove this in 3.0.0: see #2811 20:58:19 if not isinstance(e.reason, NewConnectionError): 20:58:19 raise ConnectTimeout(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, ResponseError): 20:58:19 raise RetryError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _ProxyError): 20:58:19 raise ProxyError(e, request=request) 20:58:19 20:58:19 if isinstance(e.reason, _SSLError): 20:58:19 # This branch is for urllib3 v1.22 and later. 
20:58:19 raise SSLError(e, request=request) 20:58:19 20:58:19 > raise ConnectionError(e, request=request) 20:58:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 20:58:19 20:58:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 20:58:19 ----------------------------- Captured stdout call ----------------------------- 20:58:19 execution of test_40_rdmC_device_disconnected 20:58:19 --------------------------- Captured stdout teardown --------------------------- 20:58:19 all processes killed 20:58:19 =========================== short test summary info ============================ 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_03_rdmA_device_connected 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_04_rdmC_device_connected 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_05_connect_xpdrA_to_roadmA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_06_connect_roadmA_to_xpdrA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_07_connect_xpdrC_to_roadmC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_08_connect_roadmC_to_xpdrC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_09_create_OTS_ROADMA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_10_create_OTS_ROADMC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_11_get_PM_ROADMA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_12_get_PM_ROADMC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_13_calculate_span_loss_base_ROADMA_ROADMC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_14_calculate_span_loss_base_all 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_15_get_OTS_DEG1_TTP_TXRX_ROADMA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_16_get_OTS_DEG2_TTP_TXRX_ROADMC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_17_servicePath_create_AToZ 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_18_servicePath_create_ZToA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_19_service_power_setup_XPDRA_XPDRC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_20_get_interface_XPDRA_XPDR1_NETWORK1 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_21_get_roadmconnection_ROADMA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_22_get_roadmconnection_ROADMC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_23_service_power_setup_XPDRC_XPDRA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_24_get_interface_XPDRC_XPDR1_NETWORK1 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_25_get_roadmconnection_ROADMC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_26_service_power_turndown_XPDRA_XPDRC 
20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_27_get_roadmconnection_ROADMA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_28_get_roadmconnection_ROADMC 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_29_servicePath_delete_AToZ 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_30_servicePath_delete_ZToA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_31_connect_xpdrA_to_roadmA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_32_connect_roadmA_to_xpdrA 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_33_servicePath_create_AToZ 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_34_get_interface_XPDRA_XPDR1_NETWORK2 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_35_servicePath_delete_AToZ 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_36_xpdrA_device_disconnected 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_37_xpdrC_device_disconnected 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_38_calculate_span_loss_current 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_39_rdmA_device_disconnected 20:58:19 FAILED transportpce_tests/1.2.1/test05_olm.py::TransportOlmTesting::test_40_rdmC_device_disconnected 20:58:19 38 failed, 2 passed in 649.98s (0:10:49) 20:58:19 tests121: exit 1 (998.58 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1 pid=35353 20:58:40 ................................... [100%] 20:59:20 35 passed in 76.59s (0:01:16) 20:59:20 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 20:59:52 ...... [100%] 21:00:06 6 passed in 46.15s 21:00:06 pytest -q transportpce_tests/2.2.1/test03_topology.py 21:00:50 ............................................ [100%] 21:05:25 44 passed in 318.49s (0:05:18) 21:05:25 pytest -q transportpce_tests/2.2.1/test04_otn_topology.py 21:06:01 ............ [100%] 21:06:25 12 passed in 59.68s 21:06:25 pytest -q transportpce_tests/2.2.1/test05_flex_grid.py 21:06:51 ................ [100%] 21:08:20 16 passed in 114.79s (0:01:54) 21:08:20 pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py 21:08:49 ............................... [100%] 21:08:55 31 passed in 35.46s 21:08:55 pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py 21:09:30 .......................... [100%] 21:10:26 26 passed in 89.96s (0:01:29) 21:10:26 pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py 21:11:01 ...................... [100%] 21:12:05 22 passed in 99.11s (0:01:39) 21:12:05 pytest -q transportpce_tests/2.2.1/test09_olm.py 21:12:46 ........................................ [100%] 21:18:08 40 passed in 362.46s (0:06:02) 21:18:08 pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py 21:18:49 ........................................................................ [ 74%] 21:24:25 ......................... [100%] 21:26:17 97 passed in 489.60s (0:08:09) 21:26:17 pytest -q transportpce_tests/2.2.1/test12_end2end.py 21:26:56 ...................................................... [100%] 21:33:43 54 passed in 445.56s (0:07:25) 21:33:43 pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py 21:34:36 ........................................................................ 
[ 71%] 21:39:44 ............................. [100%] 21:41:54 101 passed in 490.12s (0:08:10) 21:41:54 pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py 21:42:48 ........................................................................ [ 67%] 21:48:34 ................................... [100%] 21:51:54 107 passed in 600.52s (0:10:00) 21:51:55 tests121: FAIL ✖ in 16 minutes 44.93 seconds 21:51:55 tests221: OK ✔ in 53 minutes 57.75 seconds 21:51:55 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 21:52:00 tests_hybrid: freeze> python -m pip freeze --all 21:52:00 tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 21:52:00 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh hybrid 21:52:00 using environment variables from ./karaf121.env 21:52:00 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 21:52:45 ................................................... [100%] 21:54:31 51 passed in 149.96s (0:02:29) 21:54:31 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 21:55:11 ........................................................................ [ 66%] 21:59:31 ..................................... [100%] 22:01:37 109 passed in 426.41s (0:07:06) 22:01:37 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 22:02:23 ..................................................... 
[100%] 22:05:55 53 passed in 258.06s (0:04:18) 22:05:56 tests_hybrid: OK ✔ in 14 minutes 1.17 seconds 22:05:56 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 22:06:01 buildlighty: freeze> python -m pip freeze --all 22:06:02 buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 22:06:02 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh 22:06:02 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED 22:06:15 [ERROR] COMPILATION ERROR : 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 22:06:15 symbol: class YangModuleInfo 22:06:15 location: package org.opendaylight.yangtools.binding 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 22:06:15 symbol: class YangModuleInfo 22:06:15 location: class io.lighty.controllers.tpce.utils.TPCEUtils 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 22:06:15 symbol: class YangModuleInfo 22:06:15 location: class io.lighty.controllers.tpce.utils.TPCEUtils 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 22:06:15 symbol: class YangModuleInfo 22:06:15 location: class io.lighty.controllers.tpce.utils.TPCEUtils 22:06:15 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure: 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 22:06:15 [ERROR] symbol: class YangModuleInfo 22:06:15 [ERROR] location: package org.opendaylight.yangtools.binding 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 22:06:15 [ERROR] symbol: class YangModuleInfo 22:06:15 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 22:06:15 [ERROR] symbol: class YangModuleInfo 22:06:15 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 22:06:15 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 22:06:15 [ERROR] symbol: class YangModuleInfo 22:06:15 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 22:06:15 [ERROR] -> [Help 1] 22:06:15 [ERROR] 22:06:15 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. 
22:06:15 [ERROR] Re-run Maven using the -X switch to enable full debug logging. 22:06:15 [ERROR] 22:06:15 [ERROR] For more information about the errors and possible solutions, please read the following articles: 22:06:15 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException 22:06:15 unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP. 22:06:15 buildlighty: exit 9 (13.33 seconds) /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh pid=62478 22:06:15 buildlighty: command failed but is marked ignore outcome so handling it as success 22:06:15 buildcontroller: OK (107.52=setup[8.61]+cmd[98.91] seconds) 22:06:15 testsPCE: OK (311.76=setup[72.90]+cmd[238.86] seconds) 22:06:15 sims: OK (9.91=setup[7.51]+cmd[2.40] seconds) 22:06:15 build_karaf_tests121: OK (55.75=setup[7.67]+cmd[48.08] seconds) 22:06:15 tests121: FAIL code 1 (1004.93=setup[6.35]+cmd[998.58] seconds) 22:06:15 build_karaf_tests221: OK (54.67=setup[7.62]+cmd[47.05] seconds) 22:06:15 tests_tapi: FAIL code 1 (818.36=setup[6.96]+cmd[811.40] seconds) 22:06:15 tests221: OK (3237.75=setup[6.34]+cmd[3231.41] seconds) 22:06:15 build_karaf_tests71: OK (51.96=setup[11.23]+cmd[40.73] seconds) 22:06:15 tests71: OK (420.20=setup[7.01]+cmd[413.19] seconds) 22:06:15 build_karaf_tests_hybrid: OK (53.80=setup[7.84]+cmd[45.96] seconds) 22:06:15 tests_hybrid: OK (841.17=setup[6.07]+cmd[835.11] seconds) 22:06:15 buildlighty: OK (19.57=setup[6.24]+cmd[13.33] seconds) 22:06:15 docs: OK (33.06=setup[30.55]+cmd[2.51] seconds) 22:06:15 docs-linkcheck: OK (35.30=setup[31.01]+cmd[4.29] seconds) 22:06:15 checkbashisms: OK (2.88=setup[1.96]+cmd[0.01,0.06,0.85] seconds) 22:06:15 pre-commit: OK (44.69=setup[3.60]+cmd[0.01,0.01,33.48,7.60] seconds) 22:06:15 pylint: OK (27.53=setup[6.26]+cmd[21.27] seconds) 22:06:15 evaluation failed :( (5500.42 seconds) 22:06:15 + tox_status=255 22:06:15 + echo '---> Completed tox runs' 22:06:15 ---> Completed tox runs 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/build_karaf_tests121/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=build_karaf_tests121 22:06:15 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests121 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/build_karaf_tests221/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=build_karaf_tests221 22:06:15 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests221 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/build_karaf_tests71/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=build_karaf_tests71 22:06:15 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests71 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/build_karaf_tests_hybrid/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=build_karaf_tests_hybrid 22:06:15 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests_hybrid 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/buildcontroller/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=buildcontroller 22:06:15 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/buildcontroller 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/buildlighty/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=buildlighty 22:06:15 + cp -r .tox/buildlighty/log 
/w/workspace/transportpce-tox-verify-scandium/archives/tox/buildlighty 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/checkbashisms/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=checkbashisms 22:06:15 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/checkbashisms 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/docs-linkcheck/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=docs-linkcheck 22:06:15 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs-linkcheck 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/docs/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=docs 22:06:15 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/pre-commit/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=pre-commit 22:06:15 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pre-commit 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/pylint/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=pylint 22:06:15 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pylint 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/sims/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=sims 22:06:15 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/sims 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/tests121/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=tests121 22:06:15 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests121 22:06:15 + for i in .tox/*/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 ++ echo .tox/tests221/log 22:06:15 + tox_env=tests221 22:06:15 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests221 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/tests71/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=tests71 22:06:15 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests71 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/testsPCE/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=testsPCE 22:06:15 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/testsPCE 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/tests_hybrid/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=tests_hybrid 22:06:15 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_hybrid 22:06:15 + for i in .tox/*/log 22:06:15 ++ echo .tox/tests_tapi/log 22:06:15 ++ awk -F/ '{print $2}' 22:06:15 + tox_env=tests_tapi 22:06:15 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_tapi 22:06:15 + DOC_DIR=docs/_build/html 22:06:15 + [[ -d docs/_build/html ]] 22:06:15 + echo '---> Archiving generated docs' 22:06:15 ---> Archiving generated docs 22:06:15 + mv docs/_build/html /w/workspace/transportpce-tox-verify-scandium/archives/docs 22:06:15 + echo '---> tox-run.sh ends' 22:06:15 ---> tox-run.sh ends 22:06:15 + test 255 -eq 0 22:06:15 + exit 255 22:06:15 ++ '[' 1 = 1 ']' 22:06:15 ++ '[' -x /usr/bin/clear_console ']' 22:06:15 ++ /usr/bin/clear_console -q 22:06:15 Build step 'Execute shell' marked build as failure 22:06:15 $ ssh-agent -k 22:06:15 unset SSH_AUTH_SOCK; 22:06:15 unset SSH_AGENT_PID; 22:06:15 echo Agent pid 13753 killed; 22:06:15 [ssh-agent] Stopped. 
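NOTE: the 38 tests121 failures summarised above all trace back to the condition shown in the test_40_rdmC_device_disconnected traceback: transportpce_tests/common/test_utils.py sends its RESTCONF requests to localhost:8182, and once the controller process is no longer listening every call fails with "[Errno 111] Connection refused" (2 passed, 38 failed from test_03 onward, which is consistent with the controller going down early in the run). The snippet below is a minimal sketch, not code from the repository: the helper name, the URL constant and the absence of authentication are assumptions, while the host, port, node id, resource path and (10, 10) timeout are taken from the traceback.

    # Minimal sketch (hypothetical helper, not transportpce code): reproduce the
    # DELETE that test_utils.unmount_device("ROADMC01") issued in the failed run
    # and report an unreachable controller instead of letting the exception
    # propagate.
    import requests

    RESTCONF_NODE_URL = ("http://localhost:8182/rests/data/"
                         "network-topology:network-topology/"
                         "topology=topology-netconf/node={}")

    def unmount_node(node_id, timeout=(10, 10)):
        try:
            return requests.delete(RESTCONF_NODE_URL.format(node_id), timeout=timeout)
        except requests.exceptions.ConnectionError as exc:
            # "[Errno 111] Connection refused" lands here after urllib3 gives up
            print("controller unreachable while unmounting %s: %s" % (node_id, exc))
            return None

    if __name__ == "__main__":
        unmount_node("ROADMC01")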
22:06:15 [PostBuildScript] - [INFO] Executing post build scripts. 22:06:15 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins17796909327606308304.sh 22:06:15 ---> sysstat.sh 22:06:16 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins9076533529583009392.sh 22:06:16 ---> package-listing.sh 22:06:16 ++ facter osfamily 22:06:16 ++ tr '[:upper:]' '[:lower:]' 22:06:16 + OS_FAMILY=debian 22:06:16 + workspace=/w/workspace/transportpce-tox-verify-scandium 22:06:16 + START_PACKAGES=/tmp/packages_start.txt 22:06:16 + END_PACKAGES=/tmp/packages_end.txt 22:06:16 + DIFF_PACKAGES=/tmp/packages_diff.txt 22:06:16 + PACKAGES=/tmp/packages_start.txt 22:06:16 + '[' /w/workspace/transportpce-tox-verify-scandium ']' 22:06:16 + PACKAGES=/tmp/packages_end.txt 22:06:16 + case "${OS_FAMILY}" in 22:06:16 + dpkg -l 22:06:16 + grep '^ii' 22:06:16 + '[' -f /tmp/packages_start.txt ']' 22:06:16 + '[' -f /tmp/packages_end.txt ']' 22:06:16 + diff /tmp/packages_start.txt /tmp/packages_end.txt 22:06:16 + '[' /w/workspace/transportpce-tox-verify-scandium ']' 22:06:16 + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/ 22:06:16 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-scandium/archives/ 22:06:16 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins7725934898053167576.sh 22:06:16 ---> capture-instance-metadata.sh 22:06:16 Setup pyenv: 22:06:16 system 22:06:16 3.8.13 22:06:16 3.9.13 22:06:16 3.10.13 22:06:16 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 22:06:17 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-LJis from file:/tmp/.os_lf_venv 22:06:18 lf-activate-venv(): INFO: Installing: lftools 22:06:29 lf-activate-venv(): INFO: Adding /tmp/venv-LJis/bin to PATH 22:06:29 INFO: Running in OpenStack, capturing instance metadata 22:06:30 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins11260053417714539384.sh 22:06:30 provisioning config files... 22:06:31 Could not find credentials [logs] for transportpce-tox-verify-scandium #9 22:06:31 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-scandium@tmp/config5340864712828351738tmp 22:06:31 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] 22:06:31 Run condition [Regular expression match] enabling perform for step [Provide Configuration files] 22:06:31 provisioning config files... 22:06:31 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials 22:06:31 [EnvInject] - Injecting environment variables from a build step. 22:06:31 [EnvInject] - Injecting as environment variables the properties content 22:06:31 SERVER_ID=logs 22:06:31 22:06:31 [EnvInject] - Variables injected successfully. 22:06:31 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins17066106577384889922.sh 22:06:31 ---> create-netrc.sh 22:06:31 WARN: Log server credential not found. 
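NOTE: the exception chain quoted in the tests121 traceback is the standard urllib3/requests translation path rather than anything transportpce-specific: connection.py raises NewConnectionError on the refused socket, Retry.increment() finds the budget of Retry(total=0, read=False) already exhausted and wraps it into MaxRetryError, and the requests adapter re-raises that as requests.exceptions.ConnectionError. A rough illustration against the same closed port, assuming nothing beyond the public requests and urllib3 APIs shown above:

    # Reproduces the NewConnectionError -> MaxRetryError -> ConnectionError chain
    # from the log; only meaningful if nothing is listening on localhost:8182.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Same retry policy as shown in the traceback: Retry(total=0, read=False),
    # i.e. the first refused connect already exhausts the budget.
    session.mount("http://", HTTPAdapter(max_retries=Retry(total=0, read=False)))

    try:
        session.get("http://localhost:8182/", timeout=(10, 10))
    except requests.exceptions.ConnectionError as exc:
        max_retry_error = exc.args[0]  # urllib3.exceptions.MaxRetryError
        print(type(max_retry_error).__name__, "caused by",
              type(max_retry_error.reason).__name__)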
22:06:31 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins2245750498897475638.sh 22:06:31 ---> python-tools-install.sh 22:06:31 Setup pyenv: 22:06:31 system 22:06:31 3.8.13 22:06:31 3.9.13 22:06:31 3.10.13 22:06:31 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 22:06:31 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-LJis from file:/tmp/.os_lf_venv 22:06:32 lf-activate-venv(): INFO: Installing: lftools 22:06:40 lf-activate-venv(): INFO: Adding /tmp/venv-LJis/bin to PATH 22:06:40 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins6641001079058735345.sh 22:06:40 ---> sudo-logs.sh 22:06:40 Archiving 'sudo' log.. 22:06:40 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins7974531766646312104.sh 22:06:40 ---> job-cost.sh 22:06:40 Setup pyenv: 22:06:41 system 22:06:41 3.8.13 22:06:41 3.9.13 22:06:41 3.10.13 22:06:41 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 22:06:41 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-LJis from file:/tmp/.os_lf_venv 22:06:41 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 22:06:46 lf-activate-venv(): INFO: Adding /tmp/venv-LJis/bin to PATH 22:06:46 INFO: No Stack... 22:06:46 INFO: Retrieving Pricing Info for: v3-standard-4 22:06:47 INFO: Archiving Costs 22:06:47 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins13712769613140728391.sh 22:06:47 ---> logs-deploy.sh 22:06:47 Setup pyenv: 22:06:47 system 22:06:47 3.8.13 22:06:47 3.9.13 22:06:47 3.10.13 22:06:47 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 22:06:47 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-LJis from file:/tmp/.os_lf_venv 22:06:48 lf-activate-venv(): INFO: Installing: lftools 22:06:56 lf-activate-venv(): INFO: Adding /tmp/venv-LJis/bin to PATH 22:06:56 WARNING: Nexus logging server not set 22:06:56 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-scandium/9/ 22:06:56 INFO: archiving logs to S3 22:06:58 ---> uname -a: 22:06:58 Linux prd-ubuntu2004-docker-4c-16g-24687 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 22:06:58 22:06:58 22:06:58 ---> lscpu: 22:06:58 Architecture: x86_64 22:06:58 CPU op-mode(s): 32-bit, 64-bit 22:06:58 Byte Order: Little Endian 22:06:58 Address sizes: 40 bits physical, 48 bits virtual 22:06:58 CPU(s): 4 22:06:58 On-line CPU(s) list: 0-3 22:06:58 Thread(s) per core: 1 22:06:58 Core(s) per socket: 1 22:06:58 Socket(s): 4 22:06:58 NUMA node(s): 1 22:06:58 Vendor ID: AuthenticAMD 22:06:58 CPU family: 23 22:06:58 Model: 49 22:06:58 Model name: AMD EPYC-Rome Processor 22:06:58 Stepping: 0 22:06:58 CPU MHz: 2799.998 22:06:58 BogoMIPS: 5599.99 22:06:58 Virtualization: AMD-V 22:06:58 Hypervisor vendor: KVM 22:06:58 Virtualization type: full 22:06:58 L1d cache: 128 KiB 22:06:58 L1i cache: 128 KiB 22:06:58 L2 cache: 2 MiB 22:06:58 L3 cache: 64 MiB 22:06:58 NUMA node0 CPU(s): 0-3 22:06:58 Vulnerability Gather data sampling: Not affected 22:06:58 Vulnerability Itlb multihit: Not affected 22:06:58 Vulnerability L1tf: Not affected 22:06:58 Vulnerability Mds: Not affected 22:06:58 Vulnerability Meltdown: Not affected 22:06:58 Vulnerability Mmio stale data: Not affected 22:06:58 Vulnerability Retbleed: Vulnerable 22:06:58 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp 22:06:58 Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 22:06:58 
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected 22:06:58 Vulnerability Srbds: Not affected 22:06:58 Vulnerability Tsx async abort: Not affected 22:06:58 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities 22:06:58 22:06:58 22:06:58 ---> nproc: 22:06:58 4 22:06:58 22:06:58 22:06:58 ---> df -h: 22:06:58 Filesystem Size Used Avail Use% Mounted on 22:06:58 udev 7.8G 0 7.8G 0% /dev 22:06:58 tmpfs 1.6G 1.1M 1.6G 1% /run 22:06:58 /dev/vda1 78G 17G 62G 21% / 22:06:58 tmpfs 7.9G 0 7.9G 0% /dev/shm 22:06:58 tmpfs 5.0M 0 5.0M 0% /run/lock 22:06:58 tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup 22:06:58 /dev/loop0 62M 62M 0 100% /snap/core20/1405 22:06:58 /dev/loop1 68M 68M 0 100% /snap/lxd/22753 22:06:58 /dev/vda15 105M 6.1M 99M 6% /boot/efi 22:06:58 tmpfs 1.6G 0 1.6G 0% /run/user/1001 22:06:58 /dev/loop3 39M 39M 0 100% /snap/snapd/21759 22:06:58 /dev/loop4 64M 64M 0 100% /snap/core20/2379 22:06:58 /dev/loop5 92M 92M 0 100% /snap/lxd/29619 22:06:58 22:06:58 22:06:58 ---> free -m: 22:06:58 total used free shared buff/cache available 22:06:58 Mem: 15997 671 8486 1 6839 14986 22:06:58 Swap: 1023 0 1023 22:06:58 22:06:58 22:06:58 ---> ip addr: 22:06:58 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 22:06:58 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 22:06:58 inet 127.0.0.1/8 scope host lo 22:06:58 valid_lft forever preferred_lft forever 22:06:58 inet6 ::1/128 scope host 22:06:58 valid_lft forever preferred_lft forever 22:06:58 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 22:06:58 link/ether fa:16:3e:bc:f0:74 brd ff:ff:ff:ff:ff:ff 22:06:58 inet 10.30.170.208/23 brd 10.30.171.255 scope global dynamic ens3 22:06:58 valid_lft 80718sec preferred_lft 80718sec 22:06:58 inet6 fe80::f816:3eff:febc:f074/64 scope link 22:06:58 valid_lft forever preferred_lft forever 22:06:58 3: docker0: mtu 1458 qdisc noqueue state DOWN group default 22:06:58 link/ether 02:42:98:de:af:8b brd ff:ff:ff:ff:ff:ff 22:06:58 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 22:06:58 valid_lft forever preferred_lft forever 22:06:58 22:06:58 22:06:58 ---> sar -b -r -n DEV: 22:06:58 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-24687) 09/20/24 _x86_64_ (4 CPU) 22:06:58 22:06:58 20:32:18 LINUX RESTART (4 CPU) 22:06:58 22:06:58 20:33:02 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 22:06:58 20:34:01 254.57 79.19 175.38 0.00 2499.64 8379.88 0.00 22:06:58 20:35:01 157.02 27.25 129.78 0.00 1956.61 21005.03 0.00 22:06:58 20:36:01 158.50 15.26 143.24 0.00 836.25 45586.00 0.00 22:06:58 20:37:01 122.45 0.87 121.58 0.00 43.19 52305.55 0.00 22:06:58 20:38:01 225.11 15.81 209.30 0.00 4895.57 160244.32 0.00 22:06:58 20:39:01 161.41 1.52 159.89 0.00 83.99 28385.00 0.00 22:06:58 20:40:01 114.53 2.67 111.86 0.00 179.01 2071.71 0.00 22:06:58 20:41:01 68.16 0.17 67.99 0.00 13.33 1101.37 0.00 22:06:58 20:42:01 127.50 2.53 124.96 0.00 459.12 10117.91 0.00 
22:06:58 20:43:01 61.39 0.05 61.34 0.00 0.53 972.37 0.00 22:06:58 20:44:01 92.50 0.17 92.33 0.00 16.93 1567.87 0.00 22:06:58 20:45:01 57.02 0.08 56.94 0.00 4.53 821.06 0.00 22:06:58 20:46:01 2.55 0.00 2.55 0.00 0.00 41.06 0.00 22:06:58 20:47:01 74.19 0.47 73.72 0.00 10.80 1074.49 0.00 22:06:58 20:48:01 53.86 1.25 52.61 0.00 28.40 1079.02 0.00 22:06:58 20:49:01 18.18 0.18 17.99 0.00 11.86 721.76 0.00 22:06:58 20:50:01 2.68 0.00 2.68 0.00 0.00 39.19 0.00 22:06:58 20:51:01 5.55 0.02 5.53 0.00 0.13 217.16 0.00 22:06:58 20:52:01 97.57 2.87 94.70 0.00 73.03 9707.15 0.00 22:06:58 20:53:01 48.09 0.48 47.61 0.00 15.46 921.85 0.00 22:06:58 20:54:01 2.35 0.00 2.35 0.00 0.00 44.13 0.00 22:06:58 20:55:01 58.27 0.12 58.16 0.00 4.93 1154.47 0.00 22:06:58 20:56:01 2.77 0.00 2.77 0.00 0.00 54.66 0.00 22:06:58 20:57:01 24.79 0.02 24.78 0.00 0.53 394.67 0.00 22:06:58 20:58:01 49.23 0.00 49.23 0.00 0.00 1053.16 0.00 22:06:58 20:59:01 77.75 0.02 77.74 0.00 0.80 3075.22 0.00 22:06:58 21:00:01 68.39 0.02 68.37 0.00 0.27 1017.03 0.00 22:06:58 21:01:01 76.44 0.02 76.42 0.00 1.60 1219.99 0.00 22:06:58 21:02:01 2.88 0.00 2.88 0.00 0.00 56.39 0.00 22:06:58 21:03:01 2.40 0.00 2.40 0.00 0.00 36.39 0.00 22:06:58 21:04:01 1.38 0.00 1.38 0.00 0.00 17.86 0.00 22:06:58 21:05:01 1.80 0.00 1.80 0.00 0.00 23.73 0.00 22:06:58 21:06:01 72.44 0.02 72.42 0.00 0.40 1059.42 0.00 22:06:58 21:07:01 68.29 0.05 68.24 0.00 1.47 1002.50 0.00 22:06:58 21:08:01 1.83 0.00 1.83 0.00 0.00 30.79 0.00 22:06:58 21:09:01 98.38 0.03 98.35 0.00 1.47 1436.69 0.00 22:06:58 21:10:01 59.89 0.02 59.87 0.00 0.27 866.92 0.00 22:06:58 21:11:01 61.17 0.02 61.16 0.00 1.20 905.45 0.00 22:06:58 21:12:01 2.02 0.00 2.02 0.00 0.00 43.86 0.00 22:06:58 21:13:01 57.47 0.02 57.46 0.00 0.93 864.52 0.00 22:06:58 21:14:01 2.07 0.00 2.07 0.00 0.00 50.79 0.00 22:06:58 21:15:01 2.18 0.00 2.18 0.00 0.00 41.99 0.00 22:06:58 21:16:01 1.65 0.00 1.65 0.00 0.00 26.80 0.00 22:06:58 21:17:01 2.03 0.05 1.98 0.00 1.07 26.40 0.00 22:06:58 21:18:01 1.73 0.00 1.73 0.00 0.00 24.13 0.00 22:06:58 21:19:01 53.79 0.02 53.77 0.00 2.67 818.40 0.00 22:06:58 21:20:01 2.25 0.00 2.25 0.00 0.00 58.79 0.00 22:06:58 21:21:01 2.67 0.00 2.67 0.00 0.00 44.39 0.00 22:06:58 21:22:01 2.53 0.00 2.53 0.00 0.00 46.13 0.00 22:06:58 21:23:01 2.25 0.00 2.25 0.00 0.00 45.19 0.00 22:06:58 21:24:01 1.93 0.00 1.93 0.00 0.00 41.19 0.00 22:06:58 21:25:01 2.05 0.00 2.05 0.00 0.00 37.46 0.00 22:06:58 21:26:01 1.57 0.00 1.57 0.00 0.00 37.86 0.00 22:06:58 21:27:01 69.81 0.02 69.79 0.00 1.60 1030.59 0.00 22:06:58 21:28:01 2.50 0.00 2.50 0.00 0.00 69.72 0.00 22:06:58 21:29:01 2.88 0.00 2.88 0.00 0.00 62.12 0.00 22:06:58 21:30:01 1.57 0.00 1.57 0.00 0.00 33.06 0.00 22:06:58 21:31:01 2.45 0.00 2.45 0.00 0.00 55.06 0.00 22:06:58 21:32:01 1.95 0.00 1.95 0.00 0.00 47.33 0.00 22:06:58 21:33:02 1.93 0.00 1.93 0.00 0.00 53.59 0.00 22:06:58 21:34:01 16.73 0.02 16.71 0.00 3.12 299.20 0.00 22:06:58 21:35:01 53.99 0.07 53.92 0.00 7.47 774.40 0.00 22:06:58 21:36:01 1.92 0.00 1.92 0.00 0.00 51.32 0.00 22:06:58 21:37:01 2.35 0.00 2.35 0.00 0.00 50.79 0.00 22:06:58 21:38:01 1.83 0.00 1.83 0.00 0.00 45.33 0.00 22:06:58 21:39:01 3.22 0.00 3.22 0.00 0.00 58.66 0.00 22:06:58 21:40:01 1.80 0.00 1.80 0.00 0.00 41.33 0.00 22:06:58 21:41:01 2.43 0.00 2.43 0.00 0.00 53.06 0.00 22:06:58 21:42:01 15.76 0.02 15.75 0.00 3.07 365.67 0.00 22:06:58 21:43:01 69.87 0.00 69.87 0.00 0.00 993.97 0.00 22:06:58 21:44:01 2.37 0.00 2.37 0.00 0.00 63.85 0.00 22:06:58 21:45:01 2.90 0.00 2.90 0.00 0.00 53.59 0.00 22:06:58 21:46:01 2.23 0.00 2.23 0.00 0.00 50.26 0.00 
22:06:58 21:47:01 2.63 0.00 2.63 0.00 0.00 47.33 0.00 22:06:58 21:48:01 2.13 0.00 2.13 0.00 0.00 39.73 0.00 22:06:58 21:49:01 2.60 0.00 2.60 0.00 0.00 51.19 0.00 22:06:58 21:50:01 2.52 0.00 2.52 0.00 0.00 47.45 0.00 22:06:58 21:51:01 2.20 0.00 2.20 0.00 0.00 43.59 0.00 22:06:58 21:52:01 7.87 0.05 7.82 0.00 1.47 448.06 0.00 22:06:58 21:53:01 97.58 0.12 97.47 0.00 1.73 9630.66 0.00 22:06:58 21:54:01 3.82 0.80 3.02 0.00 20.53 143.31 0.00 22:06:58 21:55:01 24.38 0.02 24.36 0.00 3.20 603.23 0.00 22:06:58 21:56:01 46.99 0.00 46.99 0.00 0.00 678.15 0.00 22:06:58 21:57:01 2.42 0.00 2.42 0.00 0.00 40.66 0.00 22:06:58 21:58:01 2.22 0.07 2.15 0.00 0.67 41.73 0.00 22:06:58 21:59:01 3.25 0.00 3.25 0.00 0.00 49.86 0.00 22:06:58 22:00:01 1.88 0.00 1.88 0.00 0.00 37.86 0.00 22:06:58 22:01:01 1.97 0.02 1.95 0.00 0.13 41.05 0.00 22:06:58 22:02:01 22.35 0.02 22.33 0.00 1.60 367.60 0.00 22:06:58 22:03:01 51.36 0.00 51.36 0.00 0.00 749.34 0.00 22:06:58 22:04:01 3.37 0.15 3.22 0.00 4.13 171.44 0.00 22:06:58 22:05:01 3.25 0.00 3.25 0.00 0.00 71.98 0.00 22:06:58 22:06:01 5.92 0.00 5.92 0.00 0.00 277.55 0.00 22:06:58 Average: 35.69 1.63 34.07 0.00 119.98 4084.65 0.00 22:06:58 22:06:58 20:33:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 22:06:58 20:34:01 13411004 15440580 538164 3.29 69076 2157908 1257808 7.22 793492 1899896 146260 22:06:58 20:35:01 11214732 14643404 1312580 8.01 119684 3408412 2165616 12.42 1800884 2966600 990904 22:06:58 20:36:01 9969216 13978556 1976192 12.06 145684 3917384 2565664 14.72 2487912 3476648 210280 22:06:58 20:37:01 6518312 13145768 2807480 17.14 187096 6382768 3632024 20.84 3843452 5456656 1289268 22:06:58 20:38:01 3871072 13201544 2740372 16.73 224564 8937528 3608108 20.70 4811648 7009784 275936 22:06:58 20:39:01 1259256 10694296 5245400 32.02 229688 9033424 6785584 38.93 7507800 6908916 720 22:06:58 20:40:01 203896 8609764 7328808 44.74 228708 8021500 8344956 47.88 9423492 6055596 1656 22:06:58 20:41:01 1051936 9462744 6475908 39.53 231844 8023108 7508608 43.08 8587664 6045912 704 22:06:58 20:42:01 167724 8295564 7642464 46.65 241368 7732744 8963300 51.42 9767864 5748940 716 22:06:58 20:43:01 164672 7308536 8628988 52.68 243392 6762896 9776764 56.09 10677692 4854988 52 22:06:58 20:44:01 1857440 8825032 7113060 43.42 245424 6589120 8149932 46.76 9178652 4670712 492 22:06:58 20:45:01 165600 6138440 9798304 59.81 247128 5618768 11094840 63.65 11740220 3817764 160 22:06:58 20:46:01 163300 5965476 9971244 60.87 247160 5453108 11143528 63.93 11876192 3689624 148 22:06:58 20:47:01 1463108 7267876 8669444 52.92 248980 5453632 9720928 55.77 10582036 3687664 200 22:06:58 20:48:01 6818144 12624444 3316080 20.24 249476 5454572 4747948 27.24 5257052 3677896 492 22:06:58 20:49:01 2772456 8580564 7357544 44.91 250028 5455800 8649792 49.63 9293116 3675688 208 22:06:58 20:50:01 2758156 8566328 7371816 45.00 250068 5455844 8665996 49.72 9307796 3675624 76 22:06:58 20:51:01 6472264 12315968 3623204 22.12 251464 5490016 4976644 28.55 5582104 3698596 36252 22:06:58 20:52:01 5728404 11774384 4165124 25.43 257180 5684352 5571084 31.96 6198024 3822516 1712 22:06:58 20:53:01 4690500 10738032 5201336 31.75 258096 5685056 6472408 37.13 7240664 3813716 320 22:06:58 20:54:01 4671416 10719100 5220260 31.87 258112 5685192 6488404 37.23 7259748 3813828 164 22:06:58 20:55:01 3886620 9935248 6003872 36.65 258740 5685488 7270580 41.71 8052612 3802444 596 22:06:58 20:56:01 3823176 9872052 6066880 37.04 258752 5685724 7334960 42.08 8115532 3802596 252 22:06:58 
20:57:01 5070464 11119872 4819520 29.42 258996 5686012 6348112 36.42 6874160 3801736 544 22:06:58 20:58:01 6149800 12234940 3704128 22.61 261212 5717340 5021444 28.81 5766336 3831588 25788 22:06:58 20:59:01 6295536 12414384 3525864 21.52 263252 5747160 4415784 25.33 5592192 3860640 636 22:06:58 21:00:01 7045616 13165528 2775604 16.94 263976 5747492 3627516 20.81 4850520 3855636 256 22:06:58 21:01:01 5805180 11926176 4014212 24.50 264584 5747936 5103148 29.28 6094684 3847376 188 22:06:58 21:02:01 5487404 11608632 4331632 26.44 264596 5748156 5183812 29.74 6411600 3847596 44 22:06:58 21:03:01 5486072 11607320 4332932 26.45 264596 5748176 5183812 29.74 6413224 3847612 36 22:06:58 21:04:01 5485964 11607220 4333008 26.45 264604 5748180 5183812 29.74 6412864 3847608 64 22:06:58 21:05:01 5485168 11606432 4333768 26.46 264604 5748180 5183812 29.74 6412960 3847608 52 22:06:58 21:06:01 7696836 13819004 2122280 12.96 265268 5748408 2946304 16.90 4210440 3846560 112 22:06:58 21:07:01 7725800 13849524 2091664 12.77 266432 5748760 2888212 16.57 4181472 3846408 292 22:06:58 21:08:01 7709576 13833340 2107780 12.87 266448 5748784 2904240 16.66 4198660 3846412 72 22:06:58 21:09:01 8833160 14958192 983676 6.00 267228 5749236 1816832 10.42 3079296 3846556 544 22:06:58 21:10:01 7619004 13744920 2196308 13.41 267668 5749652 2973732 17.06 4287396 3846724 216 22:06:58 21:11:01 7015440 13142148 2798764 17.09 268056 5750048 3668424 21.05 4889164 3846612 240 22:06:58 21:12:01 6807472 12934416 3006228 18.35 268060 5750276 3766544 21.61 5096180 3846804 104 22:06:58 21:13:01 5683756 11811496 4128568 25.20 268364 5750760 5082456 29.16 6214384 3847160 656 22:06:58 21:14:01 5481548 11609536 4330228 26.43 268364 5751008 5164816 29.63 6416016 3847404 396 22:06:58 21:15:01 5456640 11585028 4354720 26.58 268376 5751408 5180804 29.72 6439956 3847792 396 22:06:58 21:16:01 5456720 11585136 4354656 26.58 268388 5751412 5180804 29.72 6439700 3847808 120 22:06:58 21:17:01 5456028 11584460 4355440 26.59 268404 5751408 5180804 29.72 6439852 3847808 260 22:06:58 21:18:01 5454680 11583212 4356580 26.59 268412 5751468 5180804 29.72 6441564 3847856 168 22:06:58 21:19:01 5590188 11719672 4220332 25.76 268920 5751900 5098696 29.25 6307516 3848188 416 22:06:58 21:20:01 5428588 11558464 4381212 26.75 268928 5752280 5196528 29.81 6467500 3848332 188 22:06:58 21:21:01 5420556 11550544 4389132 26.79 268936 5752380 5212524 29.91 6474840 3848432 324 22:06:58 21:22:01 5400680 11530832 4408816 26.91 268948 5752536 5228552 30.00 6494636 3848580 156 22:06:58 21:23:01 5384552 11514904 4424720 27.01 268956 5752728 5228552 30.00 6510488 3848776 168 22:06:58 21:24:01 5363324 11494160 4445348 27.14 268964 5753208 5244568 30.09 6530288 3849236 264 22:06:58 21:25:01 5314596 11445544 4494036 27.43 268968 5753308 5278264 30.28 6579428 3849340 232 22:06:58 21:26:01 5307068 11438284 4501376 27.48 268976 5753564 5294716 30.38 6586288 3849596 436 22:06:58 21:27:01 5704120 11835944 4104076 25.05 269380 5753756 5069592 29.09 6189440 3849608 384 22:06:58 21:28:01 5479948 11612388 4327220 26.42 269380 5754372 5199476 29.83 6413704 3850224 320 22:06:58 21:29:01 5446724 11579656 4359944 26.62 269392 5754852 5215480 29.92 6446292 3850704 136 22:06:58 21:30:01 5432864 11566180 4373416 26.70 269400 5755228 5215480 29.92 6459276 3851076 480 22:06:58 21:31:01 5396740 11530520 4409052 26.92 269400 5755688 5215480 29.92 6494984 3851536 548 22:06:58 21:32:01 5383708 11517924 4421596 26.99 269400 5756144 5231476 30.01 6508116 3851972 212 22:06:58 21:33:02 5337772 11472780 
4466724 27.27 269400 5756920 5280836 30.30 6552620 3852764 580 22:06:58 21:34:01 7996380 14131880 1809336 11.05 269404 5757400 2916944 16.74 3905992 3853188 460 22:06:58 21:35:01 4056340 10191952 5747116 35.08 269596 5757300 6621724 37.99 7830776 3852760 572 22:06:58 21:36:01 3870252 10006252 5932604 36.22 269600 5757684 6736396 38.65 8015200 3853136 448 22:06:58 21:37:01 3785956 9922424 6016464 36.73 269600 5758148 6801872 39.02 8098184 3853604 624 22:06:58 21:38:01 3769624 9906224 6032632 36.83 269604 5758276 6801872 39.02 8113892 3853732 348 22:06:58 21:39:01 3724664 9861616 6077120 37.10 269612 5758620 6817920 39.12 8158364 3854060 176 22:06:58 21:40:01 3717772 9854932 6083720 37.14 269612 5758808 6833912 39.21 8165328 3854264 144 22:06:58 21:41:01 3701384 9838956 6099744 37.24 269616 5759212 6833912 39.21 8181320 3854664 112 22:06:58 21:42:01 8799924 14937784 1003760 6.13 269628 5759456 1826448 10.48 3104364 3854888 388 22:06:58 21:43:01 4172080 10310176 5628868 34.36 269864 5759456 6570512 37.70 7713628 3854728 424 22:06:58 21:44:01 3951876 10090288 5848548 35.70 269864 5759772 6635544 38.07 7932424 3855044 260 22:06:58 21:45:01 3701628 9840704 6097884 37.22 269868 5760428 6798424 39.00 8180536 3855700 504 22:06:58 21:46:01 3690140 9829348 6109224 37.29 269868 5760560 6798424 39.00 8191932 3855832 212 22:06:58 21:47:01 3681036 9820356 6118176 37.35 269872 5760664 6798424 39.00 8200316 3855936 160 22:06:58 21:48:01 3666232 9805736 6132796 37.44 269872 5760848 6798424 39.00 8214764 3856120 112 22:06:58 21:49:01 3658544 9798260 6140272 37.48 269872 5761060 6798424 39.00 8220884 3856332 332 22:06:58 21:50:01 3634532 9774420 6163972 37.63 269872 5761240 6814412 39.10 8245788 3856504 108 22:06:58 21:51:01 3600172 9740312 6198132 37.84 269880 5761476 6863484 39.38 8279232 3856752 332 22:06:58 21:52:01 9182596 15389128 552424 3.37 272020 5821424 1382064 7.93 2657320 3915796 60788 22:06:58 21:53:01 5110388 11490844 4448324 27.15 276260 5988464 5256960 30.16 6599884 4026000 3412 22:06:58 21:54:01 4995396 11376572 4562584 27.85 276260 5989228 5366584 30.79 6714680 4026008 432 22:06:58 21:55:01 6279032 12660540 3279324 20.02 276316 5989484 4823540 27.67 5455292 4007840 308 22:06:58 21:56:01 5177800 11559888 4379336 26.73 276460 5989920 5165104 29.63 6554468 4004620 28 22:06:58 21:57:01 5166376 11548656 4390516 26.80 276468 5990108 5181104 29.73 6565856 4004748 256 22:06:58 21:58:01 5156460 11538924 4400352 26.86 276476 5990280 5181104 29.73 6574984 4004928 292 22:06:58 21:59:01 5148948 11531728 4407508 26.91 276476 5990608 5181104 29.73 6582824 4005208 300 22:06:58 22:00:01 5118580 11501636 4437488 27.09 276476 5990872 5213536 29.91 6612552 4005484 108 22:06:58 22:01:01 5089064 11472448 4466636 27.27 276480 5991196 5213536 29.91 6640428 4005788 68 22:06:58 22:02:01 6965616 13349080 2591048 15.82 276504 5991252 4094844 23.49 4784432 3994596 540 22:06:58 22:03:01 4397448 10781204 5157504 31.48 276636 5991396 6021532 34.55 7341960 3994584 136 22:06:58 22:04:01 4279352 10663796 5274800 32.20 276636 5992080 6107360 35.04 7458568 3995040 320 22:06:58 22:05:01 4211596 10596772 5341784 32.61 276640 5992808 6123392 35.13 7525168 3995760 140 22:06:58 22:06:01 8895944 15340636 600448 3.67 278420 6047224 1441096 8.27 2811436 4041724 56244 22:06:58 Average: 5113493 11325440 4614941 28.17 259017 5848293 5538835 31.78 6636521 3998233 33577 22:06:58 22:06:58 20:33:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 22:06:58 20:34:01 ens3 185.38 143.31 1173.87 37.40 0.00 0.00 0.00 0.00 
22:06:58 (per-minute network interface samples for ens3, docker0 and lo continue through 22:06:01)
22:06:58 Average:         ens3     15.94     12.26    207.88      1.66      0.00      0.00      0.00      0.00
22:06:58 Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:06:58 Average:           lo     20.45     20.45      9.44      9.44      0.00      0.00      0.00      0.00
22:06:58 
22:06:58 
22:06:58 ---> sar -P ALL:
22:06:58 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-24687)   09/20/24   _x86_64_   (4 CPU)
22:06:58 
22:06:58 20:32:18     LINUX RESTART   (4 CPU)
22:06:58 
22:06:58 20:33:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
22:06:58 (per-minute CPU utilization samples for all, 0, 1, 2 and 3, 20:34:01 through 22:06:01)
22:06:58 Average:        all     19.76      0.12      0.92      0.33      0.09     78.78
22:06:58 Average:          0     19.82      0.12      0.93      0.34      0.09     78.69
22:06:58 Average:          1     19.65      0.12      0.94      0.35      0.09     78.84
22:06:58 Average:          2     20.21      0.13      0.90      0.25      0.09     78.42
22:06:58 Average:          3     19.36      0.10      0.92      0.37      0.09     79.16
22:06:58 
22:06:58 
22:06:58 