17:10:16 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/114149 17:10:16 Running as SYSTEM 17:10:16 [EnvInject] - Loading node environment variables. 17:10:16 Building remotely on prd-ubuntu2004-docker-4c-16g-2598 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-scandium 17:10:17 [ssh-agent] Looking for ssh-agent implementation... 17:10:17 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 17:10:17 $ ssh-agent 17:10:17 SSH_AUTH_SOCK=/tmp/ssh-odnJpmxAuG8X/agent.11708 17:10:17 SSH_AGENT_PID=11713 17:10:17 [ssh-agent] Started. 17:10:17 Running ssh-add (command line suppressed) 17:10:17 Identity added: /w/workspace/transportpce-tox-verify-scandium@tmp/private_key_11275763022423403391.key (/w/workspace/transportpce-tox-verify-scandium@tmp/private_key_11275763022423403391.key) 17:10:17 [ssh-agent] Using credentials jenkins (jenkins-ssh) 17:10:17 The recommended git tool is: NONE 17:10:19 using credential jenkins-ssh 17:10:19 Wiping out workspace first. 17:10:19 Cloning the remote Git repository 17:10:19 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce 17:10:19 > git init /w/workspace/transportpce-tox-verify-scandium # timeout=10 17:10:19 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 17:10:19 > git --version # timeout=10 17:10:19 > git --version # 'git version 2.25.1' 17:10:19 using GIT_SSH to set credentials jenkins-ssh 17:10:19 Verifying host key using known hosts file 17:10:19 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 17:10:19 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 17:10:23 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 17:10:23 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 17:10:24 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 17:10:24 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 17:10:24 using GIT_SSH to set credentials jenkins-ssh 17:10:24 Verifying host key using known hosts file 17:10:24 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
17:10:24 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/49/114149/1 # timeout=10 17:10:24 > git rev-parse d8582bd561e2cb9223a3ebcd6b2c6b160d82da7f^{commit} # timeout=10 17:10:24 Checking out Revision d8582bd561e2cb9223a3ebcd6b2c6b160d82da7f (refs/changes/49/114149/1) 17:10:24 > git config core.sparsecheckout # timeout=10 17:10:24 > git checkout -f d8582bd561e2cb9223a3ebcd6b2c6b160d82da7f # timeout=10 17:10:27 Commit message: "Fix grid spectrum computation (reversed logic)" 17:10:27 > git rev-parse FETCH_HEAD^{commit} # timeout=10 17:10:27 > git rev-list --no-walk eb0fc3bf2b24dbd3f807a63dd11ce6d490c3d332 # timeout=10 17:10:27 > git remote # timeout=10 17:10:27 > git submodule init # timeout=10 17:10:28 > git submodule sync # timeout=10 17:10:28 > git config --get remote.origin.url # timeout=10 17:10:28 > git submodule init # timeout=10 17:10:28 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 17:10:28 ERROR: No submodules found. 17:10:28 provisioning config files... 17:10:28 copy managed file [npmrc] to file:/home/jenkins/.npmrc 17:10:28 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 17:10:28 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins752068149743475712.sh 17:10:28 ---> python-tools-install.sh 17:10:28 Setup pyenv: 17:10:28 * system (set by /opt/pyenv/version) 17:10:28 * 3.8.13 (set by /opt/pyenv/version) 17:10:28 * 3.9.13 (set by /opt/pyenv/version) 17:10:28 * 3.10.13 (set by /opt/pyenv/version) 17:10:28 * 3.11.7 (set by /opt/pyenv/version) 17:10:36 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-0SmD 17:10:36 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 17:10:41 lf-activate-venv(): INFO: Installing: lftools 17:11:27 lf-activate-venv(): INFO: Adding /tmp/venv-0SmD/bin to PATH 17:11:27 Generating Requirements File 17:12:02 Python 3.11.7 17:12:02 pip 24.3.1 from /tmp/venv-0SmD/lib/python3.11/site-packages/pip (python 3.11) 17:12:03 appdirs==1.4.4 17:12:03 argcomplete==3.5.1 17:12:03 aspy.yaml==1.3.0 17:12:03 attrs==24.2.0 17:12:03 autopage==0.5.2 17:12:03 beautifulsoup4==4.12.3 17:12:03 boto3==1.35.50 17:12:03 botocore==1.35.50 17:12:03 bs4==0.0.2 17:12:03 cachetools==5.5.0 17:12:03 certifi==2024.8.30 17:12:03 cffi==1.17.1 17:12:03 cfgv==3.4.0 17:12:03 chardet==5.2.0 17:12:03 charset-normalizer==3.4.0 17:12:03 click==8.1.7 17:12:03 cliff==4.7.0 17:12:03 cmd2==2.5.0 17:12:03 cryptography==3.3.2 17:12:03 debtcollector==3.0.0 17:12:03 decorator==5.1.1 17:12:03 defusedxml==0.7.1 17:12:03 Deprecated==1.2.14 17:12:03 distlib==0.3.9 17:12:03 dnspython==2.7.0 17:12:03 docker==4.2.2 17:12:03 dogpile.cache==1.3.3 17:12:03 durationpy==0.9 17:12:03 email_validator==2.2.0 17:12:03 filelock==3.16.1 17:12:03 future==1.0.0 17:12:03 gitdb==4.0.11 17:12:03 GitPython==3.1.43 17:12:03 google-auth==2.35.0 17:12:03 httplib2==0.22.0 17:12:03 identify==2.6.1 17:12:03 idna==3.10 17:12:03 importlib-resources==1.5.0 17:12:03 iso8601==2.1.0 17:12:03 Jinja2==3.1.4 17:12:03 jmespath==1.0.1 17:12:03 jsonpatch==1.33 17:12:03 jsonpointer==3.0.0 17:12:03 jsonschema==4.23.0 17:12:03 jsonschema-specifications==2024.10.1 17:12:03 keystoneauth1==5.8.0 17:12:03 kubernetes==31.0.0 17:12:03 lftools==0.37.10 17:12:03 lxml==5.3.0 17:12:03 MarkupSafe==3.0.2 17:12:03 msgpack==1.1.0 17:12:03 multi_key_dict==2.0.3 17:12:03 munch==4.0.0 17:12:03 netaddr==1.3.0 17:12:03 netifaces==0.11.0 17:12:03 niet==1.4.2 17:12:03 nodeenv==1.9.1 17:12:03 oauth2client==4.1.3 17:12:03 oauthlib==3.2.2 
17:12:03 openstacksdk==4.1.0 17:12:03 os-client-config==2.1.0 17:12:03 os-service-types==1.7.0 17:12:03 osc-lib==3.1.0 17:12:03 oslo.config==9.6.0 17:12:03 oslo.context==5.6.0 17:12:03 oslo.i18n==6.4.0 17:12:03 oslo.log==6.1.2 17:12:03 oslo.serialization==5.5.0 17:12:03 oslo.utils==7.3.0 17:12:03 packaging==24.1 17:12:03 pbr==6.1.0 17:12:03 platformdirs==4.3.6 17:12:03 prettytable==3.11.0 17:12:03 pyasn1==0.6.1 17:12:03 pyasn1_modules==0.4.1 17:12:03 pycparser==2.22 17:12:03 pygerrit2==2.0.15 17:12:03 PyGithub==2.4.0 17:12:03 PyJWT==2.9.0 17:12:03 PyNaCl==1.5.0 17:12:03 pyparsing==2.4.7 17:12:03 pyperclip==1.9.0 17:12:03 pyrsistent==0.20.0 17:12:03 python-cinderclient==9.6.0 17:12:03 python-dateutil==2.9.0.post0 17:12:03 python-heatclient==4.0.0 17:12:03 python-jenkins==1.8.2 17:12:03 python-keystoneclient==5.5.0 17:12:03 python-magnumclient==4.7.0 17:12:03 python-openstackclient==7.2.1 17:12:03 python-swiftclient==4.6.0 17:12:03 PyYAML==6.0.2 17:12:03 referencing==0.35.1 17:12:03 requests==2.32.3 17:12:03 requests-oauthlib==2.0.0 17:12:03 requestsexceptions==1.4.0 17:12:03 rfc3986==2.0.0 17:12:03 rpds-py==0.20.0 17:12:03 rsa==4.9 17:12:03 ruamel.yaml==0.18.6 17:12:03 ruamel.yaml.clib==0.2.12 17:12:03 s3transfer==0.10.3 17:12:03 simplejson==3.19.3 17:12:03 six==1.16.0 17:12:03 smmap==5.0.1 17:12:03 soupsieve==2.6 17:12:03 stevedore==5.3.0 17:12:03 tabulate==0.9.0 17:12:03 toml==0.10.2 17:12:03 tomlkit==0.13.2 17:12:03 tqdm==4.66.6 17:12:03 typing_extensions==4.12.2 17:12:03 tzdata==2024.2 17:12:03 urllib3==1.26.20 17:12:03 virtualenv==20.27.1 17:12:03 wcwidth==0.2.13 17:12:03 websocket-client==1.8.0 17:12:03 wrapt==1.16.0 17:12:03 xdg==6.0.0 17:12:03 xmltodict==0.14.2 17:12:03 yq==3.4.3 17:12:03 [EnvInject] - Injecting environment variables from a build step. 17:12:03 [EnvInject] - Injecting as environment variables the properties content 17:12:03 PYTHON=python3 17:12:03 17:12:03 [EnvInject] - Variables injected successfully. 
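Note: the checkout recorded earlier (refs/changes/49/114149/1, commit d8582bd5 "Fix grid spectrum computation (reversed logic)") can be reproduced locally with plain git. A sketch, assuming read access to the same mirror the job uses:
  $ git init transportpce && cd transportpce
  $ git fetch git://devvexx.opendaylight.org/mirror/transportpce refs/changes/49/114149/1
  $ git checkout -f FETCH_HEAD    # d8582bd561e2cb9223a3ebcd6b2c6b160d82da7f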
17:12:03 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins7388140519194448358.sh 17:12:03 ---> tox-install.sh 17:12:03 + source /home/jenkins/lf-env.sh 17:12:03 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 17:12:03 ++ mktemp -d /tmp/venv-XXXX 17:12:03 + lf_venv=/tmp/venv-xPML 17:12:03 + local venv_file=/tmp/.os_lf_venv 17:12:03 + local python=python3 17:12:03 + local options 17:12:03 + local set_path=true 17:12:03 + local install_args= 17:12:03 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 17:12:03 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 17:12:03 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 17:12:03 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 17:12:03 + true 17:12:03 + case $1 in 17:12:03 + venv_file=/tmp/.toxenv 17:12:03 + shift 2 17:12:03 + true 17:12:03 + case $1 in 17:12:03 + shift 17:12:03 + break 17:12:03 + case $python in 17:12:03 + local pkg_list= 17:12:03 + [[ -d /opt/pyenv ]] 17:12:03 + echo 'Setup pyenv:' 17:12:03 Setup pyenv: 17:12:03 + export PYENV_ROOT=/opt/pyenv 17:12:03 + PYENV_ROOT=/opt/pyenv 17:12:03 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:03 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:03 + pyenv versions 17:12:03 system 17:12:03 3.8.13 17:12:03 3.9.13 17:12:03 3.10.13 17:12:03 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 17:12:03 + command -v pyenv 17:12:03 ++ pyenv init - --no-rehash 17:12:03 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 17:12:03 for i in ${!paths[@]}; do 17:12:03 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 17:12:03 fi; done; 17:12:03 echo "${paths[*]}"'\'')" 17:12:03 export PATH="/opt/pyenv/shims:${PATH}" 17:12:03 export PYENV_SHELL=bash 17:12:03 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 17:12:03 pyenv() { 17:12:03 local command 17:12:03 command="${1:-}" 17:12:03 if [ "$#" -gt 0 ]; then 17:12:03 shift 17:12:03 fi 17:12:03 17:12:03 case "$command" in 17:12:03 rehash|shell) 17:12:03 eval "$(pyenv "sh-$command" "$@")" 17:12:03 ;; 17:12:03 *) 17:12:03 command pyenv "$command" "$@" 17:12:03 ;; 17:12:03 esac 17:12:03 }' 17:12:03 +++ bash --norc -ec 'IFS=:; paths=($PATH); 17:12:03 for i in ${!paths[@]}; do 17:12:03 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 17:12:03 fi; done; 17:12:03 echo "${paths[*]}"' 17:12:03 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:03 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:03 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:03 ++ export PYENV_SHELL=bash 17:12:03 ++ PYENV_SHELL=bash 17:12:03 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 17:12:03 +++ complete -F _pyenv pyenv 17:12:03 ++ lf-pyver python3 17:12:03 ++ 
local py_version_xy=python3 17:12:03 ++ local py_version_xyz= 17:12:03 ++ sed 's/^[ *]* //' 17:12:03 ++ pyenv versions 17:12:03 ++ local command 17:12:03 ++ command=versions 17:12:03 ++ '[' 1 -gt 0 ']' 17:12:03 ++ shift 17:12:03 ++ case "$command" in 17:12:03 ++ command pyenv versions 17:12:03 ++ pyenv versions 17:12:03 ++ grep -E '^[0-9.]*[0-9]$' 17:12:03 ++ awk '{ print $1 }' 17:12:03 ++ [[ ! -s /tmp/.pyenv_versions ]] 17:12:03 +++ grep '^3' /tmp/.pyenv_versions 17:12:03 +++ sort -V 17:12:03 +++ tail -n 1 17:12:03 ++ py_version_xyz=3.11.7 17:12:03 ++ [[ -z 3.11.7 ]] 17:12:03 ++ echo 3.11.7 17:12:03 ++ return 0 17:12:03 + pyenv local 3.11.7 17:12:03 + local command 17:12:03 + command=local 17:12:03 + '[' 2 -gt 0 ']' 17:12:03 + shift 17:12:03 + case "$command" in 17:12:03 + command pyenv local 3.11.7 17:12:03 + pyenv local 3.11.7 17:12:03 + for arg in "$@" 17:12:03 + case $arg in 17:12:03 + pkg_list+='tox ' 17:12:03 + for arg in "$@" 17:12:03 + case $arg in 17:12:03 + pkg_list+='virtualenv ' 17:12:03 + for arg in "$@" 17:12:03 + case $arg in 17:12:03 + pkg_list+='urllib3~=1.26.15 ' 17:12:03 + [[ -f /tmp/.toxenv ]] 17:12:03 + [[ ! -f /tmp/.toxenv ]] 17:12:03 + [[ -n '' ]] 17:12:03 + python3 -m venv /tmp/venv-xPML 17:12:07 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-xPML' 17:12:07 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-xPML 17:12:07 + echo /tmp/venv-xPML 17:12:07 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' 17:12:07 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv 17:12:07 + /tmp/venv-xPML/bin/python3 -m pip install --upgrade --quiet pip virtualenv 17:12:11 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 17:12:11 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 17:12:11 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 17:12:11 + /tmp/venv-xPML/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 17:12:13 + type python3 17:12:13 + true 17:12:13 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-xPML/bin to PATH' 17:12:13 lf-activate-venv(): INFO: Adding /tmp/venv-xPML/bin to PATH 17:12:13 + PATH=/tmp/venv-xPML/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:13 + return 0 17:12:13 + python3 --version 17:12:13 Python 3.11.7 17:12:13 + python3 -m pip --version 17:12:14 pip 24.3.1 from /tmp/venv-xPML/lib/python3.11/site-packages/pip (python 3.11) 17:12:14 + python3 -m pip freeze 17:12:14 cachetools==5.5.0 17:12:14 chardet==5.2.0 17:12:14 colorama==0.4.6 17:12:14 distlib==0.3.9 17:12:14 filelock==3.16.1 17:12:14 packaging==24.1 17:12:14 platformdirs==4.3.6 17:12:14 pluggy==1.5.0 17:12:14 pyproject-api==1.8.0 17:12:14 tox==4.23.2 17:12:14 urllib3==1.26.20 17:12:14 virtualenv==20.27.1 17:12:14 [transportpce-tox-verify-scandium] $ /bin/sh -xe /tmp/jenkins13324403177830293337.sh 17:12:14 [EnvInject] - Injecting environment variables from a build step. 17:12:14 [EnvInject] - Injecting as environment variables the properties content 17:12:14 PARALLEL=True 17:12:14 17:12:14 [EnvInject] - Variables injected successfully. 
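Note: the tox bootstrap above amounts to a throwaway virtualenv holding tox, virtualenv and a pinned urllib3. A minimal equivalent, assuming a local python3:
  $ python3 -m venv /tmp/toxenv
  $ /tmp/toxenv/bin/python3 -m pip install --upgrade --quiet pip virtualenv
  $ /tmp/toxenv/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv 'urllib3~=1.26.15'
  $ /tmp/toxenv/bin/tox --version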
17:12:14 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins14369315487409171576.sh 17:12:14 ---> tox-run.sh 17:12:14 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:14 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/tox 17:12:14 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/docs 17:12:14 + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/tox 17:12:14 + cd /w/workspace/transportpce-tox-verify-scandium/. 17:12:14 + source /home/jenkins/lf-env.sh 17:12:14 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 17:12:14 ++ mktemp -d /tmp/venv-XXXX 17:12:14 + lf_venv=/tmp/venv-Wn6w 17:12:14 + local venv_file=/tmp/.os_lf_venv 17:12:14 + local python=python3 17:12:14 + local options 17:12:14 + local set_path=true 17:12:14 + local install_args= 17:12:14 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 17:12:14 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 17:12:14 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 17:12:14 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 17:12:14 + true 17:12:14 + case $1 in 17:12:14 + venv_file=/tmp/.toxenv 17:12:14 + shift 2 17:12:14 + true 17:12:14 + case $1 in 17:12:14 + shift 17:12:14 + break 17:12:14 + case $python in 17:12:14 + local pkg_list= 17:12:14 + [[ -d /opt/pyenv ]] 17:12:14 + echo 'Setup pyenv:' 17:12:14 Setup pyenv: 17:12:14 + export PYENV_ROOT=/opt/pyenv 17:12:14 + PYENV_ROOT=/opt/pyenv 17:12:14 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:14 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:14 + pyenv versions 17:12:14 system 17:12:14 3.8.13 17:12:14 3.9.13 17:12:14 3.10.13 17:12:14 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 17:12:14 + command -v pyenv 17:12:14 ++ pyenv init - --no-rehash 17:12:14 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 17:12:14 for i in ${!paths[@]}; do 17:12:14 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 17:12:14 fi; done; 17:12:14 echo "${paths[*]}"'\'')" 17:12:14 export PATH="/opt/pyenv/shims:${PATH}" 17:12:14 export PYENV_SHELL=bash 17:12:14 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 17:12:14 pyenv() { 17:12:14 local command 17:12:14 command="${1:-}" 17:12:14 if [ "$#" -gt 0 ]; then 17:12:14 shift 17:12:14 fi 17:12:14 17:12:14 case "$command" in 17:12:14 rehash|shell) 17:12:14 eval "$(pyenv "sh-$command" "$@")" 17:12:14 ;; 17:12:14 *) 17:12:14 command pyenv "$command" "$@" 17:12:14 ;; 17:12:14 esac 17:12:14 }' 17:12:14 +++ bash --norc -ec 'IFS=:; paths=($PATH); 17:12:14 for i in ${!paths[@]}; do 17:12:14 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 17:12:14 fi; done; 17:12:14 echo "${paths[*]}"' 17:12:14 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 
17:12:14 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:14 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:14 ++ export PYENV_SHELL=bash 17:12:14 ++ PYENV_SHELL=bash 17:12:14 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 17:12:14 +++ complete -F _pyenv pyenv 17:12:14 ++ lf-pyver python3 17:12:14 ++ local py_version_xy=python3 17:12:14 ++ local py_version_xyz= 17:12:14 ++ pyenv versions 17:12:14 ++ local command 17:12:14 ++ command=versions 17:12:14 ++ sed 's/^[ *]* //' 17:12:14 ++ '[' 1 -gt 0 ']' 17:12:14 ++ awk '{ print $1 }' 17:12:14 ++ shift 17:12:14 ++ case "$command" in 17:12:14 ++ command pyenv versions 17:12:14 ++ grep -E '^[0-9.]*[0-9]$' 17:12:14 ++ pyenv versions 17:12:14 ++ [[ ! -s /tmp/.pyenv_versions ]] 17:12:14 +++ grep '^3' /tmp/.pyenv_versions 17:12:14 +++ tail -n 1 17:12:14 +++ sort -V 17:12:14 ++ py_version_xyz=3.11.7 17:12:14 ++ [[ -z 3.11.7 ]] 17:12:14 ++ echo 3.11.7 17:12:14 ++ return 0 17:12:14 + pyenv local 3.11.7 17:12:14 + local command 17:12:14 + command=local 17:12:14 + '[' 2 -gt 0 ']' 17:12:14 + shift 17:12:14 + case "$command" in 17:12:14 + command pyenv local 3.11.7 17:12:14 + pyenv local 3.11.7 17:12:14 + for arg in "$@" 17:12:14 + case $arg in 17:12:14 + pkg_list+='tox ' 17:12:14 + for arg in "$@" 17:12:14 + case $arg in 17:12:14 + pkg_list+='virtualenv ' 17:12:14 + for arg in "$@" 17:12:14 + case $arg in 17:12:14 + pkg_list+='urllib3~=1.26.15 ' 17:12:14 + [[ -f /tmp/.toxenv ]] 17:12:14 ++ cat /tmp/.toxenv 17:12:14 + lf_venv=/tmp/venv-xPML 17:12:14 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xPML from' file:/tmp/.toxenv 17:12:14 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xPML from file:/tmp/.toxenv 17:12:14 + /tmp/venv-xPML/bin/python3 -m pip install --upgrade --quiet pip virtualenv 17:12:15 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 17:12:15 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 17:12:15 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 17:12:15 + /tmp/venv-xPML/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 17:12:16 + type python3 17:12:16 + true 17:12:16 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-xPML/bin to PATH' 17:12:16 lf-activate-venv(): INFO: Adding /tmp/venv-xPML/bin to PATH 17:12:16 + PATH=/tmp/venv-xPML/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:16 + return 0 17:12:16 + [[ -d /opt/pyenv ]] 17:12:16 + echo '---> Setting up pyenv' 17:12:16 ---> Setting up pyenv 17:12:16 + export PYENV_ROOT=/opt/pyenv 17:12:16 + PYENV_ROOT=/opt/pyenv 17:12:16 + export PATH=/opt/pyenv/bin:/tmp/venv-xPML/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:16 + PATH=/opt/pyenv/bin:/tmp/venv-xPML/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 17:12:16 ++ pwd 17:12:16 + PYTHONPATH=/w/workspace/transportpce-tox-verify-scandium 17:12:16 + 
export PYTHONPATH 17:12:16 + export TOX_TESTENV_PASSENV=PYTHONPATH 17:12:16 + TOX_TESTENV_PASSENV=PYTHONPATH 17:12:16 + tox --version 17:12:16 4.23.2 from /tmp/venv-xPML/lib/python3.11/site-packages/tox/__init__.py 17:12:16 + PARALLEL=True 17:12:16 + TOX_OPTIONS_LIST= 17:12:16 + [[ -n '' ]] 17:12:16 + case ${PARALLEL,,} in 17:12:16 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' 17:12:16 + tox --parallel auto --parallel-live 17:12:16 + tee -a /w/workspace/transportpce-tox-verify-scandium/archives/tox/tox.log 17:12:18 docs: install_deps> python -I -m pip install -r docs/requirements.txt 17:12:18 checkbashisms: freeze> python -m pip freeze --all 17:12:18 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:12:18 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt 17:12:19 checkbashisms: pip==24.3.1,setuptools==75.2.0,wheel==0.44.0 17:12:19 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh 17:12:19 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' 17:12:19 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + 17:12:19 script ./reflectwarn.sh does not appear to have a #! interpreter line; 17:12:19 you may get strange results 17:12:20 checkbashisms: OK ✔ in 2.9 seconds 17:12:20 pre-commit: install_deps> python -I -m pip install pre-commit 17:12:23 pre-commit: freeze> python -m pip freeze --all 17:12:23 pre-commit: cfgv==3.4.0,distlib==0.3.9,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.3.1,platformdirs==4.3.6,pre_commit==4.0.1,PyYAML==6.0.2,setuptools==75.2.0,virtualenv==20.27.1,wheel==0.44.0 17:12:23 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh 17:12:23 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' 17:12:23 /usr/bin/cpan 17:12:23 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run --all-files --show-diff-on-failure 17:12:23 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 17:12:23 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 17:12:23 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. 
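Note: the whole verify is driven by the single parallel tox invocation above, run from the repository root. The same command works locally, and a single environment can be selected with -e (environment names as seen in this run); a sketch, assuming the repository's own tox.ini:
  $ tox --parallel auto --parallel-live | tee tox.log    # the job tees to archives/tox/tox.log
  $ tox -e checkbashisms                                 # or docs, pylint, buildcontroller, tests_tapi, ...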
17:12:23 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo. 17:12:23 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. 17:12:24 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. 17:12:24 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. 17:12:25 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. 17:12:25 [INFO] Initializing environment for https://github.com/perltidy/perltidy. 17:12:25 buildcontroller: freeze> python -m pip freeze --all 17:12:25 buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:12:25 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_controller.sh 17:12:25 + update-java-alternatives -l 17:12:25 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 17:12:25 java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64 17:12:25 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 17:12:25 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 17:12:25 java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64 17:12:25 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 17:12:26 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 17:12:26 [INFO] Once installed this environment will be reused. 17:12:26 [INFO] This may take a few minutes... 
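Note: build_controller.sh switches the default JDK to 21 before building, using Ubuntu's update-java-alternatives as traced above (requires sudo):
  $ update-java-alternatives -l                                   # list installed JDKs
  $ sudo update-java-alternatives -s java-1.21.0-openjdk-amd64    # select JDK 21
  $ java -version                                                 # the script then checks the major version is >= 21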
17:12:26 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; 17:12:26 + java -version 17:12:27 + JAVA_VER=21 17:12:27 21 17:12:27 + echo 21 17:12:27 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; 17:12:27 + javac -version 17:12:27 + JAVAC_VER=21 17:12:27 + echo 21 17:12:27 21 17:12:27 ok, java is 21 or newer 17:12:27 + [ 21 -ge 21 ] 17:12:27 + [ 21 -ge 21 ] 17:12:27 + echo ok, java is 21 or newer 17:12:27 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp 17:12:28 2024-10-29 17:12:28 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1] 17:12:28 + sudo mkdir -p /opt 17:12:28 + sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt 17:12:28 + sudo ln -s /opt/apache-maven-3.9.8 /opt/maven 17:12:28 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn 17:12:28 + mvn --version 17:12:28 Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256) 17:12:28 Maven home: /opt/maven 17:12:28 Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 17:12:28 Default locale: en, platform encoding: UTF-8 17:12:28 OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix" 17:12:29 NOTE: Picked up JDK_JAVA_OPTIONS: 17:12:29 --add-opens=java.base/java.io=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.lang=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.net=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.nio=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.nio.file=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.util=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.util.jar=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.util.stream=ALL-UNNAMED 17:12:29 --add-opens=java.base/java.util.zip=ALL-UNNAMED 17:12:29 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 17:12:29 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 17:12:29 -Xlog:disable 17:12:31 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. 17:12:31 [INFO] Once installed this environment will be reused. 17:12:31 [INFO] This may take a few minutes... 17:12:39 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. 17:12:39 [INFO] Once installed this environment will be reused. 17:12:39 [INFO] This may take a few minutes... 17:12:43 [INFO] Installing environment for https://github.com/perltidy/perltidy. 17:12:43 [INFO] Once installed this environment will be reused. 17:12:43 [INFO] This may take a few minutes... 
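Note: the same script then installs Maven 3.9.8 under /opt; condensed from the trace above (the dlcdn.apache.org URL is the one the job uses and may move once the release is archived):
  $ wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp
  $ sudo mkdir -p /opt
  $ sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt
  $ sudo ln -s /opt/apache-maven-3.9.8 /opt/maven
  $ sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
  $ mvn --version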
17:12:48 docs: freeze> python -m pip freeze --all 17:12:48 docs-linkcheck: freeze> python -m pip freeze --all 17:12:49 docs: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.3.1,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.2.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 17:12:49 docs: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/html 17:12:49 docs-linkcheck: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.3.1,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.2.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 17:12:49 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/linkcheck 17:12:51 docs: OK ✔ in 34.59 seconds 17:12:51 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' 17:12:54 trim trailing whitespace.................................................Passed 17:12:54 Tabs remover.............................................................Passed 17:12:54 autopep8.................................................................docs-linkcheck: OK ✔ in 35.92 seconds 17:12:56 pylint: freeze> python -m pip freeze --all 17:12:57 pylint: 
astroid==3.3.5,dill==0.3.9,isort==5.13.2,mccabe==0.7.0,pip==24.3.1,platformdirs==4.3.6,pylint==3.3.1,setuptools==75.2.0,tomlkit==0.13.2,wheel==0.44.0 17:12:57 pylint: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 17:12:58 Passed 17:12:58 perltidy.................................................................Passed 17:12:59 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run gitlint-ci --hook-stage manual 17:12:59 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 17:12:59 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 17:12:59 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 17:12:59 [INFO] Once installed this environment will be reused. 17:12:59 [INFO] This may take a few minutes... 17:13:17 17:13:17 ------------------------------------ 17:13:17 Your code has been rated at 10.00/10 17:13:17 17:13:17 gitlint..................................................................Passed 17:14:08 pre-commit: OK ✔ in 57.48 seconds 17:14:08 pylint: OK ✔ in 26.98 seconds 17:14:08 buildcontroller: OK ✔ in 1 minute 49.98 seconds 17:14:08 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:14:08 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:14:08 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:14:08 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:14:15 build_karaf_tests221: freeze> python -m pip freeze --all 17:14:15 sims: freeze> python -m pip freeze --all 17:14:15 build_karaf_tests121: freeze> python -m pip freeze --all 17:14:15 build_karaf_tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:14:15 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 17:14:15 NOTE: Picked up JDK_JAVA_OPTIONS: 17:14:15 --add-opens=java.base/java.io=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.lang=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 
17:14:15 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.net=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.nio=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.nio.file=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util.jar=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util.stream=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util.zip=ALL-UNNAMED 17:14:15 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 17:14:15 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 17:14:15 -Xlog:disable 17:14:15 sims: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:14:15 sims: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./install_lightynode.sh 17:14:15 Using lighynode version 20.1.0.2 17:14:15 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory 17:14:15 build_karaf_tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:14:15 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 17:14:15 NOTE: Picked up JDK_JAVA_OPTIONS: 17:14:15 --add-opens=java.base/java.io=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.lang=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.net=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.nio=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.nio.file=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util.jar=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util.stream=ALL-UNNAMED 17:14:15 --add-opens=java.base/java.util.zip=ALL-UNNAMED 17:14:15 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 17:14:15 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 17:14:15 -Xlog:disable 17:14:18 sims: OK ✔ in 11.37 seconds 17:14:18 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:14:33 build_karaf_tests71: freeze> python -m pip freeze --all 17:14:33 build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:14:33 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 17:14:34 NOTE: Picked up JDK_JAVA_OPTIONS: 17:14:34 --add-opens=java.base/java.io=ALL-UNNAMED 17:14:34 
--add-opens=java.base/java.lang=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.net=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.nio=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.nio.file=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.util=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.util.jar=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.util.stream=ALL-UNNAMED 17:14:34 --add-opens=java.base/java.util.zip=ALL-UNNAMED 17:14:34 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 17:14:34 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 17:14:34 -Xlog:disable 17:15:06 build_karaf_tests121: OK ✔ in 58.54 seconds 17:15:06 build_karaf_tests221: OK ✔ in 58.55 seconds 17:15:06 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:15:06 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:15:16 build_karaf_tests_hybrid: freeze> python -m pip freeze --all 17:15:16 tests_tapi: freeze> python -m pip freeze --all 17:15:16 build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:15:16 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 17:15:16 NOTE: Picked up JDK_JAVA_OPTIONS: 17:15:16 --add-opens=java.base/java.io=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.lang=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.net=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.nio=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.nio.file=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.util=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.util.jar=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.util.stream=ALL-UNNAMED 17:15:16 --add-opens=java.base/java.util.zip=ALL-UNNAMED 17:15:16 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 17:15:16 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 17:15:16 -Xlog:disable 17:15:16 tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:15:16 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi 17:15:16 using environment variables from ./karaf221.env 17:15:16 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py 17:15:35 build_karaf_tests71: OK ✔ in 1 minute 2.18 seconds 17:15:35 testsPCE: freeze> python -m pip freeze --all 17:15:35 testsPCE: 
bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.54.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==3.0.2,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==11.0.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.2.0,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0 17:15:35 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh pce 17:15:35 pytest -q transportpce_tests/pce/test01_pce.py 17:16:36 .................................................. [100%] 17:17:39 20 passed in 124.31s (0:02:04) 17:17:40 pytest -q transportpce_tests/pce/test02_pce_400G.py 17:17:50 ................ [100%] 17:18:18 9 passed in 38.25s 17:18:18 pytest -q transportpce_tests/pce/test03_gnpy.py 17:18:34 ..................... [100%] 17:18:56 8 passed in 37.46s 17:18:56 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py 17:18:59 [100%] 17:18:59 50 passed in 222.86s (0:03:42) 17:18:59 pytest -q transportpce_tests/tapi/test02_full_topology.py 17:19:35 ... [100%] 17:19:41 3 passed in 45.05s 17:19:41 build_karaf_tests_hybrid: OK ✔ in 51.75 seconds 17:19:41 testsPCE: OK ✔ in 5 minutes 34.53 seconds 17:19:41 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:19:49 tests121: freeze> python -m pip freeze --all 17:19:49 tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:19:49 tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1 17:19:49 using environment variables from ./karaf121.env 17:19:49 pytest -q transportpce_tests/1.2.1/test01_portmapping.py 17:20:15 ...........FF....................FF [100%] 17:23:44 =================================== FAILURES =================================== 17:23:44 _____________ TransportPCEtesting.test_12_check_openroadm_topology _____________ 17:23:44 17:23:44 self = 17:23:44 17:23:44 def test_12_check_openroadm_topology(self): 17:23:44 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 17:23:44 self.assertEqual(response['status_code'], requests.codes.ok) 17:23:44 > self.assertEqual(len(response['network'][0]['node']), 13, 'There should be 13 openroadm nodes') 17:23:44 E AssertionError: 17 != 13 : There should be 13 openroadm nodes 17:23:44 17:23:44 transportpce_tests/tapi/test02_full_topology.py:272: AssertionError 17:23:44 ____________ TransportPCEtesting.test_13_get_tapi_topology_details _____________ 17:23:44 17:23:44 self = 17:23:44 17:23:44 def test_13_get_tapi_topology_details(self): 17:23:44 self.tapi_topo["topology-id"] = test_utils.T0_FULL_MULTILAYER_TOPO_UUID 17:23:44 response = 
test_utils.transportpce_api_rpc_request( 17:23:44 'tapi-topology', 'get-topology-details', self.tapi_topo) 17:23:44 time.sleep(2) 17:23:44 self.assertEqual(response['status_code'], requests.codes.ok) 17:23:44 > self.assertEqual(len(response['output']['topology']['node']), 8, 'There should be 8 TAPI nodes') 17:23:44 E AssertionError: 9 != 8 : There should be 8 TAPI nodes 17:23:44 17:23:44 transportpce_tests/tapi/test02_full_topology.py:282: AssertionError 17:23:44 =========================== short test summary info ============================ 17:23:44 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_12_check_openroadm_topology 17:23:44 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_13_get_tapi_topology_details 17:23:44 2 failed, 28 passed in 284.76s (0:04:44) 17:23:44 tests_tapi: exit 1 (508.13 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi pid=31324 17:23:44 tests_tapi: FAIL ✖ in 8 minutes 39.04 seconds 17:23:44 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:23:45 FFFFFtests71: freeze> python -m pip freeze --all 17:23:51 tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:23:51 tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 7.1 17:23:51 using environment variables from ./karaf71.env 17:23:51 pytest -q transportpce_tests/7.1/test01_portmapping.py 17:23:51 FFFFFFFFFFF [100%] 17:24:05 =================================== FAILURES =================================== 17:24:05 _____ TransportPCEPortMappingTesting.test_04_rdm_portmapping_DEG1_TTP_TXRX _____ 17:24:05 17:24:05 self = 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX' 17:24:05 body = None 17:24:05 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:05 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 redirect = False, assert_same_host = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:05 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:05 decode_content = False, response_kw = {} 17:24:05 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX', query=None, fragment=None) 17:24:05 destination_scheme = None, conn = None, release_this_conn = True 17:24:05 http_tunnel_required = False, err = None, clean_exit = False 17:24:05 17:24:05 def urlopen( # type: ignore[override] 17:24:05 self, 17:24:05 method: str, 17:24:05 url: str, 17:24:05 body: _TYPE_BODY | None = None, 17:24:05 headers: typing.Mapping[str, str] | None = None, 17:24:05 retries: Retry | bool | int | None = None, 17:24:05 redirect: bool = True, 17:24:05 assert_same_host: bool = True, 17:24:05 
timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 pool_timeout: int | None = None, 17:24:05 release_conn: bool | None = None, 17:24:05 chunked: bool = False, 17:24:05 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:05 preload_content: bool = True, 17:24:05 decode_content: bool = True, 17:24:05 **response_kw: typing.Any, 17:24:05 ) -> BaseHTTPResponse: 17:24:05 """ 17:24:05 Get a connection from the pool and perform an HTTP request. This is the 17:24:05 lowest level call for making a request, so you'll need to specify all 17:24:05 the raw details. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 More commonly, it's appropriate to use a convenience method 17:24:05 such as :meth:`request`. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 `release_conn` will only behave as expected if 17:24:05 `preload_content=False` because we want to make 17:24:05 `preload_content=False` the default behaviour someday soon without 17:24:05 breaking backwards compatibility. 17:24:05 17:24:05 :param method: 17:24:05 HTTP request method (such as GET, POST, PUT, etc.) 17:24:05 17:24:05 :param url: 17:24:05 The URL to perform the request on. 17:24:05 17:24:05 :param body: 17:24:05 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:05 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:05 17:24:05 :param headers: 17:24:05 Dictionary of custom headers to send, such as User-Agent, 17:24:05 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:05 these headers completely replace any pool-specific headers. 17:24:05 17:24:05 :param retries: 17:24:05 Configure the number of retries to allow before raising a 17:24:05 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:05 17:24:05 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:05 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:05 over different types of retries. 17:24:05 Pass an integer number to retry connection errors that many times, 17:24:05 but no other types of errors. Pass zero to never retry. 17:24:05 17:24:05 If ``False``, then retries are disabled and any exception is raised 17:24:05 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:05 the redirect response will be returned. 17:24:05 17:24:05 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:05 17:24:05 :param redirect: 17:24:05 If True, automatically handle redirects (status codes 301, 302, 17:24:05 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:05 will disable redirect, too. 17:24:05 17:24:05 :param assert_same_host: 17:24:05 If ``True``, will make sure that the host of the pool requests is 17:24:05 consistent else will raise HostChangedError. When ``False``, you can 17:24:05 use the pool on an HTTP proxy and request foreign hosts. 17:24:05 17:24:05 :param timeout: 17:24:05 If specified, overrides the default timeout for this one 17:24:05 request. It may be a float (in seconds) or an instance of 17:24:05 :class:`urllib3.util.Timeout`. 17:24:05 17:24:05 :param pool_timeout: 17:24:05 If set and the pool is set to block=True, then this method will 17:24:05 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:05 connection is available within the time period. 17:24:05 17:24:05 :param bool preload_content: 17:24:05 If True, the response's body will be preloaded into memory. 
17:24:05 17:24:05 :param bool decode_content: 17:24:05 If True, will attempt to decode the body based on the 17:24:05 'content-encoding' header. 17:24:05 17:24:05 :param release_conn: 17:24:05 If False, then the urlopen call will not release the connection 17:24:05 back into the pool once a response is received (but will release if 17:24:05 you read the entire contents of the response such as when 17:24:05 `preload_content=True`). This is useful if you're not preloading 17:24:05 the response's content immediately. You will need to call 17:24:05 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:05 back into the pool. If None, it takes the value of ``preload_content`` 17:24:05 which defaults to ``True``. 17:24:05 17:24:05 :param bool chunked: 17:24:05 If True, urllib3 will send the body using chunked transfer 17:24:05 encoding. Otherwise, urllib3 will send the body using the standard 17:24:05 content-length form. Defaults to False. 17:24:05 17:24:05 :param int body_pos: 17:24:05 Position to seek to in file-like body in the event of a retry or 17:24:05 redirect. Typically this won't need to be set because urllib3 will 17:24:05 auto-populate the value when needed. 17:24:05 """ 17:24:05 parsed_url = parse_url(url) 17:24:05 destination_scheme = parsed_url.scheme 17:24:05 17:24:05 if headers is None: 17:24:05 headers = self.headers 17:24:05 17:24:05 if not isinstance(retries, Retry): 17:24:05 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:05 17:24:05 if release_conn is None: 17:24:05 release_conn = preload_content 17:24:05 17:24:05 # Check host 17:24:05 if assert_same_host and not self.is_same_host(url): 17:24:05 raise HostChangedError(self, url, retries) 17:24:05 17:24:05 # Ensure that the URL we're connecting to is properly encoded 17:24:05 if url.startswith("/"): 17:24:05 url = to_str(_encode_target(url)) 17:24:05 else: 17:24:05 url = to_str(parsed_url.url) 17:24:05 17:24:05 conn = None 17:24:05 17:24:05 # Track whether `conn` needs to be released before 17:24:05 # returning/raising/recursing. Update this variable if necessary, and 17:24:05 # leave `release_conn` constant throughout the function. That way, if 17:24:05 # the function recurses, the original value of `release_conn` will be 17:24:05 # passed down into the recursive call, and its value will be respected. 17:24:05 # 17:24:05 # See issue #651 [1] for details. 17:24:05 # 17:24:05 # [1] 17:24:05 release_this_conn = release_conn 17:24:05 17:24:05 http_tunnel_required = connection_requires_http_tunnel( 17:24:05 self.proxy, self.proxy_config, destination_scheme 17:24:05 ) 17:24:05 17:24:05 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:05 # have to copy the headers dict so we can safely change it without those 17:24:05 # changes being reflected in anyone else's copy. 17:24:05 if not http_tunnel_required: 17:24:05 headers = headers.copy() # type: ignore[attr-defined] 17:24:05 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:05 17:24:05 # Must keep the exception bound to a separate variable or else Python 3 17:24:05 # complains about UnboundLocalError. 17:24:05 err = None 17:24:05 17:24:05 # Keep track of whether we cleanly exited the except block. This 17:24:05 # ensures we do proper cleanup in finally. 17:24:05 clean_exit = False 17:24:05 17:24:05 # Rewind body position, if needed. Record current position 17:24:05 # for future rewinds in the event of a redirect/retry. 
17:24:05 body_pos = set_file_position(body, body_pos) 17:24:05 17:24:05 try: 17:24:05 # Request a connection from the queue. 17:24:05 timeout_obj = self._get_timeout(timeout) 17:24:05 conn = self._get_conn(timeout=pool_timeout) 17:24:05 17:24:05 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:05 17:24:05 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:05 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:05 try: 17:24:05 self._prepare_proxy(conn) 17:24:05 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:05 self._raise_timeout( 17:24:05 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:05 ) 17:24:05 raise 17:24:05 17:24:05 # If we're going to release the connection in ``finally:``, then 17:24:05 # the response doesn't need to know about the connection. Otherwise 17:24:05 # it will also try to release it and we'll have a double-release 17:24:05 # mess. 17:24:05 response_conn = conn if not release_conn else None 17:24:05 17:24:05 # Make the request on the HTTPConnection object 17:24:05 > response = self._make_request( 17:24:05 conn, 17:24:05 method, 17:24:05 url, 17:24:05 timeout=timeout_obj, 17:24:05 body=body, 17:24:05 headers=headers, 17:24:05 chunked=chunked, 17:24:05 retries=retries, 17:24:05 response_conn=response_conn, 17:24:05 preload_content=preload_content, 17:24:05 decode_content=decode_content, 17:24:05 **response_kw, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:536: in _make_request 17:24:05 response = conn.getresponse() 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:507: in getresponse 17:24:05 httplib_response = super().getresponse() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1386: in getresponse 17:24:05 response.begin() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:325: in begin 17:24:05 version, status, reason = self._read_status() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:286: in _read_status 17:24:05 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 b = 17:24:05 17:24:05 def readinto(self, b): 17:24:05 """Read up to len(b) bytes into the writable buffer *b* and return 17:24:05 the number of bytes read. If the socket is non-blocking and no bytes 17:24:05 are available, None is returned. 17:24:05 17:24:05 If *b* is non-empty, a 0 return value indicates that the connection 17:24:05 was shutdown at the other end. 
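The readinto() docstring quoted at the end of this frame distinguishes an orderly shutdown, which surfaces as a 0-byte read, from an abrupt reset like the ECONNRESET reported just below. A self-contained sketch of the 0-byte case using a local socket pair (nothing here comes from the test suite):

import socket

a, b = socket.socketpair()
b.close()                 # peer shuts down its end of the connection
buf = bytearray(16)
print(a.recv_into(buf))   # prints 0: orderly shutdown, not an exception
a.close()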
17:24:05 """ 17:24:05 self._checkClosed() 17:24:05 self._checkReadable() 17:24:05 if self._timeout_occurred: 17:24:05 raise OSError("cannot read from timed out object") 17:24:05 while True: 17:24:05 try: 17:24:05 > return self._sock.recv_into(b) 17:24:05 E ConnectionResetError: [Errno 104] Connection reset by peer 17:24:05 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/socket.py:706: ConnectionResetError 17:24:05 17:24:05 During handling of the above exception, another exception occurred: 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 
17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 > resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:05 retries = retries.increment( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:474: in increment 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/util.py:38: in reraise 17:24:05 raise value.with_traceback(tb) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: in urlopen 17:24:05 response = self._make_request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:536: in _make_request 17:24:05 response = conn.getresponse() 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:507: in getresponse 17:24:05 httplib_response = super().getresponse() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1386: in getresponse 17:24:05 response.begin() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:325: in begin 17:24:05 version, status, reason = self._read_status() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:286: in _read_status 17:24:05 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 b = 17:24:05 17:24:05 def readinto(self, b): 17:24:05 """Read up to len(b) bytes into the writable buffer *b* and return 17:24:05 the number of bytes read. If the socket is non-blocking and no bytes 17:24:05 are available, None is returned. 17:24:05 17:24:05 If *b* is non-empty, a 0 return value indicates that the connection 17:24:05 was shutdown at the other end. 
17:24:05 """ 17:24:05 self._checkClosed() 17:24:05 self._checkReadable() 17:24:05 if self._timeout_occurred: 17:24:05 raise OSError("cannot read from timed out object") 17:24:05 while True: 17:24:05 try: 17:24:05 > return self._sock.recv_into(b) 17:24:05 E urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) 17:24:05 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/socket.py:706: ProtocolError 17:24:05 17:24:05 During handling of the above exception, another exception occurred: 17:24:05 17:24:05 self = 17:24:05 17:24:05 def test_04_rdm_portmapping_DEG1_TTP_TXRX(self): 17:24:05 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "DEG1-TTP-TXRX") 17:24:05 17:24:05 transportpce_tests/1.2.1/test01_portmapping.py:72: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:05 response = get_request(target_url) 17:24:05 transportpce_tests/common/test_utils.py:116: in get_request 17:24:05 return requests.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:05 return session.request(method=method, url=url, **kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:05 resp = self.send(prep, **send_kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:05 r = adapter.send(request, **kwargs) 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 
17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 except (ProtocolError, OSError) as err: 17:24:05 > raise ConnectionError(err, request=request) 17:24:05 E requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:682: ConnectionError 17:24:05 ----------------------------- Captured stdout call ----------------------------- 17:24:05 execution of test_04_rdm_portmapping_DEG1_TTP_TXRX 17:24:05 _____ TransportPCEPortMappingTesting.test_05_rdm_portmapping_SRG1_PP7_TXRX _____ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 > sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:05 raise err 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:05 socket_options = [(6, 1, 1)] 17:24:05 17:24:05 def create_connection( 17:24:05 address: tuple[str, int], 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 source_address: tuple[str, int] | None = None, 17:24:05 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:05 ) -> socket.socket: 17:24:05 """Connect to *address* and return the socket object. 17:24:05 17:24:05 Convenience function. 
Connect to *address* (a 2-tuple ``(host, 17:24:05 port)``) and return the socket object. Passing the optional 17:24:05 *timeout* parameter will set the timeout on the socket instance 17:24:05 before attempting to connect. If no *timeout* is supplied, the 17:24:05 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:05 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:05 for the socket to bind as a source address before making the connection. 17:24:05 An host of '' or port 0 tells the OS to use the default. 17:24:05 """ 17:24:05 17:24:05 host, port = address 17:24:05 if host.startswith("["): 17:24:05 host = host.strip("[]") 17:24:05 err = None 17:24:05 17:24:05 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:05 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:05 # The original create_connection function always returns all records. 17:24:05 family = allowed_gai_family() 17:24:05 17:24:05 try: 17:24:05 host.encode("idna") 17:24:05 except UnicodeError: 17:24:05 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:05 17:24:05 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:05 af, socktype, proto, canonname, sa = res 17:24:05 sock = None 17:24:05 try: 17:24:05 sock = socket.socket(af, socktype, proto) 17:24:05 17:24:05 # If provided, set socket level options before connecting. 17:24:05 _set_socket_options(sock, socket_options) 17:24:05 17:24:05 if timeout is not _DEFAULT_TIMEOUT: 17:24:05 sock.settimeout(timeout) 17:24:05 if source_address: 17:24:05 sock.bind(source_address) 17:24:05 > sock.connect(sa) 17:24:05 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 17:24:05 body = None 17:24:05 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:05 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 redirect = False, assert_same_host = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:05 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:05 decode_content = False, response_kw = {} 17:24:05 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX', query=None, fragment=None) 17:24:05 destination_scheme = None, conn = None, release_this_conn = True 17:24:05 http_tunnel_required = False, err = None, clean_exit = False 17:24:05 17:24:05 def urlopen( # type: ignore[override] 17:24:05 self, 17:24:05 method: str, 17:24:05 url: str, 17:24:05 body: _TYPE_BODY | None = None, 17:24:05 headers: typing.Mapping[str, str] | None = None, 17:24:05 retries: Retry | bool | int | None = None, 17:24:05 redirect: bool = True, 17:24:05 assert_same_host: bool = True, 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 pool_timeout: int | None = None, 17:24:05 release_conn: bool | None = None, 17:24:05 
chunked: bool = False, 17:24:05 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:05 preload_content: bool = True, 17:24:05 decode_content: bool = True, 17:24:05 **response_kw: typing.Any, 17:24:05 ) -> BaseHTTPResponse: 17:24:05 """ 17:24:05 Get a connection from the pool and perform an HTTP request. This is the 17:24:05 lowest level call for making a request, so you'll need to specify all 17:24:05 the raw details. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 More commonly, it's appropriate to use a convenience method 17:24:05 such as :meth:`request`. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 `release_conn` will only behave as expected if 17:24:05 `preload_content=False` because we want to make 17:24:05 `preload_content=False` the default behaviour someday soon without 17:24:05 breaking backwards compatibility. 17:24:05 17:24:05 :param method: 17:24:05 HTTP request method (such as GET, POST, PUT, etc.) 17:24:05 17:24:05 :param url: 17:24:05 The URL to perform the request on. 17:24:05 17:24:05 :param body: 17:24:05 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:05 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:05 17:24:05 :param headers: 17:24:05 Dictionary of custom headers to send, such as User-Agent, 17:24:05 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:05 these headers completely replace any pool-specific headers. 17:24:05 17:24:05 :param retries: 17:24:05 Configure the number of retries to allow before raising a 17:24:05 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:05 17:24:05 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:05 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:05 over different types of retries. 17:24:05 Pass an integer number to retry connection errors that many times, 17:24:05 but no other types of errors. Pass zero to never retry. 17:24:05 17:24:05 If ``False``, then retries are disabled and any exception is raised 17:24:05 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:05 the redirect response will be returned. 17:24:05 17:24:05 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:05 17:24:05 :param redirect: 17:24:05 If True, automatically handle redirects (status codes 301, 302, 17:24:05 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:05 will disable redirect, too. 17:24:05 17:24:05 :param assert_same_host: 17:24:05 If ``True``, will make sure that the host of the pool requests is 17:24:05 consistent else will raise HostChangedError. When ``False``, you can 17:24:05 use the pool on an HTTP proxy and request foreign hosts. 17:24:05 17:24:05 :param timeout: 17:24:05 If specified, overrides the default timeout for this one 17:24:05 request. It may be a float (in seconds) or an instance of 17:24:05 :class:`urllib3.util.Timeout`. 17:24:05 17:24:05 :param pool_timeout: 17:24:05 If set and the pool is set to block=True, then this method will 17:24:05 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:05 connection is available within the time period. 17:24:05 17:24:05 :param bool preload_content: 17:24:05 If True, the response's body will be preloaded into memory. 17:24:05 17:24:05 :param bool decode_content: 17:24:05 If True, will attempt to decode the body based on the 17:24:05 'content-encoding' header. 
17:24:05 17:24:05 :param release_conn: 17:24:05 If False, then the urlopen call will not release the connection 17:24:05 back into the pool once a response is received (but will release if 17:24:05 you read the entire contents of the response such as when 17:24:05 `preload_content=True`). This is useful if you're not preloading 17:24:05 the response's content immediately. You will need to call 17:24:05 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:05 back into the pool. If None, it takes the value of ``preload_content`` 17:24:05 which defaults to ``True``. 17:24:05 17:24:05 :param bool chunked: 17:24:05 If True, urllib3 will send the body using chunked transfer 17:24:05 encoding. Otherwise, urllib3 will send the body using the standard 17:24:05 content-length form. Defaults to False. 17:24:05 17:24:05 :param int body_pos: 17:24:05 Position to seek to in file-like body in the event of a retry or 17:24:05 redirect. Typically this won't need to be set because urllib3 will 17:24:05 auto-populate the value when needed. 17:24:05 """ 17:24:05 parsed_url = parse_url(url) 17:24:05 destination_scheme = parsed_url.scheme 17:24:05 17:24:05 if headers is None: 17:24:05 headers = self.headers 17:24:05 17:24:05 if not isinstance(retries, Retry): 17:24:05 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:05 17:24:05 if release_conn is None: 17:24:05 release_conn = preload_content 17:24:05 17:24:05 # Check host 17:24:05 if assert_same_host and not self.is_same_host(url): 17:24:05 raise HostChangedError(self, url, retries) 17:24:05 17:24:05 # Ensure that the URL we're connecting to is properly encoded 17:24:05 if url.startswith("/"): 17:24:05 url = to_str(_encode_target(url)) 17:24:05 else: 17:24:05 url = to_str(parsed_url.url) 17:24:05 17:24:05 conn = None 17:24:05 17:24:05 # Track whether `conn` needs to be released before 17:24:05 # returning/raising/recursing. Update this variable if necessary, and 17:24:05 # leave `release_conn` constant throughout the function. That way, if 17:24:05 # the function recurses, the original value of `release_conn` will be 17:24:05 # passed down into the recursive call, and its value will be respected. 17:24:05 # 17:24:05 # See issue #651 [1] for details. 17:24:05 # 17:24:05 # [1] 17:24:05 release_this_conn = release_conn 17:24:05 17:24:05 http_tunnel_required = connection_requires_http_tunnel( 17:24:05 self.proxy, self.proxy_config, destination_scheme 17:24:05 ) 17:24:05 17:24:05 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:05 # have to copy the headers dict so we can safely change it without those 17:24:05 # changes being reflected in anyone else's copy. 17:24:05 if not http_tunnel_required: 17:24:05 headers = headers.copy() # type: ignore[attr-defined] 17:24:05 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:05 17:24:05 # Must keep the exception bound to a separate variable or else Python 3 17:24:05 # complains about UnboundLocalError. 17:24:05 err = None 17:24:05 17:24:05 # Keep track of whether we cleanly exited the except block. This 17:24:05 # ensures we do proper cleanup in finally. 17:24:05 clean_exit = False 17:24:05 17:24:05 # Rewind body position, if needed. Record current position 17:24:05 # for future rewinds in the event of a redirect/retry. 17:24:05 body_pos = set_file_position(body, body_pos) 17:24:05 17:24:05 try: 17:24:05 # Request a connection from the queue. 
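In the frame that follows, the pool converts the per-request timeout into a urllib3 Timeout object via self._get_timeout(). The Timeout(connect=10, read=10, total=None) value visible throughout this log can be built directly; a small sketch with the values taken from the log:

from urllib3.util import Timeout

timeout = Timeout(connect=10, read=10)   # total stays None, as in the log
print(timeout.connect_timeout)           # 10: applied to the socket before connect()
print(timeout.read_timeout)              # 10: applied while waiting for the response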
17:24:05 timeout_obj = self._get_timeout(timeout) 17:24:05 conn = self._get_conn(timeout=pool_timeout) 17:24:05 17:24:05 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:05 17:24:05 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:05 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:05 try: 17:24:05 self._prepare_proxy(conn) 17:24:05 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:05 self._raise_timeout( 17:24:05 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:05 ) 17:24:05 raise 17:24:05 17:24:05 # If we're going to release the connection in ``finally:``, then 17:24:05 # the response doesn't need to know about the connection. Otherwise 17:24:05 # it will also try to release it and we'll have a double-release 17:24:05 # mess. 17:24:05 response_conn = conn if not release_conn else None 17:24:05 17:24:05 # Make the request on the HTTPConnection object 17:24:05 > response = self._make_request( 17:24:05 conn, 17:24:05 method, 17:24:05 url, 17:24:05 timeout=timeout_obj, 17:24:05 body=body, 17:24:05 headers=headers, 17:24:05 chunked=chunked, 17:24:05 retries=retries, 17:24:05 response_conn=response_conn, 17:24:05 preload_content=preload_content, 17:24:05 decode_content=decode_content, 17:24:05 **response_kw, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:05 conn.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:05 self.endheaders() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:05 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:05 self.send(msg) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:05 self.connect() 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:05 self.sock = self._new_conn() 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 except socket.gaierror as e: 17:24:05 raise NameResolutionError(self.host, self, e) from e 17:24:05 except SocketTimeout as e: 17:24:05 raise ConnectTimeoutError( 17:24:05 self, 17:24:05 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:05 ) from e 17:24:05 17:24:05 except OSError as e: 17:24:05 > raise NewConnectionError( 17:24:05 self, f"Failed to establish a new connection: {e}" 17:24:05 ) from e 17:24:05 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 
17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 > resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:05 retries = retries.increment( 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 17:24:05 response = None 17:24:05 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:05 _pool = 17:24:05 _stacktrace = 17:24:05 17:24:05 def increment( 17:24:05 self, 17:24:05 method: str | None = None, 17:24:05 url: str | None = None, 17:24:05 response: BaseHTTPResponse | None = None, 17:24:05 error: Exception | None = None, 17:24:05 _pool: ConnectionPool | None = None, 17:24:05 _stacktrace: TracebackType | None = None, 17:24:05 ) -> Self: 17:24:05 """Return a new Retry object with incremented retry counters. 17:24:05 17:24:05 :param response: A response object, or None, if the server did not 17:24:05 return a response. 17:24:05 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:05 :param Exception error: An error encountered during the request, or 17:24:05 None if the response was received successfully. 17:24:05 17:24:05 :return: A new ``Retry`` object. 17:24:05 """ 17:24:05 if self.total is False and error: 17:24:05 # Disabled, indicate to re-raise the error. 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 17:24:05 total = self.total 17:24:05 if total is not None: 17:24:05 total -= 1 17:24:05 17:24:05 connect = self.connect 17:24:05 read = self.read 17:24:05 redirect = self.redirect 17:24:05 status_count = self.status 17:24:05 other = self.other 17:24:05 cause = "unknown" 17:24:05 status = None 17:24:05 redirect_location = None 17:24:05 17:24:05 if error and self._is_connection_error(error): 17:24:05 # Connect retry? 17:24:05 if connect is False: 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif connect is not None: 17:24:05 connect -= 1 17:24:05 17:24:05 elif error and self._is_read_error(error): 17:24:05 # Read retry? 17:24:05 if read is False or method is None or not self._is_method_retryable(method): 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif read is not None: 17:24:05 read -= 1 17:24:05 17:24:05 elif error: 17:24:05 # Other retry? 17:24:05 if other is not None: 17:24:05 other -= 1 17:24:05 17:24:05 elif response and response.get_redirect_location(): 17:24:05 # Redirect retry? 
17:24:05 if redirect is not None: 17:24:05 redirect -= 1 17:24:05 cause = "too many redirects" 17:24:05 response_redirect_location = response.get_redirect_location() 17:24:05 if response_redirect_location: 17:24:05 redirect_location = response_redirect_location 17:24:05 status = response.status 17:24:05 17:24:05 else: 17:24:05 # Incrementing because of a server error like a 500 in 17:24:05 # status_forcelist and the given method is in the allowed_methods 17:24:05 cause = ResponseError.GENERIC_ERROR 17:24:05 if response and response.status: 17:24:05 if status_count is not None: 17:24:05 status_count -= 1 17:24:05 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:05 status = response.status 17:24:05 17:24:05 history = self.history + ( 17:24:05 RequestHistory(method, url, error, status, redirect_location), 17:24:05 ) 17:24:05 17:24:05 new_retry = self.new( 17:24:05 total=total, 17:24:05 connect=connect, 17:24:05 read=read, 17:24:05 redirect=redirect, 17:24:05 status=status_count, 17:24:05 other=other, 17:24:05 history=history, 17:24:05 ) 17:24:05 17:24:05 if new_retry.is_exhausted(): 17:24:05 reason = error or ResponseError(cause) 17:24:05 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:05 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:05 17:24:05 During handling of the above exception, another exception occurred: 17:24:05 17:24:05 self = 17:24:05 17:24:05 def test_05_rdm_portmapping_SRG1_PP7_TXRX(self): 17:24:05 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG1-PP7-TXRX") 17:24:05 17:24:05 transportpce_tests/1.2.1/test01_portmapping.py:81: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:05 response = get_request(target_url) 17:24:05 transportpce_tests/common/test_utils.py:116: in get_request 17:24:05 return requests.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:05 return session.request(method=method, url=url, **kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:05 resp = self.send(prep, **send_kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:05 r = adapter.send(request, **kwargs) 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 
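Retry.increment() above is what turns the refused connection into MaxRetryError: the tests run with Retry(total=0, connect=None, read=False, redirect=None, status=None), so the first connection error exhausts the budget. A standalone sketch of that behaviour; the None connection handle passed to NewConnectionError is illustrative only.

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError, NewConnectionError

retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
error = NewConnectionError(None, "Failed to establish a new connection: [Errno 111] Connection refused")
try:
    retry.increment(
        method="GET",
        url="/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX",
        error=error,
    )
except MaxRetryError as exc:
    print(exc.reason)   # the wrapped NewConnectionError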
17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 except (ProtocolError, OSError) as err: 17:24:05 raise ConnectionError(err, request=request) 17:24:05 17:24:05 except MaxRetryError as e: 17:24:05 if isinstance(e.reason, ConnectTimeoutError): 17:24:05 # TODO: Remove this in 3.0.0: see #2811 17:24:05 if not isinstance(e.reason, NewConnectionError): 17:24:05 raise ConnectTimeout(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, ResponseError): 17:24:05 raise RetryError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _ProxyError): 17:24:05 raise ProxyError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _SSLError): 17:24:05 # This branch is for urllib3 v1.22 and later. 
17:24:05 raise SSLError(e, request=request) 17:24:05 17:24:05 > raise ConnectionError(e, request=request) 17:24:05 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:05 ----------------------------- Captured stdout call ----------------------------- 17:24:05 execution of test_05_rdm_portmapping_SRG1_PP7_TXRX 17:24:05 _____ TransportPCEPortMappingTesting.test_06_rdm_portmapping_SRG3_PP1_TXRX _____ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 > sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:05 raise err 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:05 socket_options = [(6, 1, 1)] 17:24:05 17:24:05 def create_connection( 17:24:05 address: tuple[str, int], 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 source_address: tuple[str, int] | None = None, 17:24:05 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:05 ) -> socket.socket: 17:24:05 """Connect to *address* and return the socket object. 17:24:05 17:24:05 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:05 port)``) and return the socket object. Passing the optional 17:24:05 *timeout* parameter will set the timeout on the socket instance 17:24:05 before attempting to connect. If no *timeout* is supplied, the 17:24:05 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:05 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:05 for the socket to bind as a source address before making the connection. 17:24:05 An host of '' or port 0 tells the OS to use the default. 17:24:05 """ 17:24:05 17:24:05 host, port = address 17:24:05 if host.startswith("["): 17:24:05 host = host.strip("[]") 17:24:05 err = None 17:24:05 17:24:05 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:05 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:05 # The original create_connection function always returns all records. 
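As the adapter code above shows, requests re-raises urllib3's MaxRetryError (and the earlier ProtocolError) as requests.exceptions.ConnectionError, which is what the port-mapping tests ultimately see. A minimal reproduction of that path against the same endpoint, assuming the admin/admin credentials from the captured Authorization header:

import requests

URL = ("http://localhost:8182/rests/data/"
       "transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX")

try:
    response = requests.get(URL, auth=("admin", "admin"), timeout=(10, 10))
    print(response.status_code)
except requests.exceptions.ConnectionError as exc:
    # Covers both the ECONNRESET and the ECONNREFUSED failures seen in this run.
    print(f"controller unreachable: {exc}")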
17:24:05 family = allowed_gai_family() 17:24:05 17:24:05 try: 17:24:05 host.encode("idna") 17:24:05 except UnicodeError: 17:24:05 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:05 17:24:05 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:05 af, socktype, proto, canonname, sa = res 17:24:05 sock = None 17:24:05 try: 17:24:05 sock = socket.socket(af, socktype, proto) 17:24:05 17:24:05 # If provided, set socket level options before connecting. 17:24:05 _set_socket_options(sock, socket_options) 17:24:05 17:24:05 if timeout is not _DEFAULT_TIMEOUT: 17:24:05 sock.settimeout(timeout) 17:24:05 if source_address: 17:24:05 sock.bind(source_address) 17:24:05 > sock.connect(sa) 17:24:05 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX' 17:24:05 body = None 17:24:05 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:05 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 redirect = False, assert_same_host = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:05 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:05 decode_content = False, response_kw = {} 17:24:05 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX', query=None, fragment=None) 17:24:05 destination_scheme = None, conn = None, release_this_conn = True 17:24:05 http_tunnel_required = False, err = None, clean_exit = False 17:24:05 17:24:05 def urlopen( # type: ignore[override] 17:24:05 self, 17:24:05 method: str, 17:24:05 url: str, 17:24:05 body: _TYPE_BODY | None = None, 17:24:05 headers: typing.Mapping[str, str] | None = None, 17:24:05 retries: Retry | bool | int | None = None, 17:24:05 redirect: bool = True, 17:24:05 assert_same_host: bool = True, 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 pool_timeout: int | None = None, 17:24:05 release_conn: bool | None = None, 17:24:05 chunked: bool = False, 17:24:05 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:05 preload_content: bool = True, 17:24:05 decode_content: bool = True, 17:24:05 **response_kw: typing.Any, 17:24:05 ) -> BaseHTTPResponse: 17:24:05 """ 17:24:05 Get a connection from the pool and perform an HTTP request. This is the 17:24:05 lowest level call for making a request, so you'll need to specify all 17:24:05 the raw details. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 More commonly, it's appropriate to use a convenience method 17:24:05 such as :meth:`request`. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 `release_conn` will only behave as expected if 17:24:05 `preload_content=False` because we want to make 17:24:05 `preload_content=False` the default behaviour someday soon without 17:24:05 breaking backwards compatibility. 17:24:05 17:24:05 :param method: 17:24:05 HTTP request method (such as GET, POST, PUT, etc.) 
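Every failure from test_05 onwards is the same ECONNREFUSED against ('localhost', 8182) with socket option (6, 1, 1), i.e. TCP_NODELAY, as shown in the create_connection() frames. A self-contained probe that checks the same endpoint before the tests hammer it; this helper is an assumption, not part of the suite.

import socket

def restconf_is_up(host: str = "localhost", port: int = 8182, timeout: float = 10.0) -> bool:
    """Return True if something is listening on the RESTCONF port used by these tests."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            # The (6, 1, 1) option from the log: IPPROTO_TCP / TCP_NODELAY / enabled.
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            return True
    except OSError:
        # ECONNREFUSED (errno 111) and connect timeouts both land here.
        return False

print(restconf_is_up())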
17:24:05 17:24:05 :param url: 17:24:05 The URL to perform the request on. 17:24:05 17:24:05 :param body: 17:24:05 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:05 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:05 17:24:05 :param headers: 17:24:05 Dictionary of custom headers to send, such as User-Agent, 17:24:05 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:05 these headers completely replace any pool-specific headers. 17:24:05 17:24:05 :param retries: 17:24:05 Configure the number of retries to allow before raising a 17:24:05 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:05 17:24:05 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:05 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:05 over different types of retries. 17:24:05 Pass an integer number to retry connection errors that many times, 17:24:05 but no other types of errors. Pass zero to never retry. 17:24:05 17:24:05 If ``False``, then retries are disabled and any exception is raised 17:24:05 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:05 the redirect response will be returned. 17:24:05 17:24:05 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:05 17:24:05 :param redirect: 17:24:05 If True, automatically handle redirects (status codes 301, 302, 17:24:05 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:05 will disable redirect, too. 17:24:05 17:24:05 :param assert_same_host: 17:24:05 If ``True``, will make sure that the host of the pool requests is 17:24:05 consistent else will raise HostChangedError. When ``False``, you can 17:24:05 use the pool on an HTTP proxy and request foreign hosts. 17:24:05 17:24:05 :param timeout: 17:24:05 If specified, overrides the default timeout for this one 17:24:05 request. It may be a float (in seconds) or an instance of 17:24:05 :class:`urllib3.util.Timeout`. 17:24:05 17:24:05 :param pool_timeout: 17:24:05 If set and the pool is set to block=True, then this method will 17:24:05 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:05 connection is available within the time period. 17:24:05 17:24:05 :param bool preload_content: 17:24:05 If True, the response's body will be preloaded into memory. 17:24:05 17:24:05 :param bool decode_content: 17:24:05 If True, will attempt to decode the body based on the 17:24:05 'content-encoding' header. 17:24:05 17:24:05 :param release_conn: 17:24:05 If False, then the urlopen call will not release the connection 17:24:05 back into the pool once a response is received (but will release if 17:24:05 you read the entire contents of the response such as when 17:24:05 `preload_content=True`). This is useful if you're not preloading 17:24:05 the response's content immediately. You will need to call 17:24:05 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:05 back into the pool. If None, it takes the value of ``preload_content`` 17:24:05 which defaults to ``True``. 17:24:05 17:24:05 :param bool chunked: 17:24:05 If True, urllib3 will send the body using chunked transfer 17:24:05 encoding. Otherwise, urllib3 will send the body using the standard 17:24:05 content-length form. Defaults to False. 17:24:05 17:24:05 :param int body_pos: 17:24:05 Position to seek to in file-like body in the event of a retry or 17:24:05 redirect. 
Typically this won't need to be set because urllib3 will 17:24:05 auto-populate the value when needed. 17:24:05 """ 17:24:05 parsed_url = parse_url(url) 17:24:05 destination_scheme = parsed_url.scheme 17:24:05 17:24:05 if headers is None: 17:24:05 headers = self.headers 17:24:05 17:24:05 if not isinstance(retries, Retry): 17:24:05 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:05 17:24:05 if release_conn is None: 17:24:05 release_conn = preload_content 17:24:05 17:24:05 # Check host 17:24:05 if assert_same_host and not self.is_same_host(url): 17:24:05 raise HostChangedError(self, url, retries) 17:24:05 17:24:05 # Ensure that the URL we're connecting to is properly encoded 17:24:05 if url.startswith("/"): 17:24:05 url = to_str(_encode_target(url)) 17:24:05 else: 17:24:05 url = to_str(parsed_url.url) 17:24:05 17:24:05 conn = None 17:24:05 17:24:05 # Track whether `conn` needs to be released before 17:24:05 # returning/raising/recursing. Update this variable if necessary, and 17:24:05 # leave `release_conn` constant throughout the function. That way, if 17:24:05 # the function recurses, the original value of `release_conn` will be 17:24:05 # passed down into the recursive call, and its value will be respected. 17:24:05 # 17:24:05 # See issue #651 [1] for details. 17:24:05 # 17:24:05 # [1] 17:24:05 release_this_conn = release_conn 17:24:05 17:24:05 http_tunnel_required = connection_requires_http_tunnel( 17:24:05 self.proxy, self.proxy_config, destination_scheme 17:24:05 ) 17:24:05 17:24:05 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:05 # have to copy the headers dict so we can safely change it without those 17:24:05 # changes being reflected in anyone else's copy. 17:24:05 if not http_tunnel_required: 17:24:05 headers = headers.copy() # type: ignore[attr-defined] 17:24:05 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:05 17:24:05 # Must keep the exception bound to a separate variable or else Python 3 17:24:05 # complains about UnboundLocalError. 17:24:05 err = None 17:24:05 17:24:05 # Keep track of whether we cleanly exited the except block. This 17:24:05 # ensures we do proper cleanup in finally. 17:24:05 clean_exit = False 17:24:05 17:24:05 # Rewind body position, if needed. Record current position 17:24:05 # for future rewinds in the event of a redirect/retry. 17:24:05 body_pos = set_file_position(body, body_pos) 17:24:05 17:24:05 try: 17:24:05 # Request a connection from the queue. 17:24:05 timeout_obj = self._get_timeout(timeout) 17:24:05 conn = self._get_conn(timeout=pool_timeout) 17:24:05 17:24:05 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:05 17:24:05 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:05 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:05 try: 17:24:05 self._prepare_proxy(conn) 17:24:05 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:05 self._raise_timeout( 17:24:05 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:05 ) 17:24:05 raise 17:24:05 17:24:05 # If we're going to release the connection in ``finally:``, then 17:24:05 # the response doesn't need to know about the connection. Otherwise 17:24:05 # it will also try to release it and we'll have a double-release 17:24:05 # mess. 
17:24:05 response_conn = conn if not release_conn else None 17:24:05 17:24:05 # Make the request on the HTTPConnection object 17:24:05 > response = self._make_request( 17:24:05 conn, 17:24:05 method, 17:24:05 url, 17:24:05 timeout=timeout_obj, 17:24:05 body=body, 17:24:05 headers=headers, 17:24:05 chunked=chunked, 17:24:05 retries=retries, 17:24:05 response_conn=response_conn, 17:24:05 preload_content=preload_content, 17:24:05 decode_content=decode_content, 17:24:05 **response_kw, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:05 conn.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:05 self.endheaders() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:05 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:05 self.send(msg) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:05 self.connect() 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:05 self.sock = self._new_conn() 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 except socket.gaierror as e: 17:24:05 raise NameResolutionError(self.host, self, e) from e 17:24:05 except SocketTimeout as e: 17:24:05 raise ConnectTimeoutError( 17:24:05 self, 17:24:05 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:05 ) from e 17:24:05 17:24:05 except OSError as e: 17:24:05 > raise NewConnectionError( 17:24:05 self, f"Failed to establish a new connection: {e}" 17:24:05 ) from e 17:24:05 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 
17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 > resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:05 retries = retries.increment( 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX' 17:24:05 response = None 17:24:05 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:05 _pool = 17:24:05 _stacktrace = 17:24:05 17:24:05 def increment( 17:24:05 self, 17:24:05 method: str | None = None, 17:24:05 url: str | None = None, 17:24:05 response: BaseHTTPResponse | None = None, 17:24:05 error: Exception | None = None, 17:24:05 _pool: ConnectionPool | None = None, 17:24:05 _stacktrace: TracebackType | None = None, 17:24:05 ) -> Self: 17:24:05 """Return a new Retry object with incremented retry counters. 17:24:05 17:24:05 :param response: A response object, or None, if the server did not 17:24:05 return a response. 17:24:05 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:05 :param Exception error: An error encountered during the request, or 17:24:05 None if the response was received successfully. 
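The adapter body above also shows the one-line rule that decides between chunked transfer encoding and a Content-Length body. A tiny sketch of that predicate, separated out for clarity; the helper name is made up.

def uses_chunked_encoding(body, headers) -> bool:
    # Mirrors: chunked = not (request.body is None or "Content-Length" in request.headers)
    return not (body is None or "Content-Length" in headers)

print(uses_chunked_encoding(None, {}))                              # False: no body to send
print(uses_chunked_encoding(b"payload", {"Content-Length": "7"}))   # False: length is known
print(uses_chunked_encoding(iter([b"a", b"b"]), {}))                # True: streaming body, unknown length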
17:24:05 17:24:05 :return: A new ``Retry`` object. 17:24:05 """ 17:24:05 if self.total is False and error: 17:24:05 # Disabled, indicate to re-raise the error. 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 17:24:05 total = self.total 17:24:05 if total is not None: 17:24:05 total -= 1 17:24:05 17:24:05 connect = self.connect 17:24:05 read = self.read 17:24:05 redirect = self.redirect 17:24:05 status_count = self.status 17:24:05 other = self.other 17:24:05 cause = "unknown" 17:24:05 status = None 17:24:05 redirect_location = None 17:24:05 17:24:05 if error and self._is_connection_error(error): 17:24:05 # Connect retry? 17:24:05 if connect is False: 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif connect is not None: 17:24:05 connect -= 1 17:24:05 17:24:05 elif error and self._is_read_error(error): 17:24:05 # Read retry? 17:24:05 if read is False or method is None or not self._is_method_retryable(method): 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif read is not None: 17:24:05 read -= 1 17:24:05 17:24:05 elif error: 17:24:05 # Other retry? 17:24:05 if other is not None: 17:24:05 other -= 1 17:24:05 17:24:05 elif response and response.get_redirect_location(): 17:24:05 # Redirect retry? 17:24:05 if redirect is not None: 17:24:05 redirect -= 1 17:24:05 cause = "too many redirects" 17:24:05 response_redirect_location = response.get_redirect_location() 17:24:05 if response_redirect_location: 17:24:05 redirect_location = response_redirect_location 17:24:05 status = response.status 17:24:05 17:24:05 else: 17:24:05 # Incrementing because of a server error like a 500 in 17:24:05 # status_forcelist and the given method is in the allowed_methods 17:24:05 cause = ResponseError.GENERIC_ERROR 17:24:05 if response and response.status: 17:24:05 if status_count is not None: 17:24:05 status_count -= 1 17:24:05 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:05 status = response.status 17:24:05 17:24:05 history = self.history + ( 17:24:05 RequestHistory(method, url, error, status, redirect_location), 17:24:05 ) 17:24:05 17:24:05 new_retry = self.new( 17:24:05 total=total, 17:24:05 connect=connect, 17:24:05 read=read, 17:24:05 redirect=redirect, 17:24:05 status=status_count, 17:24:05 other=other, 17:24:05 history=history, 17:24:05 ) 17:24:05 17:24:05 if new_retry.is_exhausted(): 17:24:05 reason = error or ResponseError(cause) 17:24:05 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:05 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:05 17:24:05 During handling of the above exception, another exception occurred: 17:24:05 17:24:05 self = 17:24:05 17:24:05 def test_06_rdm_portmapping_SRG3_PP1_TXRX(self): 17:24:05 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG3-PP1-TXRX") 17:24:05 17:24:05 transportpce_tests/1.2.1/test01_portmapping.py:90: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:05 response = get_request(target_url) 17:24:05 
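The frames above reduce test_06 to a single HTTP call: test_utils.get_request issues a GET to the portmapping resource on localhost:8182, nothing is listening, and requests surfaces the refusal as a ConnectionError before any assertion runs. A minimal reproduction of that call, assuming the URL, port and the admin/admin credentials implied by the Basic auth header in the trace (this sketch is illustrative, not the test suite's own code):

import requests

# Reproduction sketch; URL, port and credentials are taken from the trace above
# and may differ in other deployments.
URL = ("http://localhost:8182/rests/data/transportpce-portmapping:network/"
       "nodes=ROADMA01/mapping=SRG3-PP1-TXRX")
try:
    response = requests.get(URL, auth=("admin", "admin"),
                            headers={"Accept": "application/json"},
                            timeout=(10, 10))
    print(response.status_code)
except requests.exceptions.ConnectionError as exc:
    # With the controller down this is exactly what the test hits:
    # [Errno 111] Connection refused wrapped in a requests ConnectionError.
    print(f"RESTCONF endpoint unreachable: {exc}")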
transportpce_tests/common/test_utils.py:116: in get_request 17:24:05 return requests.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:05 return session.request(method=method, url=url, **kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:05 resp = self.send(prep, **send_kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:05 r = adapter.send(request, **kwargs) 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 
17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 except (ProtocolError, OSError) as err: 17:24:05 raise ConnectionError(err, request=request) 17:24:05 17:24:05 except MaxRetryError as e: 17:24:05 if isinstance(e.reason, ConnectTimeoutError): 17:24:05 # TODO: Remove this in 3.0.0: see #2811 17:24:05 if not isinstance(e.reason, NewConnectionError): 17:24:05 raise ConnectTimeout(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, ResponseError): 17:24:05 raise RetryError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _ProxyError): 17:24:05 raise ProxyError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _SSLError): 17:24:05 # This branch is for urllib3 v1.22 and later. 17:24:05 raise SSLError(e, request=request) 17:24:05 17:24:05 > raise ConnectionError(e, request=request) 17:24:05 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:05 ----------------------------- Captured stdout call ----------------------------- 17:24:05 execution of test_06_rdm_portmapping_SRG3_PP1_TXRX 17:24:05 ________ TransportPCEPortMappingTesting.test_07_xpdr_device_connection _________ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 > sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:05 raise err 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:05 socket_options = [(6, 1, 1)] 17:24:05 17:24:05 def create_connection( 17:24:05 address: tuple[str, int], 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 source_address: tuple[str, int] | None = None, 17:24:05 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:05 ) -> socket.socket: 17:24:05 """Connect to *address* and return the socket object. 17:24:05 17:24:05 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:05 port)``) and return the socket object. Passing the optional 17:24:05 *timeout* parameter will set the timeout on the socket instance 17:24:05 before attempting to connect. 
If no *timeout* is supplied, the 17:24:05 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:05 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:05 for the socket to bind as a source address before making the connection. 17:24:05 An host of '' or port 0 tells the OS to use the default. 17:24:05 """ 17:24:05 17:24:05 host, port = address 17:24:05 if host.startswith("["): 17:24:05 host = host.strip("[]") 17:24:05 err = None 17:24:05 17:24:05 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:05 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:05 # The original create_connection function always returns all records. 17:24:05 family = allowed_gai_family() 17:24:05 17:24:05 try: 17:24:05 host.encode("idna") 17:24:05 except UnicodeError: 17:24:05 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:05 17:24:05 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:05 af, socktype, proto, canonname, sa = res 17:24:05 sock = None 17:24:05 try: 17:24:05 sock = socket.socket(af, socktype, proto) 17:24:05 17:24:05 # If provided, set socket level options before connecting. 17:24:05 _set_socket_options(sock, socket_options) 17:24:05 17:24:05 if timeout is not _DEFAULT_TIMEOUT: 17:24:05 sock.settimeout(timeout) 17:24:05 if source_address: 17:24:05 sock.bind(source_address) 17:24:05 > sock.connect(sa) 17:24:05 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 method = 'PUT' 17:24:05 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 17:24:05 body = '{"node": [{"node-id": "XPDRA01", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 17:24:05 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:05 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 redirect = False, assert_same_host = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:05 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:05 decode_content = False, response_kw = {} 17:24:05 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None) 17:24:05 destination_scheme = None, conn = None, release_this_conn = True 17:24:05 http_tunnel_required = False, err = None, clean_exit = False 17:24:05 17:24:05 def urlopen( # type: ignore[override] 17:24:05 self, 17:24:05 method: str, 17:24:05 url: str, 17:24:05 body: _TYPE_BODY | None = None, 17:24:05 headers: typing.Mapping[str, str] | None = None, 17:24:05 retries: Retry | bool | int | None = None, 17:24:05 redirect: bool = True, 17:24:05 assert_same_host: bool = True, 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 pool_timeout: int | None = None, 17:24:05 
release_conn: bool | None = None, 17:24:05 chunked: bool = False, 17:24:05 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:05 preload_content: bool = True, 17:24:05 decode_content: bool = True, 17:24:05 **response_kw: typing.Any, 17:24:05 ) -> BaseHTTPResponse: 17:24:05 """ 17:24:05 Get a connection from the pool and perform an HTTP request. This is the 17:24:05 lowest level call for making a request, so you'll need to specify all 17:24:05 the raw details. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 More commonly, it's appropriate to use a convenience method 17:24:05 such as :meth:`request`. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 `release_conn` will only behave as expected if 17:24:05 `preload_content=False` because we want to make 17:24:05 `preload_content=False` the default behaviour someday soon without 17:24:05 breaking backwards compatibility. 17:24:05 17:24:05 :param method: 17:24:05 HTTP request method (such as GET, POST, PUT, etc.) 17:24:05 17:24:05 :param url: 17:24:05 The URL to perform the request on. 17:24:05 17:24:05 :param body: 17:24:05 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:05 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:05 17:24:05 :param headers: 17:24:05 Dictionary of custom headers to send, such as User-Agent, 17:24:05 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:05 these headers completely replace any pool-specific headers. 17:24:05 17:24:05 :param retries: 17:24:05 Configure the number of retries to allow before raising a 17:24:05 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:05 17:24:05 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:05 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:05 over different types of retries. 17:24:05 Pass an integer number to retry connection errors that many times, 17:24:05 but no other types of errors. Pass zero to never retry. 17:24:05 17:24:05 If ``False``, then retries are disabled and any exception is raised 17:24:05 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:05 the redirect response will be returned. 17:24:05 17:24:05 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:05 17:24:05 :param redirect: 17:24:05 If True, automatically handle redirects (status codes 301, 302, 17:24:05 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:05 will disable redirect, too. 17:24:05 17:24:05 :param assert_same_host: 17:24:05 If ``True``, will make sure that the host of the pool requests is 17:24:05 consistent else will raise HostChangedError. When ``False``, you can 17:24:05 use the pool on an HTTP proxy and request foreign hosts. 17:24:05 17:24:05 :param timeout: 17:24:05 If specified, overrides the default timeout for this one 17:24:05 request. It may be a float (in seconds) or an instance of 17:24:05 :class:`urllib3.util.Timeout`. 17:24:05 17:24:05 :param pool_timeout: 17:24:05 If set and the pool is set to block=True, then this method will 17:24:05 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:05 connection is available within the time period. 17:24:05 17:24:05 :param bool preload_content: 17:24:05 If True, the response's body will be preloaded into memory. 17:24:05 17:24:05 :param bool decode_content: 17:24:05 If True, will attempt to decode the body based on the 17:24:05 'content-encoding' header. 
17:24:05 17:24:05 :param release_conn: 17:24:05 If False, then the urlopen call will not release the connection 17:24:05 back into the pool once a response is received (but will release if 17:24:05 you read the entire contents of the response such as when 17:24:05 `preload_content=True`). This is useful if you're not preloading 17:24:05 the response's content immediately. You will need to call 17:24:05 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:05 back into the pool. If None, it takes the value of ``preload_content`` 17:24:05 which defaults to ``True``. 17:24:05 17:24:05 :param bool chunked: 17:24:05 If True, urllib3 will send the body using chunked transfer 17:24:05 encoding. Otherwise, urllib3 will send the body using the standard 17:24:05 content-length form. Defaults to False. 17:24:05 17:24:05 :param int body_pos: 17:24:05 Position to seek to in file-like body in the event of a retry or 17:24:05 redirect. Typically this won't need to be set because urllib3 will 17:24:05 auto-populate the value when needed. 17:24:05 """ 17:24:05 parsed_url = parse_url(url) 17:24:05 destination_scheme = parsed_url.scheme 17:24:05 17:24:05 if headers is None: 17:24:05 headers = self.headers 17:24:05 17:24:05 if not isinstance(retries, Retry): 17:24:05 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:05 17:24:05 if release_conn is None: 17:24:05 release_conn = preload_content 17:24:05 17:24:05 # Check host 17:24:05 if assert_same_host and not self.is_same_host(url): 17:24:05 raise HostChangedError(self, url, retries) 17:24:05 17:24:05 # Ensure that the URL we're connecting to is properly encoded 17:24:05 if url.startswith("/"): 17:24:05 url = to_str(_encode_target(url)) 17:24:05 else: 17:24:05 url = to_str(parsed_url.url) 17:24:05 17:24:05 conn = None 17:24:05 17:24:05 # Track whether `conn` needs to be released before 17:24:05 # returning/raising/recursing. Update this variable if necessary, and 17:24:05 # leave `release_conn` constant throughout the function. That way, if 17:24:05 # the function recurses, the original value of `release_conn` will be 17:24:05 # passed down into the recursive call, and its value will be respected. 17:24:05 # 17:24:05 # See issue #651 [1] for details. 17:24:05 # 17:24:05 # [1] 17:24:05 release_this_conn = release_conn 17:24:05 17:24:05 http_tunnel_required = connection_requires_http_tunnel( 17:24:05 self.proxy, self.proxy_config, destination_scheme 17:24:05 ) 17:24:05 17:24:05 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:05 # have to copy the headers dict so we can safely change it without those 17:24:05 # changes being reflected in anyone else's copy. 17:24:05 if not http_tunnel_required: 17:24:05 headers = headers.copy() # type: ignore[attr-defined] 17:24:05 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:05 17:24:05 # Must keep the exception bound to a separate variable or else Python 3 17:24:05 # complains about UnboundLocalError. 17:24:05 err = None 17:24:05 17:24:05 # Keep track of whether we cleanly exited the except block. This 17:24:05 # ensures we do proper cleanup in finally. 17:24:05 clean_exit = False 17:24:05 17:24:05 # Rewind body position, if needed. Record current position 17:24:05 # for future rewinds in the event of a redirect/retry. 17:24:05 body_pos = set_file_position(body, body_pos) 17:24:05 17:24:05 try: 17:24:05 # Request a connection from the queue. 
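The urlopen docstring above spells out the retries semantics, and the locals in the trace show the pool receiving Retry(total=0, connect=None, read=False, redirect=None, status=None) from the adapter's max_retries. A sketch of how a requests Session ends up with that policy (assumed for illustration, not taken from test_utils):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# With total=0 the very first connection failure already exhausts the policy,
# which is why a single refused connect is enough to raise MaxRetryError below.
session = requests.Session()
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
session.mount("http://", HTTPAdapter(max_retries=retry))

This matches requests' default of max_retries=0; a Retry with a non-zero total would make the client retry refused connections instead of failing on the first attempt.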
17:24:05 timeout_obj = self._get_timeout(timeout) 17:24:05 conn = self._get_conn(timeout=pool_timeout) 17:24:05 17:24:05 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:05 17:24:05 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:05 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:05 try: 17:24:05 self._prepare_proxy(conn) 17:24:05 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:05 self._raise_timeout( 17:24:05 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:05 ) 17:24:05 raise 17:24:05 17:24:05 # If we're going to release the connection in ``finally:``, then 17:24:05 # the response doesn't need to know about the connection. Otherwise 17:24:05 # it will also try to release it and we'll have a double-release 17:24:05 # mess. 17:24:05 response_conn = conn if not release_conn else None 17:24:05 17:24:05 # Make the request on the HTTPConnection object 17:24:05 > response = self._make_request( 17:24:05 conn, 17:24:05 method, 17:24:05 url, 17:24:05 timeout=timeout_obj, 17:24:05 body=body, 17:24:05 headers=headers, 17:24:05 chunked=chunked, 17:24:05 retries=retries, 17:24:05 response_conn=response_conn, 17:24:05 preload_content=preload_content, 17:24:05 decode_content=decode_content, 17:24:05 **response_kw, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:05 conn.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:05 self.endheaders() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:05 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:05 self.send(msg) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:05 self.connect() 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:05 self.sock = self._new_conn() 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 except socket.gaierror as e: 17:24:05 raise NameResolutionError(self.host, self, e) from e 17:24:05 except SocketTimeout as e: 17:24:05 raise ConnectTimeoutError( 17:24:05 self, 17:24:05 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:05 ) from e 17:24:05 17:24:05 except OSError as e: 17:24:05 > raise NewConnectionError( 17:24:05 self, f"Failed to establish a new connection: {e}" 17:24:05 ) from e 17:24:05 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 
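The send() source being replayed here also shows the timeout normalization: the (connect, read) tuple the tests appear to pass becomes the Timeout(connect=10, read=10, total=None) object in the locals. A hedged illustration of that call shape and of the exceptions the adapter maps the failure into (the URL is taken from the surrounding frames; nothing below comes from test_utils itself):

import requests

url = ("http://localhost:8182/rests/data/network-topology:network-topology/"
       "topology=topology-netconf/node=XPDRA01")
try:
    # (connect, read) tuple -> Timeout(connect=10, read=10, total=None) internally.
    requests.get(url, auth=("admin", "admin"), timeout=(10, 10))
except requests.exceptions.ConnectTimeout:
    # Raised only when the connect phase times out; a refused port fails faster.
    print("connect phase exceeded 10 s")
except requests.exceptions.ConnectionError as exc:
    # The branch taken in this log: the refusal arrives immediately.
    print(f"connection refused: {exc}")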
17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 > resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:05 retries = retries.increment( 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 method = 'PUT' 17:24:05 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 17:24:05 response = None 17:24:05 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:05 _pool = 17:24:05 _stacktrace = 17:24:05 17:24:05 def increment( 17:24:05 self, 17:24:05 method: str | None = None, 17:24:05 url: str | None = None, 17:24:05 response: BaseHTTPResponse | None = None, 17:24:05 error: Exception | None = None, 17:24:05 _pool: ConnectionPool | None = None, 17:24:05 _stacktrace: TracebackType | None = None, 17:24:05 ) -> Self: 17:24:05 """Return a new Retry object with incremented retry counters. 17:24:05 17:24:05 :param response: A response object, or None, if the server did not 17:24:05 return a response. 17:24:05 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:05 :param Exception error: An error encountered during the request, or 17:24:05 None if the response was received successfully. 17:24:05 17:24:05 :return: A new ``Retry`` object. 17:24:05 """ 17:24:05 if self.total is False and error: 17:24:05 # Disabled, indicate to re-raise the error. 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 17:24:05 total = self.total 17:24:05 if total is not None: 17:24:05 total -= 1 17:24:05 17:24:05 connect = self.connect 17:24:05 read = self.read 17:24:05 redirect = self.redirect 17:24:05 status_count = self.status 17:24:05 other = self.other 17:24:05 cause = "unknown" 17:24:05 status = None 17:24:05 redirect_location = None 17:24:05 17:24:05 if error and self._is_connection_error(error): 17:24:05 # Connect retry? 17:24:05 if connect is False: 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif connect is not None: 17:24:05 connect -= 1 17:24:05 17:24:05 elif error and self._is_read_error(error): 17:24:05 # Read retry? 17:24:05 if read is False or method is None or not self._is_method_retryable(method): 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif read is not None: 17:24:05 read -= 1 17:24:05 17:24:05 elif error: 17:24:05 # Other retry? 17:24:05 if other is not None: 17:24:05 other -= 1 17:24:05 17:24:05 elif response and response.get_redirect_location(): 17:24:05 # Redirect retry? 
17:24:05 if redirect is not None: 17:24:05 redirect -= 1 17:24:05 cause = "too many redirects" 17:24:05 response_redirect_location = response.get_redirect_location() 17:24:05 if response_redirect_location: 17:24:05 redirect_location = response_redirect_location 17:24:05 status = response.status 17:24:05 17:24:05 else: 17:24:05 # Incrementing because of a server error like a 500 in 17:24:05 # status_forcelist and the given method is in the allowed_methods 17:24:05 cause = ResponseError.GENERIC_ERROR 17:24:05 if response and response.status: 17:24:05 if status_count is not None: 17:24:05 status_count -= 1 17:24:05 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:05 status = response.status 17:24:05 17:24:05 history = self.history + ( 17:24:05 RequestHistory(method, url, error, status, redirect_location), 17:24:05 ) 17:24:05 17:24:05 new_retry = self.new( 17:24:05 total=total, 17:24:05 connect=connect, 17:24:05 read=read, 17:24:05 redirect=redirect, 17:24:05 status=status_count, 17:24:05 other=other, 17:24:05 history=history, 17:24:05 ) 17:24:05 17:24:05 if new_retry.is_exhausted(): 17:24:05 reason = error or ResponseError(cause) 17:24:05 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:05 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:05 17:24:05 During handling of the above exception, another exception occurred: 17:24:05 17:24:05 self = 17:24:05 17:24:05 def test_07_xpdr_device_connection(self): 17:24:05 > response = test_utils.mount_device("XPDRA01", ('xpdra', self.NODE_VERSION)) 17:24:05 17:24:05 transportpce_tests/1.2.1/test01_portmapping.py:99: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 transportpce_tests/common/test_utils.py:343: in mount_device 17:24:05 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 17:24:05 transportpce_tests/common/test_utils.py:124: in put_request 17:24:05 return requests.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:05 return session.request(method=method, url=url, **kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:05 resp = self.send(prep, **send_kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:05 r = adapter.send(request, **kwargs) 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 
17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 except (ProtocolError, OSError) as err: 17:24:05 raise ConnectionError(err, request=request) 17:24:05 17:24:05 except MaxRetryError as e: 17:24:05 if isinstance(e.reason, ConnectTimeoutError): 17:24:05 # TODO: Remove this in 3.0.0: see #2811 17:24:05 if not isinstance(e.reason, NewConnectionError): 17:24:05 raise ConnectTimeout(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, ResponseError): 17:24:05 raise RetryError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _ProxyError): 17:24:05 raise ProxyError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _SSLError): 17:24:05 # This branch is for urllib3 v1.22 and later. 
17:24:05 raise SSLError(e, request=request) 17:24:05 17:24:05 > raise ConnectionError(e, request=request) 17:24:05 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:05 ----------------------------- Captured stdout call ----------------------------- 17:24:05 execution of test_07_xpdr_device_connection 17:24:05 _________ TransportPCEPortMappingTesting.test_08_xpdr_device_connected _________ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 > sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:05 raise err 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:05 socket_options = [(6, 1, 1)] 17:24:05 17:24:05 def create_connection( 17:24:05 address: tuple[str, int], 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 source_address: tuple[str, int] | None = None, 17:24:05 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:05 ) -> socket.socket: 17:24:05 """Connect to *address* and return the socket object. 17:24:05 17:24:05 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:05 port)``) and return the socket object. Passing the optional 17:24:05 *timeout* parameter will set the timeout on the socket instance 17:24:05 before attempting to connect. If no *timeout* is supplied, the 17:24:05 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:05 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:05 for the socket to bind as a source address before making the connection. 17:24:05 An host of '' or port 0 tells the OS to use the default. 17:24:05 """ 17:24:05 17:24:05 host, port = address 17:24:05 if host.startswith("["): 17:24:05 host = host.strip("[]") 17:24:05 err = None 17:24:05 17:24:05 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:05 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:05 # The original create_connection function always returns all records. 
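test_07 fails on the same refusal, this time on the PUT that mount_device issues against the topology-netconf node for XPDRA01. A sketch of that request using only the fields visible in the trace (the JSON body is truncated there, so everything beyond node-id and host is omitted rather than guessed):

import json
import requests

url = ("http://localhost:8182/rests/data/network-topology:network-topology/"
       "topology=topology-netconf/node=XPDRA01")
# Partial payload only: the remaining netconf-node-topology settings are elided in the log.
body = {"node": [{"node-id": "XPDRA01",
                  "netconf-node-topology:netconf-node": {
                      "netconf-node-topology:host": "127.0.0.1"}}]}
try:
    resp = requests.put(url, data=json.dumps(body), auth=("admin", "admin"),
                        headers={"Content-Type": "application/json",
                                 "Accept": "application/json"},
                        timeout=(10, 10))
    print(resp.status_code)
except requests.exceptions.ConnectionError as exc:
    # Same outcome as in the log while nothing listens on 8182.
    print(f"mount request failed: {exc}")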
17:24:05 family = allowed_gai_family() 17:24:05 17:24:05 try: 17:24:05 host.encode("idna") 17:24:05 except UnicodeError: 17:24:05 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:05 17:24:05 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:05 af, socktype, proto, canonname, sa = res 17:24:05 sock = None 17:24:05 try: 17:24:05 sock = socket.socket(af, socktype, proto) 17:24:05 17:24:05 # If provided, set socket level options before connecting. 17:24:05 _set_socket_options(sock, socket_options) 17:24:05 17:24:05 if timeout is not _DEFAULT_TIMEOUT: 17:24:05 sock.settimeout(timeout) 17:24:05 if source_address: 17:24:05 sock.bind(source_address) 17:24:05 > sock.connect(sa) 17:24:05 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 17:24:05 body = None 17:24:05 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:05 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 redirect = False, assert_same_host = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:05 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:05 decode_content = False, response_kw = {} 17:24:05 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None) 17:24:05 destination_scheme = None, conn = None, release_this_conn = True 17:24:05 http_tunnel_required = False, err = None, clean_exit = False 17:24:05 17:24:05 def urlopen( # type: ignore[override] 17:24:05 self, 17:24:05 method: str, 17:24:05 url: str, 17:24:05 body: _TYPE_BODY | None = None, 17:24:05 headers: typing.Mapping[str, str] | None = None, 17:24:05 retries: Retry | bool | int | None = None, 17:24:05 redirect: bool = True, 17:24:05 assert_same_host: bool = True, 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 pool_timeout: int | None = None, 17:24:05 release_conn: bool | None = None, 17:24:05 chunked: bool = False, 17:24:05 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:05 preload_content: bool = True, 17:24:05 decode_content: bool = True, 17:24:05 **response_kw: typing.Any, 17:24:05 ) -> BaseHTTPResponse: 17:24:05 """ 17:24:05 Get a connection from the pool and perform an HTTP request. This is the 17:24:05 lowest level call for making a request, so you'll need to specify all 17:24:05 the raw details. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 More commonly, it's appropriate to use a convenience method 17:24:05 such as :meth:`request`. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 `release_conn` will only behave as expected if 17:24:05 `preload_content=False` because we want to make 17:24:05 `preload_content=False` the default behaviour someday soon without 17:24:05 breaking backwards compatibility. 17:24:05 17:24:05 :param method: 17:24:05 HTTP request method (such as GET, POST, PUT, etc.) 
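The create_connection frames above strip the failure down to a refused TCP connect on ('localhost', 8182). A plain-socket probe (an illustration, not part of the test suite) reproduces the same errno without any HTTP layer:

import socket

try:
    # Mirrors what urllib3's create_connection does before any HTTP is spoken.
    with socket.create_connection(("localhost", 8182), timeout=10):
        print("RESTCONF port is open")
except ConnectionRefusedError as exc:
    # errno 111 on Linux, matching the [Errno 111] in the traces.
    print(f"nothing listening on 8182: {exc}")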
17:24:05 17:24:05 :param url: 17:24:05 The URL to perform the request on. 17:24:05 17:24:05 :param body: 17:24:05 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:05 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:05 17:24:05 :param headers: 17:24:05 Dictionary of custom headers to send, such as User-Agent, 17:24:05 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:05 these headers completely replace any pool-specific headers. 17:24:05 17:24:05 :param retries: 17:24:05 Configure the number of retries to allow before raising a 17:24:05 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:05 17:24:05 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:05 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:05 over different types of retries. 17:24:05 Pass an integer number to retry connection errors that many times, 17:24:05 but no other types of errors. Pass zero to never retry. 17:24:05 17:24:05 If ``False``, then retries are disabled and any exception is raised 17:24:05 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:05 the redirect response will be returned. 17:24:05 17:24:05 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:05 17:24:05 :param redirect: 17:24:05 If True, automatically handle redirects (status codes 301, 302, 17:24:05 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:05 will disable redirect, too. 17:24:05 17:24:05 :param assert_same_host: 17:24:05 If ``True``, will make sure that the host of the pool requests is 17:24:05 consistent else will raise HostChangedError. When ``False``, you can 17:24:05 use the pool on an HTTP proxy and request foreign hosts. 17:24:05 17:24:05 :param timeout: 17:24:05 If specified, overrides the default timeout for this one 17:24:05 request. It may be a float (in seconds) or an instance of 17:24:05 :class:`urllib3.util.Timeout`. 17:24:05 17:24:05 :param pool_timeout: 17:24:05 If set and the pool is set to block=True, then this method will 17:24:05 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:05 connection is available within the time period. 17:24:05 17:24:05 :param bool preload_content: 17:24:05 If True, the response's body will be preloaded into memory. 17:24:05 17:24:05 :param bool decode_content: 17:24:05 If True, will attempt to decode the body based on the 17:24:05 'content-encoding' header. 17:24:05 17:24:05 :param release_conn: 17:24:05 If False, then the urlopen call will not release the connection 17:24:05 back into the pool once a response is received (but will release if 17:24:05 you read the entire contents of the response such as when 17:24:05 `preload_content=True`). This is useful if you're not preloading 17:24:05 the response's content immediately. You will need to call 17:24:05 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:05 back into the pool. If None, it takes the value of ``preload_content`` 17:24:05 which defaults to ``True``. 17:24:05 17:24:05 :param bool chunked: 17:24:05 If True, urllib3 will send the body using chunked transfer 17:24:05 encoding. Otherwise, urllib3 will send the body using the standard 17:24:05 content-length form. Defaults to False. 17:24:05 17:24:05 :param int body_pos: 17:24:05 Position to seek to in file-like body in the event of a retry or 17:24:05 redirect. 
Typically this won't need to be set because urllib3 will 17:24:05 auto-populate the value when needed. 17:24:05 """ 17:24:05 parsed_url = parse_url(url) 17:24:05 destination_scheme = parsed_url.scheme 17:24:05 17:24:05 if headers is None: 17:24:05 headers = self.headers 17:24:05 17:24:05 if not isinstance(retries, Retry): 17:24:05 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:05 17:24:05 if release_conn is None: 17:24:05 release_conn = preload_content 17:24:05 17:24:05 # Check host 17:24:05 if assert_same_host and not self.is_same_host(url): 17:24:05 raise HostChangedError(self, url, retries) 17:24:05 17:24:05 # Ensure that the URL we're connecting to is properly encoded 17:24:05 if url.startswith("/"): 17:24:05 url = to_str(_encode_target(url)) 17:24:05 else: 17:24:05 url = to_str(parsed_url.url) 17:24:05 17:24:05 conn = None 17:24:05 17:24:05 # Track whether `conn` needs to be released before 17:24:05 # returning/raising/recursing. Update this variable if necessary, and 17:24:05 # leave `release_conn` constant throughout the function. That way, if 17:24:05 # the function recurses, the original value of `release_conn` will be 17:24:05 # passed down into the recursive call, and its value will be respected. 17:24:05 # 17:24:05 # See issue #651 [1] for details. 17:24:05 # 17:24:05 # [1] 17:24:05 release_this_conn = release_conn 17:24:05 17:24:05 http_tunnel_required = connection_requires_http_tunnel( 17:24:05 self.proxy, self.proxy_config, destination_scheme 17:24:05 ) 17:24:05 17:24:05 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:05 # have to copy the headers dict so we can safely change it without those 17:24:05 # changes being reflected in anyone else's copy. 17:24:05 if not http_tunnel_required: 17:24:05 headers = headers.copy() # type: ignore[attr-defined] 17:24:05 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:05 17:24:05 # Must keep the exception bound to a separate variable or else Python 3 17:24:05 # complains about UnboundLocalError. 17:24:05 err = None 17:24:05 17:24:05 # Keep track of whether we cleanly exited the except block. This 17:24:05 # ensures we do proper cleanup in finally. 17:24:05 clean_exit = False 17:24:05 17:24:05 # Rewind body position, if needed. Record current position 17:24:05 # for future rewinds in the event of a redirect/retry. 17:24:05 body_pos = set_file_position(body, body_pos) 17:24:05 17:24:05 try: 17:24:05 # Request a connection from the queue. 17:24:05 timeout_obj = self._get_timeout(timeout) 17:24:05 conn = self._get_conn(timeout=pool_timeout) 17:24:05 17:24:05 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:05 17:24:05 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:05 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:05 try: 17:24:05 self._prepare_proxy(conn) 17:24:05 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:05 self._raise_timeout( 17:24:05 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:05 ) 17:24:05 raise 17:24:05 17:24:05 # If we're going to release the connection in ``finally:``, then 17:24:05 # the response doesn't need to know about the connection. Otherwise 17:24:05 # it will also try to release it and we'll have a double-release 17:24:05 # mess. 
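By the time urlopen reaches _make_request, the adapter has already fixed the arguments visible in the locals above (retries, preload_content=False, decode_content=False). The same low-level call can be made with urllib3 directly; this sketch assumes the endpoint and retry policy shown in the trace:

import urllib3

pool = urllib3.HTTPConnectionPool(
    "localhost", 8182, timeout=urllib3.Timeout(connect=10, read=10))
try:
    pool.urlopen(
        "GET",
        "/rests/data/network-topology:network-topology/"
        "topology=topology-netconf/node=XPDRA01?content=nonconfig",
        retries=urllib3.Retry(total=0, connect=None, read=False),
        preload_content=False,
        decode_content=False,
    )
except urllib3.exceptions.MaxRetryError as exc:
    # This is the exception the adapter then converts into requests.exceptions.ConnectionError.
    print(exc.reason)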
17:24:05 response_conn = conn if not release_conn else None 17:24:05 17:24:05 # Make the request on the HTTPConnection object 17:24:05 > response = self._make_request( 17:24:05 conn, 17:24:05 method, 17:24:05 url, 17:24:05 timeout=timeout_obj, 17:24:05 body=body, 17:24:05 headers=headers, 17:24:05 chunked=chunked, 17:24:05 retries=retries, 17:24:05 response_conn=response_conn, 17:24:05 preload_content=preload_content, 17:24:05 decode_content=decode_content, 17:24:05 **response_kw, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:05 conn.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:05 self.endheaders() 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:05 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:05 self.send(msg) 17:24:05 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:05 self.connect() 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:05 self.sock = self._new_conn() 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 except socket.gaierror as e: 17:24:05 raise NameResolutionError(self.host, self, e) from e 17:24:05 except SocketTimeout as e: 17:24:05 raise ConnectTimeoutError( 17:24:05 self, 17:24:05 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:05 ) from e 17:24:05 17:24:05 except OSError as e: 17:24:05 > raise NewConnectionError( 17:24:05 self, f"Failed to establish a new connection: {e}" 17:24:05 ) from e 17:24:05 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 
17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 > resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:05 retries = retries.increment( 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 17:24:05 response = None 17:24:05 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:05 _pool = 17:24:05 _stacktrace = 17:24:05 17:24:05 def increment( 17:24:05 self, 17:24:05 method: str | None = None, 17:24:05 url: str | None = None, 17:24:05 response: BaseHTTPResponse | None = None, 17:24:05 error: Exception | None = None, 17:24:05 _pool: ConnectionPool | None = None, 17:24:05 _stacktrace: TracebackType | None = None, 17:24:05 ) -> Self: 17:24:05 """Return a new Retry object with incremented retry counters. 17:24:05 17:24:05 :param response: A response object, or None, if the server did not 17:24:05 return a response. 17:24:05 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:05 :param Exception error: An error encountered during the request, or 17:24:05 None if the response was received successfully. 
17:24:05 17:24:05 :return: A new ``Retry`` object. 17:24:05 """ 17:24:05 if self.total is False and error: 17:24:05 # Disabled, indicate to re-raise the error. 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 17:24:05 total = self.total 17:24:05 if total is not None: 17:24:05 total -= 1 17:24:05 17:24:05 connect = self.connect 17:24:05 read = self.read 17:24:05 redirect = self.redirect 17:24:05 status_count = self.status 17:24:05 other = self.other 17:24:05 cause = "unknown" 17:24:05 status = None 17:24:05 redirect_location = None 17:24:05 17:24:05 if error and self._is_connection_error(error): 17:24:05 # Connect retry? 17:24:05 if connect is False: 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif connect is not None: 17:24:05 connect -= 1 17:24:05 17:24:05 elif error and self._is_read_error(error): 17:24:05 # Read retry? 17:24:05 if read is False or method is None or not self._is_method_retryable(method): 17:24:05 raise reraise(type(error), error, _stacktrace) 17:24:05 elif read is not None: 17:24:05 read -= 1 17:24:05 17:24:05 elif error: 17:24:05 # Other retry? 17:24:05 if other is not None: 17:24:05 other -= 1 17:24:05 17:24:05 elif response and response.get_redirect_location(): 17:24:05 # Redirect retry? 17:24:05 if redirect is not None: 17:24:05 redirect -= 1 17:24:05 cause = "too many redirects" 17:24:05 response_redirect_location = response.get_redirect_location() 17:24:05 if response_redirect_location: 17:24:05 redirect_location = response_redirect_location 17:24:05 status = response.status 17:24:05 17:24:05 else: 17:24:05 # Incrementing because of a server error like a 500 in 17:24:05 # status_forcelist and the given method is in the allowed_methods 17:24:05 cause = ResponseError.GENERIC_ERROR 17:24:05 if response and response.status: 17:24:05 if status_count is not None: 17:24:05 status_count -= 1 17:24:05 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:05 status = response.status 17:24:05 17:24:05 history = self.history + ( 17:24:05 RequestHistory(method, url, error, status, redirect_location), 17:24:05 ) 17:24:05 17:24:05 new_retry = self.new( 17:24:05 total=total, 17:24:05 connect=connect, 17:24:05 read=read, 17:24:05 redirect=redirect, 17:24:05 status=status_count, 17:24:05 other=other, 17:24:05 history=history, 17:24:05 ) 17:24:05 17:24:05 if new_retry.is_exhausted(): 17:24:05 reason = error or ResponseError(cause) 17:24:05 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:05 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:05 17:24:05 During handling of the above exception, another exception occurred: 17:24:05 17:24:05 self = 17:24:05 17:24:05 def test_08_xpdr_device_connected(self): 17:24:05 > response = test_utils.check_device_connection("XPDRA01") 17:24:05 17:24:05 transportpce_tests/1.2.1/test01_portmapping.py:103: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 transportpce_tests/common/test_utils.py:371: in check_device_connection 17:24:05 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 17:24:05 
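The Retry.increment source shown again here is where the refusal turns into MaxRetryError: with total=0 the counter goes negative on the first error and is_exhausted() trips. A self-contained illustration of that path with the urllib3 2.x API seen in the trace (the conn argument is left as None purely for the sketch):

from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
error = NewConnectionError(None, "Failed to establish a new connection")
try:
    # A single connection error exhausts total=0 and raises immediately.
    retry.increment(
        method="GET",
        url="/rests/data/network-topology:network-topology/"
            "topology=topology-netconf/node=XPDRA01?content=nonconfig",
        error=error,
    )
except MaxRetryError as exc:
    print(exc.reason)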
transportpce_tests/common/test_utils.py:116: in get_request 17:24:05 return requests.request( 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:05 return session.request(method=method, url=url, **kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:05 resp = self.send(prep, **send_kwargs) 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:05 r = adapter.send(request, **kwargs) 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 self = 17:24:05 request = , stream = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:05 proxies = OrderedDict() 17:24:05 17:24:05 def send( 17:24:05 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:05 ): 17:24:05 """Sends PreparedRequest object. Returns Response object. 17:24:05 17:24:05 :param request: The :class:`PreparedRequest ` being sent. 17:24:05 :param stream: (optional) Whether to stream the request content. 17:24:05 :param timeout: (optional) How long to wait for the server to send 17:24:05 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:05 read timeout) ` tuple. 17:24:05 :type timeout: float or tuple or urllib3 Timeout object 17:24:05 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:05 we verify the server's TLS certificate, or a string, in which case it 17:24:05 must be a path to a CA bundle to use 17:24:05 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:05 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:05 :rtype: requests.Response 17:24:05 """ 17:24:05 17:24:05 try: 17:24:05 conn = self.get_connection_with_tls_context( 17:24:05 request, verify, proxies=proxies, cert=cert 17:24:05 ) 17:24:05 except LocationValueError as e: 17:24:05 raise InvalidURL(e, request=request) 17:24:05 17:24:05 self.cert_verify(conn, request.url, verify, cert) 17:24:05 url = self.request_url(request, proxies) 17:24:05 self.add_headers( 17:24:05 request, 17:24:05 stream=stream, 17:24:05 timeout=timeout, 17:24:05 verify=verify, 17:24:05 cert=cert, 17:24:05 proxies=proxies, 17:24:05 ) 17:24:05 17:24:05 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:05 17:24:05 if isinstance(timeout, tuple): 17:24:05 try: 17:24:05 connect, read = timeout 17:24:05 timeout = TimeoutSauce(connect=connect, read=read) 17:24:05 except ValueError: 17:24:05 raise ValueError( 17:24:05 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:05 f"or a single float to set both timeouts to the same value." 
17:24:05 ) 17:24:05 elif isinstance(timeout, TimeoutSauce): 17:24:05 pass 17:24:05 else: 17:24:05 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:05 17:24:05 try: 17:24:05 resp = conn.urlopen( 17:24:05 method=request.method, 17:24:05 url=url, 17:24:05 body=request.body, 17:24:05 headers=request.headers, 17:24:05 redirect=False, 17:24:05 assert_same_host=False, 17:24:05 preload_content=False, 17:24:05 decode_content=False, 17:24:05 retries=self.max_retries, 17:24:05 timeout=timeout, 17:24:05 chunked=chunked, 17:24:05 ) 17:24:05 17:24:05 except (ProtocolError, OSError) as err: 17:24:05 raise ConnectionError(err, request=request) 17:24:05 17:24:05 except MaxRetryError as e: 17:24:05 if isinstance(e.reason, ConnectTimeoutError): 17:24:05 # TODO: Remove this in 3.0.0: see #2811 17:24:05 if not isinstance(e.reason, NewConnectionError): 17:24:05 raise ConnectTimeout(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, ResponseError): 17:24:05 raise RetryError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _ProxyError): 17:24:05 raise ProxyError(e, request=request) 17:24:05 17:24:05 if isinstance(e.reason, _SSLError): 17:24:05 # This branch is for urllib3 v1.22 and later. 17:24:05 raise SSLError(e, request=request) 17:24:05 17:24:05 > raise ConnectionError(e, request=request) 17:24:05 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:05 ----------------------------- Captured stdout call ----------------------------- 17:24:05 execution of test_08_xpdr_device_connected 17:24:05 _________ TransportPCEPortMappingTesting.test_09_xpdr_portmapping_info _________ 17:24:05 17:24:05 self = 17:24:05 17:24:05 def _new_conn(self) -> socket.socket: 17:24:05 """Establish a socket connection and set nodelay settings on it. 17:24:05 17:24:05 :return: New socket connection. 17:24:05 """ 17:24:05 try: 17:24:05 > sock = connection.create_connection( 17:24:05 (self._dns_host, self.port), 17:24:05 self.timeout, 17:24:05 source_address=self.source_address, 17:24:05 socket_options=self.socket_options, 17:24:05 ) 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:05 raise err 17:24:05 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:05 17:24:05 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:05 socket_options = [(6, 1, 1)] 17:24:05 17:24:05 def create_connection( 17:24:05 address: tuple[str, int], 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 source_address: tuple[str, int] | None = None, 17:24:05 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:05 ) -> socket.socket: 17:24:05 """Connect to *address* and return the socket object. 17:24:05 17:24:05 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:05 port)``) and return the socket object. 
Passing the optional 17:24:05 *timeout* parameter will set the timeout on the socket instance 17:24:05 before attempting to connect. If no *timeout* is supplied, the 17:24:05 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:05 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:05 for the socket to bind as a source address before making the connection. 17:24:05 An host of '' or port 0 tells the OS to use the default. 17:24:05 """ 17:24:05 17:24:05 host, port = address 17:24:05 if host.startswith("["): 17:24:05 host = host.strip("[]") 17:24:05 err = None 17:24:05 17:24:05 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:05 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:05 # The original create_connection function always returns all records. 17:24:05 family = allowed_gai_family() 17:24:05 17:24:05 try: 17:24:05 host.encode("idna") 17:24:05 except UnicodeError: 17:24:05 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:05 17:24:05 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:05 af, socktype, proto, canonname, sa = res 17:24:05 sock = None 17:24:05 try: 17:24:05 sock = socket.socket(af, socktype, proto) 17:24:05 17:24:05 # If provided, set socket level options before connecting. 17:24:05 _set_socket_options(sock, socket_options) 17:24:05 17:24:05 if timeout is not _DEFAULT_TIMEOUT: 17:24:05 sock.settimeout(timeout) 17:24:05 if source_address: 17:24:05 sock.bind(source_address) 17:24:05 > sock.connect(sa) 17:24:05 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:05 17:24:05 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:05 17:24:05 The above exception was the direct cause of the following exception: 17:24:05 17:24:05 self = 17:24:05 method = 'GET' 17:24:05 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 17:24:05 body = None 17:24:05 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:05 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:05 redirect = False, assert_same_host = False 17:24:05 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:05 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:05 decode_content = False, response_kw = {} 17:24:05 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None) 17:24:05 destination_scheme = None, conn = None, release_this_conn = True 17:24:05 http_tunnel_required = False, err = None, clean_exit = False 17:24:05 17:24:05 def urlopen( # type: ignore[override] 17:24:05 self, 17:24:05 method: str, 17:24:05 url: str, 17:24:05 body: _TYPE_BODY | None = None, 17:24:05 headers: typing.Mapping[str, str] | None = None, 17:24:05 retries: Retry | bool | int | None = None, 17:24:05 redirect: bool = True, 17:24:05 assert_same_host: bool = True, 17:24:05 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:05 pool_timeout: int | None = None, 17:24:05 release_conn: bool | None = None, 17:24:05 chunked: bool = False, 17:24:05 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:05 preload_content: bool = True, 
17:24:05 decode_content: bool = True, 17:24:05 **response_kw: typing.Any, 17:24:05 ) -> BaseHTTPResponse: 17:24:05 """ 17:24:05 Get a connection from the pool and perform an HTTP request. This is the 17:24:05 lowest level call for making a request, so you'll need to specify all 17:24:05 the raw details. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 More commonly, it's appropriate to use a convenience method 17:24:05 such as :meth:`request`. 17:24:05 17:24:05 .. note:: 17:24:05 17:24:05 `release_conn` will only behave as expected if 17:24:05 `preload_content=False` because we want to make 17:24:05 `preload_content=False` the default behaviour someday soon without 17:24:05 breaking backwards compatibility. 17:24:05 17:24:05 :param method: 17:24:05 HTTP request method (such as GET, POST, PUT, etc.) 17:24:05 17:24:05 :param url: 17:24:05 The URL to perform the request on. 17:24:05 17:24:05 :param body: 17:24:05 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:05 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:05 17:24:05 :param headers: 17:24:05 Dictionary of custom headers to send, such as User-Agent, 17:24:05 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:05 these headers completely replace any pool-specific headers. 17:24:05 17:24:05 :param retries: 17:24:05 Configure the number of retries to allow before raising a 17:24:05 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:05 17:24:05 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:05 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:05 over different types of retries. 17:24:05 Pass an integer number to retry connection errors that many times, 17:24:05 but no other types of errors. Pass zero to never retry. 17:24:05 17:24:05 If ``False``, then retries are disabled and any exception is raised 17:24:05 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:05 the redirect response will be returned. 17:24:05 17:24:05 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:05 17:24:05 :param redirect: 17:24:05 If True, automatically handle redirects (status codes 301, 302, 17:24:05 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:05 will disable redirect, too. 17:24:05 17:24:05 :param assert_same_host: 17:24:05 If ``True``, will make sure that the host of the pool requests is 17:24:05 consistent else will raise HostChangedError. When ``False``, you can 17:24:05 use the pool on an HTTP proxy and request foreign hosts. 17:24:05 17:24:05 :param timeout: 17:24:05 If specified, overrides the default timeout for this one 17:24:05 request. It may be a float (in seconds) or an instance of 17:24:05 :class:`urllib3.util.Timeout`. 17:24:05 17:24:05 :param pool_timeout: 17:24:05 If set and the pool is set to block=True, then this method will 17:24:05 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:05 connection is available within the time period. 17:24:05 17:24:05 :param bool preload_content: 17:24:05 If True, the response's body will be preloaded into memory. 17:24:05 17:24:05 :param bool decode_content: 17:24:05 If True, will attempt to decode the body based on the 17:24:05 'content-encoding' header. 
17:24:05 17:24:05 :param release_conn: 17:24:05 If False, then the urlopen call will not release the connection 17:24:05 back into the pool once a response is received (but will release if 17:24:05 you read the entire contents of the response such as when 17:24:05 `preload_content=True`). This is useful if you're not preloading 17:24:05 the response's content immediately. You will need to call 17:24:05 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:05 back into the pool. If None, it takes the value of ``preload_content`` 17:24:05 which defaults to ``True``. 17:24:05 17:24:05 :param bool chunked: 17:24:05 If True, urllib3 will send the body using chunked transfer 17:24:05 encoding. Otherwise, urllib3 will send the body using the standard 17:24:05 content-length form. Defaults to False. 17:24:05 17:24:05 :param int body_pos: 17:24:05 Position to seek to in file-like body in the event of a retry or 17:24:05 redirect. Typically this won't need to be set because urllib3 will 17:24:05 auto-populate the value when needed. 17:24:05 """ 17:24:05 parsed_url = parse_url(url) 17:24:05 destination_scheme = parsed_url.scheme 17:24:05 17:24:05 if headers is None: 17:24:05 headers = self.headers 17:24:05 17:24:05 if not isinstance(retries, Retry): 17:24:05 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:05 17:24:05 if release_conn is None: 17:24:05 release_conn = preload_content 17:24:05 17:24:05 # Check host 17:24:05 if assert_same_host and not self.is_same_host(url): 17:24:05 raise HostChangedError(self, url, retries) 17:24:05 17:24:05 # Ensure that the URL we're connecting to is properly encoded 17:24:05 if url.startswith("/"): 17:24:05 url = to_str(_encode_target(url)) 17:24:05 else: 17:24:05 url = to_str(parsed_url.url) 17:24:05 17:24:05 conn = None 17:24:05 17:24:05 # Track whether `conn` needs to be released before 17:24:05 # returning/raising/recursing. Update this variable if necessary, and 17:24:05 # leave `release_conn` constant throughout the function. That way, if 17:24:05 # the function recurses, the original value of `release_conn` will be 17:24:05 # passed down into the recursive call, and its value will be respected. 17:24:05 # 17:24:05 # See issue #651 [1] for details. 17:24:05 # 17:24:05 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 
17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 
17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_09_xpdr_portmapping_info(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:109: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
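The timeout parameter just described accepts either shape; a short sketch (illustrative only, reusing the node-info URL from this run) showing that a bare float and a (connect, read) tuple both end up as the urllib3 Timeout(connect=10, read=10) seen in these tracebacks:

    import requests

    url = ("http://localhost:8182/rests/data/transportpce-portmapping:network/"
           "nodes=XPDRA01/node-info")
    try:
        # A single float sets both the connect and the read timeout to 10 s,
        requests.get(url, timeout=10)
        # while a (connect, read) tuple sets the two phases independently.
        requests.get(url, timeout=(10, 10))
    except requests.exceptions.ConnectionError:
        pass  # expected while the controller is down, as in this job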
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 
17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_09_xpdr_portmapping_info 17:24:06 _______ TransportPCEPortMappingTesting.test_10_xpdr_portmapping_NETWORK1 _______ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 
17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 
17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 
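The retries parameter documented above takes a Retry instance, an int, False, or None; a sketch (illustrative, not taken from the test code) of issuing the same pool-level call directly with urllib3 and an explicit policy:

    import urllib3
    from urllib3.util.retry import Retry

    pool = urllib3.HTTPConnectionPool("localhost", 8182)
    try:
        # Mirrors how the requests adapter invokes urlopen() in this log:
        # no redirects, no preloading, and an explicit Retry object.
        pool.urlopen(
            "GET",
            "/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1",
            retries=Retry(total=0, connect=None, read=False),
            redirect=False,
            preload_content=False,
        )
    except urllib3.exceptions.MaxRetryError as exc:
        print("retries exhausted:", exc.reason)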
17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 
17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 
17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_10_xpdr_portmapping_NETWORK1(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:122: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 
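Further below, adapter.send() translates this MaxRetryError into requests' own exception types (ConnectTimeout, RetryError, ProxyError, SSLError, or plain ConnectionError); a sketch (illustrative only) of handling them at the caller:

    import requests

    url = ("http://localhost:8182/rests/data/transportpce-portmapping:network/"
           "nodes=XPDRA01/mapping=XPDR1-NETWORK1")
    try:
        requests.get(url, timeout=(10, 10))
    except requests.exceptions.ConnectTimeout:
        print("connect phase timed out")
    except requests.exceptions.RetryError:
        print("retry budget exhausted on bad response statuses")
    except requests.exceptions.SSLError:
        print("TLS failure")
    except requests.exceptions.ConnectionError as exc:
        # Refused connections, as throughout this run, land in this branch.
        print("connection error:", exc)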
transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_10_xpdr_portmapping_NETWORK1 17:24:06 _______ TransportPCEPortMappingTesting.test_11_xpdr_portmapping_NETWORK2 _______ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. 
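[editor's note — annotation, not console output] Every failure in this block bottoms out in create_connection() receiving ECONNREFUSED ([Errno 111]) from localhost:8182, i.e. the controller the suite expects to talk to is not listening. A hypothetical readiness probe that a test setup could run before issuing RESTCONF requests (wait_for_port is an illustrative helper, not part of transportpce_tests/common/test_utils.py):

    import socket
    import time

    def wait_for_port(host: str, port: int, deadline_s: float = 60.0) -> bool:
        """Poll until a TCP connect to (host, port) succeeds or the deadline expires."""
        end = time.monotonic() + deadline_s
        while time.monotonic() < end:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:          # ECONNREFUSED, timeouts, unreachable, ...
                time.sleep(1)
        return False

    # if not wait_for_port("localhost", 8182):
    #     raise RuntimeError("RESTCONF endpoint on localhost:8182 never came up")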
If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 
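[editor's note — annotation, not console output] The urlopen() signature above (and its body further down: Retry.from_int() on the adapter's retries, _get_conn() from the pool, _make_request(), then retries.increment() on failure) is the urllib3 layer where the MaxRetryError originates. The same behaviour can be reproduced against urllib3's public API alone; host, port and retries=0 are taken from the log, the rest is illustrative:

    import urllib3

    pool = urllib3.HTTPConnectionPool("localhost", 8182, timeout=10)
    try:
        # retries=0 mirrors the Retry(total=0, ...) that requests passes down.
        pool.urlopen("GET", "/rests/data/transportpce-portmapping:network", retries=0)
    except urllib3.exceptions.MaxRetryError as exc:
        print("reason:", exc.reason)   # NewConnectionError(... Connection refused)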
17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). 
This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 
17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 
17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_11_xpdr_portmapping_NETWORK2(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:133: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 
17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 
17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_11_xpdr_portmapping_NETWORK2 17:24:06 _______ TransportPCEPortMappingTesting.test_12_xpdr_portmapping_CLIENT1 ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 
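[editor's note — annotation, not console output] The except MaxRetryError branch of HTTPAdapter.send() quoted in this frame is what maps urllib3's error onto the requests.exceptions.ConnectionError reported for each of these tests; the original urllib3 exception remains reachable from the requests one. An illustrative way to inspect that chain (nothing here is project code):

    import requests
    from urllib3.exceptions import MaxRetryError, NewConnectionError

    try:
        requests.get(
            "http://localhost:8182/rests/data/transportpce-portmapping:network",
            timeout=(10, 10),
        )
    except requests.exceptions.ConnectionError as exc:
        inner = exc.args[0]                       # the MaxRetryError raised by urllib3
        if isinstance(inner, MaxRetryError):
            print("pool   :", inner.pool)
            print("reason :", inner.reason)       # NewConnectionError([Errno 111] ...)
            print("refused:", isinstance(inner.reason, NewConnectionError))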
17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 
17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. 
Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 
17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 
17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_12_xpdr_portmapping_CLIENT1(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:144: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 
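[editor's note — annotation, not console output] test_10 through test_13 fail with the identical ConnectionError because the controller behind test_utils.get_request() never became reachable; every further GET in this class just repeats the refused connect. A hypothetical pytest-level guard that would skip instead of cascading tracebacks (illustrative only; this is not how these suites are wired):

    import socket
    import pytest

    def _restconf_reachable(host: str = "localhost", port: int = 8182) -> bool:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            return False

    @pytest.fixture(autouse=True)
    def _require_controller():
        # Skip the test (rather than fail with a connection traceback) when
        # nothing is listening on the RESTCONF port these tests target.
        if not _restconf_reachable():
            pytest.skip("RESTCONF endpoint on localhost:8182 is not reachable")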
transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_12_xpdr_portmapping_CLIENT1 17:24:06 _______ TransportPCEPortMappingTesting.test_13_xpdr_portmapping_CLIENT2 ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. 
If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 
""" 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). 
This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 
17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 
17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_13_xpdr_portmapping_CLIENT2(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:156: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 
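Aside on the Retry object that keeps appearing in these locals: requests' default HTTPAdapter is built with zero retries, which urllib3 normalizes to Retry(total=0, connect=None, read=False, redirect=None, status=None), so the very first refused connection exhausts the budget, increment() raises MaxRetryError, and adapter.send() re-wraps it as requests.exceptions.ConnectionError. If connect retries were wanted, they could be opted into through the public requests/urllib3 API, roughly as sketched below (illustrative only, not part of transportpce_tests):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Retry failed TCP connection attempts a few times with exponential
    # back-off instead of failing on the first connect error.
    retry = Retry(total=3, connect=3, read=0, backoff_factor=0.5)
    session.mount("http://", HTTPAdapter(max_retries=retry))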
17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 
17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_13_xpdr_portmapping_CLIENT2 17:24:06 _______ TransportPCEPortMappingTesting.test_14_xpdr_portmapping_CLIENT3 ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 
17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 
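The ConnectionRefusedError ([Errno 111]) raised inside create_connection() above simply means that nothing is listening on localhost:8182 at this point in the run, i.e. the RESTCONF endpoint the port-mapping tests query is not up. A stand-alone check with the standard library reproduces the same errno (a sketch, independent of test_utils):

    import socket

    try:
        # Same target and connect timeout as in the traceback locals above.
        with socket.create_connection(("localhost", 8182), timeout=10):
            print("port 8182 is accepting connections")
    except ConnectionRefusedError as exc:
        print(f"refused, as in the log: {exc}")  # [Errno 111] Connection refused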
17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. 
Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 
17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
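The Timeout(connect=10, read=10, total=None) shown in the locals of every frame comes from the timeout handling quoted here: requests accepts either a float or a (connect, read) tuple and converts it into urllib3's Timeout (imported in requests.adapters as TimeoutSauce). A short illustration of that equivalence:

    from urllib3.util import Timeout

    # timeout=(10, 10) on the requests side becomes this urllib3 object,
    # matching the value printed in the traceback locals.
    connect, read = (10, 10)
    print(Timeout(connect=connect, read=read))  # Timeout(connect=10, read=10, total=None)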
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 
17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_14_xpdr_portmapping_CLIENT3(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:168: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 
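All of these failures reduce to the same call: test_utils.get_request() issuing a GET against the TransportPCE port-mapping RESTCONF resource while nothing accepts connections on port 8182. A minimal stand-alone reproduction built from the captured locals (host, path, JSON headers, admin/admin basic auth matching the Basic YWRtaW46YWRtaW4= header, and the (10, 10) timeout); the base-URL assembly inside test_utils is inferred from the traceback, not quoted from it:

    import requests

    url = ("http://localhost:8182/rests/data/"
           "transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3")
    try:
        response = requests.get(
            url,
            headers={"Content-Type": "application/json", "Accept": "application/json"},
            auth=("admin", "admin"),  # equivalent to the Authorization header in the locals
            timeout=(10, 10),         # connect/read timeouts seen throughout the traceback
        )
        print(response.status_code)
    except requests.exceptions.ConnectionError as exc:
        # With the controller down this raises the same error the tests report.
        print(exc)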
transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_14_xpdr_portmapping_CLIENT3 17:24:06 _______ TransportPCEPortMappingTesting.test_15_xpdr_portmapping_CLIENT4 ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. 
If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 
""" 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). 
This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 
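The urlopen() parameters documented above (retries, timeout, preload_content, release_conn) are the ones requests fills in on the test's behalf, as the captured locals show. For reference, roughly the same RESTCONF read issued directly through urllib3 would look like this (a sketch with illustrative values mirroring those locals, not code from the test suite):

import urllib3

http = urllib3.PoolManager()
resp = http.request(
    "GET",
    "http://localhost:8182/rests/data/transportpce-portmapping:network"
    "/nodes=XPDRA01/mapping=XPDR1-CLIENT4",
    headers=urllib3.make_headers(basic_auth="admin:admin"),
    timeout=urllib3.Timeout(connect=10, read=10),
    retries=urllib3.Retry(total=0),   # fail on the first refused connection, as in the log
    preload_content=False,            # body is streamed; caller releases the connection
)
print(resp.status)
resp.release_conn()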
17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 
17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_15_xpdr_portmapping_CLIENT4(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:180: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 
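The MaxRetryError above is raised on the very first attempt because the captured Retry(total=0, connect=None, read=False, ...) is requests' stock HTTPAdapter configuration (max_retries=0). If refused connections during controller start-up were the only concern, a session could opt into connection retries with back-off, for example (illustrative values, not what the test suite currently does):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=5, connect=5, backoff_factor=0.5,
              allowed_methods=["GET", "PUT", "POST", "DELETE"])
session.mount("http://", HTTPAdapter(max_retries=retry))
# session.get("http://localhost:8182/rests/data/...", auth=("admin", "admin"))
# would now retry refused connections with exponential back-off instead of
# failing immediately with MaxRetryError.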
17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 
17:24:06 raise SSLError(e, request=request)
17:24:06
17:24:06 > raise ConnectionError(e, request=request)
17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
17:24:06
17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
17:24:06 ----------------------------- Captured stdout call -----------------------------
17:24:06 execution of test_15_xpdr_portmapping_CLIENT4
17:24:06 _______ TransportPCEPortMappingTesting.test_16_xpdr_device_disconnection _______
17:24:06
17:24:06 self =
17:24:06
17:24:06 def _new_conn(self) -> socket.socket:
17:24:06 """Establish a socket connection and set nodelay settings on it.
17:24:06
17:24:06 :return: New socket connection.
17:24:06 """
17:24:06 try:
17:24:06 > sock = connection.create_connection(
17:24:06 (self._dns_host, self.port),
17:24:06 self.timeout,
17:24:06 source_address=self.source_address,
17:24:06 socket_options=self.socket_options,
17:24:06 )
17:24:06
17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
17:24:06 raise err
17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
17:24:06
17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None
17:24:06 socket_options = [(6, 1, 1)]
17:24:06
17:24:06 def create_connection(
17:24:06 address: tuple[str, int],
17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
17:24:06 source_address: tuple[str, int] | None = None,
17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
17:24:06 ) -> socket.socket:
17:24:06 """Connect to *address* and return the socket object.
17:24:06
17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host,
17:24:06 port)``) and return the socket object. Passing the optional
17:24:06 *timeout* parameter will set the timeout on the socket instance
17:24:06 before attempting to connect. If no *timeout* is supplied, the
17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout`
17:24:06 is used. If *source_address* is set it must be a tuple of (host, port)
17:24:06 for the socket to bind as a source address before making the connection.
17:24:06 An host of '' or port 0 tells the OS to use the default.
17:24:06 """
17:24:06
17:24:06 host, port = address
17:24:06 if host.startswith("["):
17:24:06 host = host.strip("[]")
17:24:06 err = None
17:24:06
17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
17:24:06 # The original create_connection function always returns all records.
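test_15_xpdr_portmapping_CLIENT4 above never reaches an assertion; the GET is refused outright. Stripped of the project's test_utils helpers, the check it performs is roughly the following (a sketch with hypothetical names; the real test lives in transportpce_tests/1.2.1/test01_portmapping.py, and the skip guard is only an illustration of how an unreachable controller could be reported more directly):

import unittest
import requests

class PortmappingSmokeTest(unittest.TestCase):
    def test_client4_mapping(self):
        url = ("http://localhost:8182/rests/data/transportpce-portmapping:network"
               "/nodes=XPDRA01/mapping=XPDR1-CLIENT4")
        try:
            response = requests.get(url, auth=("admin", "admin"),
                                    headers={"Accept": "application/json"},
                                    timeout=10)
        except requests.exceptions.ConnectionError:
            self.skipTest("RESTCONF endpoint on localhost:8182 is not reachable")
        self.assertEqual(requests.codes.ok, response.status_code)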
17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'DELETE' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 
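The socket_options = [(6, 1, 1)] captured in the locals are the (level, option, value) tuples that _set_socket_options() applies before connecting; on this Linux builder they correspond to urllib3's default TCP_NODELAY setting, i.e. Nagle's algorithm is disabled. A small stand-alone equivalent (sketch only):

import socket

# [(6, 1, 1)] == [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)] on this builder;
# urllib3 simply loops over the tuples and applies setsockopt for each one.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
for level, optname, value in [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]:
    sock.setsockopt(level, optname, value)
sock.close()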
17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. 
Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 
17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'DELETE' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 
17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_16_xpdr_device_disconnection(self): 17:24:06 > response = test_utils.unmount_device("XPDRA01") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:191: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:360: in unmount_device 17:24:06 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 17:24:06 transportpce_tests/common/test_utils.py:133: in 
delete_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_16_xpdr_device_disconnection 17:24:06 _______ TransportPCEPortMappingTesting.test_17_xpdr_device_disconnected ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. 
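test_16_xpdr_device_disconnection above fails the same way while trying to unmount XPDRA01. Reconstructed from the captured method, URL and headers, the RESTCONF call behind test_utils.unmount_device("XPDRA01") is roughly (a sketch, not the helper's actual implementation):

import requests

resp = requests.delete(
    "http://localhost:8182/rests/data/network-topology:network-topology"
    "/topology=topology-netconf/node=XPDRA01",
    auth=("admin", "admin"),
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    timeout=10,
)
print(resp.status_code)   # the test then inspects the returned status code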
If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 
17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). 
This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 
17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 
17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_17_xpdr_device_disconnected(self): 17:24:06 > response = test_utils.check_device_connection("XPDRA01") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:195: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:371: in check_device_connection 17:24:06 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 17:24:06 transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 
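The failure above comes down to a dead RESTCONF endpoint: test_17_xpdr_device_disconnected calls test_utils.check_device_connection("XPDRA01"), which goes through test_utils.get_request and requests.request and fails with a ConnectionError because nothing answers on localhost:8182. A minimal stand-alone sketch of an equivalent GET, built from the URL, headers and Timeout(connect=10, read=10) visible in the captured locals (the admin/admin pair is decoded from the captured Basic Authorization header; the plain-http scheme is assumed from HTTPConnectionPool):

    import requests

    RESTCONF = ("http://localhost:8182/rests/data/network-topology:network-topology"
                "/topology=topology-netconf/node=XPDRA01?content=nonconfig")

    try:
        r = requests.get(
            RESTCONF,
            headers={"Accept": "application/json", "Content-Type": "application/json"},
            auth=("admin", "admin"),   # decoded from the captured Authorization header
            timeout=(10, 10),          # (connect, read), matching the captured Timeout
        )
        print(r.status_code, r.text)
    except requests.exceptions.ConnectionError as exc:
        # This is the exception the test run reports while the controller is down.
        print("RESTCONF endpoint unreachable:", exc)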
17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 
17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_17_xpdr_device_disconnected 17:24:06 _______ TransportPCEPortMappingTesting.test_18_xpdr_device_not_connected _______ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 
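All of the failures in this part of the log share one root cause, visible in the create_connection frame above: the TCP connect to ('localhost', 8182) is refused, meaning the controller's RESTCONF listener is not running (or has already been shut down) by the time these late tests execute. A quick stand-alone probe, independent of the test suite, to check whether anything is listening on that port; the helper name and defaults are illustrative and not part of transportpce_tests:

    import socket

    def restconf_port_open(host: str = "localhost", port: int = 8182, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port can be established."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:   # covers ConnectionRefusedError ([Errno 111]) and timeouts
            return False

    print("RESTCONF reachable:", restconf_port_open())

A False here while the suite is still running points at the controller being down rather than at the RESTCONF requests themselves.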
17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 
17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 
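To make the retries semantics documented above concrete: in this run requests hands urllib3 a Retry(total=0, connect=None, read=False, ...), so the very first connection failure exhausts the budget and urlopen raises MaxRetryError instead of retrying. A small sketch using urllib3 directly, with the node-info URL from this log and the assumption that port 8182 is still closed:

    import urllib3
    from urllib3.exceptions import MaxRetryError
    from urllib3.util.retry import Retry

    http = urllib3.PoolManager()
    try:
        http.request(
            "GET",
            "http://localhost:8182/rests/data/"
            "transportpce-portmapping:network/nodes=XPDRA01/node-info",
            retries=Retry(total=0, read=False),   # fail fast, as in the captured locals
        )
    except MaxRetryError as exc:
        # exc.reason carries the underlying NewConnectionError shown in this log.
        print("no retries left:", repr(exc.reason))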
17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 
17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
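The timeout handling documented just above is what produces the Timeout(connect=10, read=10, total=None) value seen in the captured locals: the test harness passes a (connect, read) tuple and requests converts it into a urllib3 Timeout (imported there as TimeoutSauce). A short sketch of that normalization step in isolation:

    from urllib3.util.timeout import Timeout   # requests imports this class as TimeoutSauce

    timeout = (10, 10)   # (connect, read) tuple, as passed by the caller

    if isinstance(timeout, tuple):
        connect, read = timeout
        timeout = Timeout(connect=connect, read=read)
    elif isinstance(timeout, Timeout):
        pass                                    # already a urllib3 Timeout object
    else:
        timeout = Timeout(connect=timeout, read=timeout)

    print(timeout)   # Timeout(connect=10, read=10, total=None)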
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 
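The increment() method documented above is the step that turns the single refused connection into the MaxRetryError reported further down. A stand-alone sketch of that step; passing conn=None and leaving _pool unset is only for demonstration, since inside urllib3 these are the live connection and pool objects:

    from urllib3.exceptions import MaxRetryError, NewConnectionError
    from urllib3.util.retry import Retry

    retry = Retry(total=0, read=False)   # the policy shown in the captured locals
    err = NewConnectionError(
        None,   # demonstration only: urllib3 passes the HTTPConnection here
        "Failed to establish a new connection: [Errno 111] Connection refused",
    )
    try:
        retry.increment(
            method="GET",
            url="/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info",
            error=err,
        )
    except MaxRetryError as exc:
        print(exc.reason is err)   # True: the connection error becomes the 'reason'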
17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_18_xpdr_device_not_connected(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:203: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_18_xpdr_device_not_connected 17:24:06 _______ TransportPCEPortMappingTesting.test_19_rdm_device_disconnection ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. 
If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'DELETE' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 
17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). 
This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 
17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'DELETE' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 
17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_19_rdm_device_disconnection(self): 17:24:06 > response = test_utils.unmount_device("ROADMA01") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:211: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:360: in unmount_device 17:24:06 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 17:24:06 transportpce_tests/common/test_utils.py:133: in delete_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 
17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_19_rdm_device_disconnection 17:24:06 ________ TransportPCEPortMappingTesting.test_20_rdm_device_disconnected ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 
17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 
17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. 
Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 
17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 
17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_20_rdm_device_disconnected(self): 17:24:06 > response = test_utils.check_device_connection("ROADMA01") 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:215: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:371: in check_device_connection 17:24:06 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 17:24:06 
transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 17:24:06 raise SSLError(e, request=request) 17:24:06 17:24:06 > raise ConnectionError(e, request=request) 17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 17:24:06 ----------------------------- Captured stdout call ----------------------------- 17:24:06 execution of test_20_rdm_device_disconnected 17:24:06 _______ TransportPCEPortMappingTesting.test_21_rdm_device_not_connected ________ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 > sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 17:24:06 raise err 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 address = ('localhost', 8182), timeout = 10, source_address = None 17:24:06 socket_options = [(6, 1, 1)] 17:24:06 17:24:06 def create_connection( 17:24:06 address: tuple[str, int], 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 source_address: tuple[str, int] | None = None, 17:24:06 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 17:24:06 ) -> socket.socket: 17:24:06 """Connect to *address* and return the socket object. 17:24:06 17:24:06 Convenience function. Connect to *address* (a 2-tuple ``(host, 17:24:06 port)``) and return the socket object. 
Passing the optional 17:24:06 *timeout* parameter will set the timeout on the socket instance 17:24:06 before attempting to connect. If no *timeout* is supplied, the 17:24:06 global default timeout setting returned by :func:`socket.getdefaulttimeout` 17:24:06 is used. If *source_address* is set it must be a tuple of (host, port) 17:24:06 for the socket to bind as a source address before making the connection. 17:24:06 An host of '' or port 0 tells the OS to use the default. 17:24:06 """ 17:24:06 17:24:06 host, port = address 17:24:06 if host.startswith("["): 17:24:06 host = host.strip("[]") 17:24:06 err = None 17:24:06 17:24:06 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 17:24:06 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 17:24:06 # The original create_connection function always returns all records. 17:24:06 family = allowed_gai_family() 17:24:06 17:24:06 try: 17:24:06 host.encode("idna") 17:24:06 except UnicodeError: 17:24:06 raise LocationParseError(f"'{host}', label empty or too long") from None 17:24:06 17:24:06 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 17:24:06 af, socktype, proto, canonname, sa = res 17:24:06 sock = None 17:24:06 try: 17:24:06 sock = socket.socket(af, socktype, proto) 17:24:06 17:24:06 # If provided, set socket level options before connecting. 17:24:06 _set_socket_options(sock, socket_options) 17:24:06 17:24:06 if timeout is not _DEFAULT_TIMEOUT: 17:24:06 sock.settimeout(timeout) 17:24:06 if source_address: 17:24:06 sock.bind(source_address) 17:24:06 > sock.connect(sa) 17:24:06 E ConnectionRefusedError: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 17:24:06 body = None 17:24:06 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 17:24:06 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 redirect = False, assert_same_host = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 17:24:06 release_conn = False, chunked = False, body_pos = None, preload_content = False 17:24:06 decode_content = False, response_kw = {} 17:24:06 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None) 17:24:06 destination_scheme = None, conn = None, release_this_conn = True 17:24:06 http_tunnel_required = False, err = None, clean_exit = False 17:24:06 17:24:06 def urlopen( # type: ignore[override] 17:24:06 self, 17:24:06 method: str, 17:24:06 url: str, 17:24:06 body: _TYPE_BODY | None = None, 17:24:06 headers: typing.Mapping[str, str] | None = None, 17:24:06 retries: Retry | bool | int | None = None, 17:24:06 redirect: bool = True, 17:24:06 assert_same_host: bool = True, 17:24:06 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 17:24:06 pool_timeout: int | None = None, 17:24:06 release_conn: bool | None = None, 17:24:06 chunked: bool = False, 17:24:06 body_pos: _TYPE_BODY_POSITION | None = None, 17:24:06 preload_content: bool = True, 
17:24:06 decode_content: bool = True, 17:24:06 **response_kw: typing.Any, 17:24:06 ) -> BaseHTTPResponse: 17:24:06 """ 17:24:06 Get a connection from the pool and perform an HTTP request. This is the 17:24:06 lowest level call for making a request, so you'll need to specify all 17:24:06 the raw details. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 More commonly, it's appropriate to use a convenience method 17:24:06 such as :meth:`request`. 17:24:06 17:24:06 .. note:: 17:24:06 17:24:06 `release_conn` will only behave as expected if 17:24:06 `preload_content=False` because we want to make 17:24:06 `preload_content=False` the default behaviour someday soon without 17:24:06 breaking backwards compatibility. 17:24:06 17:24:06 :param method: 17:24:06 HTTP request method (such as GET, POST, PUT, etc.) 17:24:06 17:24:06 :param url: 17:24:06 The URL to perform the request on. 17:24:06 17:24:06 :param body: 17:24:06 Data to send in the request body, either :class:`str`, :class:`bytes`, 17:24:06 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 17:24:06 17:24:06 :param headers: 17:24:06 Dictionary of custom headers to send, such as User-Agent, 17:24:06 If-None-Match, etc. If None, pool headers are used. If provided, 17:24:06 these headers completely replace any pool-specific headers. 17:24:06 17:24:06 :param retries: 17:24:06 Configure the number of retries to allow before raising a 17:24:06 :class:`~urllib3.exceptions.MaxRetryError` exception. 17:24:06 17:24:06 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 17:24:06 :class:`~urllib3.util.retry.Retry` object for fine-grained control 17:24:06 over different types of retries. 17:24:06 Pass an integer number to retry connection errors that many times, 17:24:06 but no other types of errors. Pass zero to never retry. 17:24:06 17:24:06 If ``False``, then retries are disabled and any exception is raised 17:24:06 immediately. Also, instead of raising a MaxRetryError on redirects, 17:24:06 the redirect response will be returned. 17:24:06 17:24:06 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 17:24:06 17:24:06 :param redirect: 17:24:06 If True, automatically handle redirects (status codes 301, 302, 17:24:06 303, 307, 308). Each redirect counts as a retry. Disabling retries 17:24:06 will disable redirect, too. 17:24:06 17:24:06 :param assert_same_host: 17:24:06 If ``True``, will make sure that the host of the pool requests is 17:24:06 consistent else will raise HostChangedError. When ``False``, you can 17:24:06 use the pool on an HTTP proxy and request foreign hosts. 17:24:06 17:24:06 :param timeout: 17:24:06 If specified, overrides the default timeout for this one 17:24:06 request. It may be a float (in seconds) or an instance of 17:24:06 :class:`urllib3.util.Timeout`. 17:24:06 17:24:06 :param pool_timeout: 17:24:06 If set and the pool is set to block=True, then this method will 17:24:06 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 17:24:06 connection is available within the time period. 17:24:06 17:24:06 :param bool preload_content: 17:24:06 If True, the response's body will be preloaded into memory. 17:24:06 17:24:06 :param bool decode_content: 17:24:06 If True, will attempt to decode the body based on the 17:24:06 'content-encoding' header. 
17:24:06 17:24:06 :param release_conn: 17:24:06 If False, then the urlopen call will not release the connection 17:24:06 back into the pool once a response is received (but will release if 17:24:06 you read the entire contents of the response such as when 17:24:06 `preload_content=True`). This is useful if you're not preloading 17:24:06 the response's content immediately. You will need to call 17:24:06 ``r.release_conn()`` on the response ``r`` to return the connection 17:24:06 back into the pool. If None, it takes the value of ``preload_content`` 17:24:06 which defaults to ``True``. 17:24:06 17:24:06 :param bool chunked: 17:24:06 If True, urllib3 will send the body using chunked transfer 17:24:06 encoding. Otherwise, urllib3 will send the body using the standard 17:24:06 content-length form. Defaults to False. 17:24:06 17:24:06 :param int body_pos: 17:24:06 Position to seek to in file-like body in the event of a retry or 17:24:06 redirect. Typically this won't need to be set because urllib3 will 17:24:06 auto-populate the value when needed. 17:24:06 """ 17:24:06 parsed_url = parse_url(url) 17:24:06 destination_scheme = parsed_url.scheme 17:24:06 17:24:06 if headers is None: 17:24:06 headers = self.headers 17:24:06 17:24:06 if not isinstance(retries, Retry): 17:24:06 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 17:24:06 17:24:06 if release_conn is None: 17:24:06 release_conn = preload_content 17:24:06 17:24:06 # Check host 17:24:06 if assert_same_host and not self.is_same_host(url): 17:24:06 raise HostChangedError(self, url, retries) 17:24:06 17:24:06 # Ensure that the URL we're connecting to is properly encoded 17:24:06 if url.startswith("/"): 17:24:06 url = to_str(_encode_target(url)) 17:24:06 else: 17:24:06 url = to_str(parsed_url.url) 17:24:06 17:24:06 conn = None 17:24:06 17:24:06 # Track whether `conn` needs to be released before 17:24:06 # returning/raising/recursing. Update this variable if necessary, and 17:24:06 # leave `release_conn` constant throughout the function. That way, if 17:24:06 # the function recurses, the original value of `release_conn` will be 17:24:06 # passed down into the recursive call, and its value will be respected. 17:24:06 # 17:24:06 # See issue #651 [1] for details. 17:24:06 # 17:24:06 # [1] 17:24:06 release_this_conn = release_conn 17:24:06 17:24:06 http_tunnel_required = connection_requires_http_tunnel( 17:24:06 self.proxy, self.proxy_config, destination_scheme 17:24:06 ) 17:24:06 17:24:06 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 17:24:06 # have to copy the headers dict so we can safely change it without those 17:24:06 # changes being reflected in anyone else's copy. 17:24:06 if not http_tunnel_required: 17:24:06 headers = headers.copy() # type: ignore[attr-defined] 17:24:06 headers.update(self.proxy_headers) # type: ignore[union-attr] 17:24:06 17:24:06 # Must keep the exception bound to a separate variable or else Python 3 17:24:06 # complains about UnboundLocalError. 17:24:06 err = None 17:24:06 17:24:06 # Keep track of whether we cleanly exited the except block. This 17:24:06 # ensures we do proper cleanup in finally. 17:24:06 clean_exit = False 17:24:06 17:24:06 # Rewind body position, if needed. Record current position 17:24:06 # for future rewinds in the event of a redirect/retry. 17:24:06 body_pos = set_file_position(body, body_pos) 17:24:06 17:24:06 try: 17:24:06 # Request a connection from the queue. 
17:24:06 timeout_obj = self._get_timeout(timeout) 17:24:06 conn = self._get_conn(timeout=pool_timeout) 17:24:06 17:24:06 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 17:24:06 17:24:06 # Is this a closed/new connection that requires CONNECT tunnelling? 17:24:06 if self.proxy is not None and http_tunnel_required and conn.is_closed: 17:24:06 try: 17:24:06 self._prepare_proxy(conn) 17:24:06 except (BaseSSLError, OSError, SocketTimeout) as e: 17:24:06 self._raise_timeout( 17:24:06 err=e, url=self.proxy.url, timeout_value=conn.timeout 17:24:06 ) 17:24:06 raise 17:24:06 17:24:06 # If we're going to release the connection in ``finally:``, then 17:24:06 # the response doesn't need to know about the connection. Otherwise 17:24:06 # it will also try to release it and we'll have a double-release 17:24:06 # mess. 17:24:06 response_conn = conn if not release_conn else None 17:24:06 17:24:06 # Make the request on the HTTPConnection object 17:24:06 > response = self._make_request( 17:24:06 conn, 17:24:06 method, 17:24:06 url, 17:24:06 timeout=timeout_obj, 17:24:06 body=body, 17:24:06 headers=headers, 17:24:06 chunked=chunked, 17:24:06 retries=retries, 17:24:06 response_conn=response_conn, 17:24:06 preload_content=preload_content, 17:24:06 decode_content=decode_content, 17:24:06 **response_kw, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 17:24:06 conn.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 17:24:06 self.endheaders() 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 17:24:06 self._send_output(message_body, encode_chunked=encode_chunked) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 17:24:06 self.send(msg) 17:24:06 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 17:24:06 self.connect() 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 17:24:06 self.sock = self._new_conn() 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 17:24:06 def _new_conn(self) -> socket.socket: 17:24:06 """Establish a socket connection and set nodelay settings on it. 17:24:06 17:24:06 :return: New socket connection. 17:24:06 """ 17:24:06 try: 17:24:06 sock = connection.create_connection( 17:24:06 (self._dns_host, self.port), 17:24:06 self.timeout, 17:24:06 source_address=self.source_address, 17:24:06 socket_options=self.socket_options, 17:24:06 ) 17:24:06 except socket.gaierror as e: 17:24:06 raise NameResolutionError(self.host, self, e) from e 17:24:06 except SocketTimeout as e: 17:24:06 raise ConnectTimeoutError( 17:24:06 self, 17:24:06 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 17:24:06 ) from e 17:24:06 17:24:06 except OSError as e: 17:24:06 > raise NewConnectionError( 17:24:06 self, f"Failed to establish a new connection: {e}" 17:24:06 ) from e 17:24:06 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 17:24:06 17:24:06 The above exception was the direct cause of the following exception: 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 
17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 > resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 17:24:06 retries = retries.increment( 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 17:24:06 method = 'GET' 17:24:06 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 17:24:06 response = None 17:24:06 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 17:24:06 _pool = 17:24:06 _stacktrace = 17:24:06 17:24:06 def increment( 17:24:06 self, 17:24:06 method: str | None = None, 17:24:06 url: str | None = None, 17:24:06 response: BaseHTTPResponse | None = None, 17:24:06 error: Exception | None = None, 17:24:06 _pool: ConnectionPool | None = None, 17:24:06 _stacktrace: TracebackType | None = None, 17:24:06 ) -> Self: 17:24:06 """Return a new Retry object with incremented retry counters. 17:24:06 17:24:06 :param response: A response object, or None, if the server did not 17:24:06 return a response. 17:24:06 :type response: :class:`~urllib3.response.BaseHTTPResponse` 17:24:06 :param Exception error: An error encountered during the request, or 17:24:06 None if the response was received successfully. 17:24:06 17:24:06 :return: A new ``Retry`` object. 17:24:06 """ 17:24:06 if self.total is False and error: 17:24:06 # Disabled, indicate to re-raise the error. 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 17:24:06 total = self.total 17:24:06 if total is not None: 17:24:06 total -= 1 17:24:06 17:24:06 connect = self.connect 17:24:06 read = self.read 17:24:06 redirect = self.redirect 17:24:06 status_count = self.status 17:24:06 other = self.other 17:24:06 cause = "unknown" 17:24:06 status = None 17:24:06 redirect_location = None 17:24:06 17:24:06 if error and self._is_connection_error(error): 17:24:06 # Connect retry? 17:24:06 if connect is False: 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif connect is not None: 17:24:06 connect -= 1 17:24:06 17:24:06 elif error and self._is_read_error(error): 17:24:06 # Read retry? 17:24:06 if read is False or method is None or not self._is_method_retryable(method): 17:24:06 raise reraise(type(error), error, _stacktrace) 17:24:06 elif read is not None: 17:24:06 read -= 1 17:24:06 17:24:06 elif error: 17:24:06 # Other retry? 17:24:06 if other is not None: 17:24:06 other -= 1 17:24:06 17:24:06 elif response and response.get_redirect_location(): 17:24:06 # Redirect retry? 
17:24:06 if redirect is not None: 17:24:06 redirect -= 1 17:24:06 cause = "too many redirects" 17:24:06 response_redirect_location = response.get_redirect_location() 17:24:06 if response_redirect_location: 17:24:06 redirect_location = response_redirect_location 17:24:06 status = response.status 17:24:06 17:24:06 else: 17:24:06 # Incrementing because of a server error like a 500 in 17:24:06 # status_forcelist and the given method is in the allowed_methods 17:24:06 cause = ResponseError.GENERIC_ERROR 17:24:06 if response and response.status: 17:24:06 if status_count is not None: 17:24:06 status_count -= 1 17:24:06 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 17:24:06 status = response.status 17:24:06 17:24:06 history = self.history + ( 17:24:06 RequestHistory(method, url, error, status, redirect_location), 17:24:06 ) 17:24:06 17:24:06 new_retry = self.new( 17:24:06 total=total, 17:24:06 connect=connect, 17:24:06 read=read, 17:24:06 redirect=redirect, 17:24:06 status=status_count, 17:24:06 other=other, 17:24:06 history=history, 17:24:06 ) 17:24:06 17:24:06 if new_retry.is_exhausted(): 17:24:06 reason = error or ResponseError(cause) 17:24:06 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 17:24:06 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 17:24:06 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 17:24:06 17:24:06 During handling of the above exception, another exception occurred: 17:24:06 17:24:06 self = 17:24:06 17:24:06 def test_21_rdm_device_not_connected(self): 17:24:06 > response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None) 17:24:06 17:24:06 transportpce_tests/1.2.1/test01_portmapping.py:223: 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr 17:24:06 response = get_request(target_url) 17:24:06 transportpce_tests/common/test_utils.py:116: in get_request 17:24:06 return requests.request( 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 17:24:06 return session.request(method=method, url=url, **kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 17:24:06 resp = self.send(prep, **send_kwargs) 17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 17:24:06 r = adapter.send(request, **kwargs) 17:24:06 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 17:24:06 17:24:06 self = 17:24:06 request = , stream = False 17:24:06 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 17:24:06 proxies = OrderedDict() 17:24:06 17:24:06 def send( 17:24:06 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 17:24:06 ): 17:24:06 """Sends PreparedRequest object. Returns Response object. 17:24:06 17:24:06 :param request: The :class:`PreparedRequest ` being sent. 17:24:06 :param stream: (optional) Whether to stream the request content. 17:24:06 :param timeout: (optional) How long to wait for the server to send 17:24:06 data before giving up, as a float, or a :ref:`(connect timeout, 17:24:06 read timeout) ` tuple. 
17:24:06 :type timeout: float or tuple or urllib3 Timeout object 17:24:06 :param verify: (optional) Either a boolean, in which case it controls whether 17:24:06 we verify the server's TLS certificate, or a string, in which case it 17:24:06 must be a path to a CA bundle to use 17:24:06 :param cert: (optional) Any user-provided SSL certificate to be trusted. 17:24:06 :param proxies: (optional) The proxies dictionary to apply to the request. 17:24:06 :rtype: requests.Response 17:24:06 """ 17:24:06 17:24:06 try: 17:24:06 conn = self.get_connection_with_tls_context( 17:24:06 request, verify, proxies=proxies, cert=cert 17:24:06 ) 17:24:06 except LocationValueError as e: 17:24:06 raise InvalidURL(e, request=request) 17:24:06 17:24:06 self.cert_verify(conn, request.url, verify, cert) 17:24:06 url = self.request_url(request, proxies) 17:24:06 self.add_headers( 17:24:06 request, 17:24:06 stream=stream, 17:24:06 timeout=timeout, 17:24:06 verify=verify, 17:24:06 cert=cert, 17:24:06 proxies=proxies, 17:24:06 ) 17:24:06 17:24:06 chunked = not (request.body is None or "Content-Length" in request.headers) 17:24:06 17:24:06 if isinstance(timeout, tuple): 17:24:06 try: 17:24:06 connect, read = timeout 17:24:06 timeout = TimeoutSauce(connect=connect, read=read) 17:24:06 except ValueError: 17:24:06 raise ValueError( 17:24:06 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 17:24:06 f"or a single float to set both timeouts to the same value." 17:24:06 ) 17:24:06 elif isinstance(timeout, TimeoutSauce): 17:24:06 pass 17:24:06 else: 17:24:06 timeout = TimeoutSauce(connect=timeout, read=timeout) 17:24:06 17:24:06 try: 17:24:06 resp = conn.urlopen( 17:24:06 method=request.method, 17:24:06 url=url, 17:24:06 body=request.body, 17:24:06 headers=request.headers, 17:24:06 redirect=False, 17:24:06 assert_same_host=False, 17:24:06 preload_content=False, 17:24:06 decode_content=False, 17:24:06 retries=self.max_retries, 17:24:06 timeout=timeout, 17:24:06 chunked=chunked, 17:24:06 ) 17:24:06 17:24:06 except (ProtocolError, OSError) as err: 17:24:06 raise ConnectionError(err, request=request) 17:24:06 17:24:06 except MaxRetryError as e: 17:24:06 if isinstance(e.reason, ConnectTimeoutError): 17:24:06 # TODO: Remove this in 3.0.0: see #2811 17:24:06 if not isinstance(e.reason, NewConnectionError): 17:24:06 raise ConnectTimeout(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, ResponseError): 17:24:06 raise RetryError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _ProxyError): 17:24:06 raise ProxyError(e, request=request) 17:24:06 17:24:06 if isinstance(e.reason, _SSLError): 17:24:06 # This branch is for urllib3 v1.22 and later. 
17:24:06 raise SSLError(e, request=request)
17:24:06
17:24:06 > raise ConnectionError(e, request=request)
17:24:06 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
17:24:06
17:24:06 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
17:24:06 ----------------------------- Captured stdout call -----------------------------
17:24:06 execution of test_21_rdm_device_not_connected
17:24:06 --------------------------- Captured stdout teardown ---------------------------
17:24:06 all processes killed
17:24:06 =========================== short test summary info ============================
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_04_rdm_portmapping_DEG1_TTP_TXRX
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_05_rdm_portmapping_SRG1_PP7_TXRX
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_06_rdm_portmapping_SRG3_PP1_TXRX
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_07_xpdr_device_connection
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_08_xpdr_device_connected
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_09_xpdr_portmapping_info
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_10_xpdr_portmapping_NETWORK1
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_11_xpdr_portmapping_NETWORK2
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_12_xpdr_portmapping_CLIENT1
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_13_xpdr_portmapping_CLIENT2
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_14_xpdr_portmapping_CLIENT3
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_15_xpdr_portmapping_CLIENT4
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_16_xpdr_device_disconnection
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_17_xpdr_device_disconnected
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_18_xpdr_device_not_connected
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_19_rdm_device_disconnection
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_20_rdm_device_disconnected
17:24:06 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_21_rdm_device_not_connected
17:24:06 18 failed, 3 passed in 257.00s (0:04:16)
17:24:06 tests121: exit 1 (257.40 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1 pid=36167
17:24:20 ............
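Read bottom-up, the tracebacks above show the whole failure path: test_utils.get_request() (called from get_portmapping_node_attr()) issues a plain requests call against the local RESTCONF endpoint, urllib3 cannot open a TCP socket to localhost:8182 ([Errno 111] Connection refused), the Retry(total=0) object seen in the log immediately turns that NewConnectionError into MaxRetryError, and the requests HTTPAdapter re-raises it as requests.exceptions.ConnectionError, which the test does not catch. The following minimal sketch reproduces the same failure path outside the test suite; it only assumes the URL, port and 10-second timeout visible in the log, and leaves out whatever authentication headers test_utils.get_request normally adds:

    # Hypothetical stand-alone reproducer, not part of transportpce_tests.
    import requests

    URL = ("http://localhost:8182/rests/data/"
           "transportpce-portmapping:network/nodes=ROADMA01/node-info")

    try:
        # A single float timeout maps to Timeout(connect=10, read=10), as in the log.
        response = requests.get(URL, timeout=10)
        print(response.status_code, response.text[:200])
    except requests.exceptions.ConnectionError as exc:
        # With nothing listening on port 8182 this is exactly the chain above:
        # OSError 111 -> NewConnectionError -> MaxRetryError -> ConnectionError.
        print("controller not reachable:", exc)

Passing timeout=(10, 10) would behave the same way: as the adapter code above shows, a (connect, read) tuple is unpacked into the same urllib3 Timeout object. If the other FAILED entries in the summary share this connection-refused signature, the root cause is the controller under test no longer listening on port 8182 rather than the individual port-mapping assertions.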
[100%] 17:24:33 12 passed in 42.14s 17:24:33 pytest -q transportpce_tests/7.1/test02_otn_renderer.py 17:24:58 .............................................................. [100%] 17:27:08 62 passed in 154.66s (0:02:34) 17:27:08 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 17:27:38 ................................................ [100%] 17:29:22 48 passed in 133.78s (0:02:13) 17:29:22 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 17:29:47 ...................... [100%] 17:30:34 22 passed in 72.49s (0:01:12) 17:30:35 tests121: FAIL ✖ in 4 minutes 25.34 seconds 17:30:35 tests71: OK ✔ in 6 minutes 50.33 seconds 17:30:35 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 17:30:40 tests221: freeze> python -m pip freeze --all 17:30:41 tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 17:30:41 tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 2.2.1 17:30:41 using environment variables from ./karaf221.env 17:30:41 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 17:31:17 ................................... [100%] 17:31:56 35 passed in 75.35s (0:01:15) 17:31:56 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 17:32:27 ...... [100%] 17:32:40 6 passed in 44.12s 17:32:40 pytest -q transportpce_tests/2.2.1/test03_topology.py 17:33:22 ............................................ [100%] 17:34:56 44 passed in 135.84s (0:02:15) 17:34:57 pytest -q transportpce_tests/2.2.1/test04_otn_topology.py 17:35:31 ............ [100%] 17:35:55 12 passed in 58.47s 17:35:55 pytest -q transportpce_tests/2.2.1/test05_flex_grid.py 17:36:20 ................ [100%] 17:37:49 16 passed in 113.35s (0:01:53) 17:37:49 pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py 17:38:17 ............................... [100%] 17:38:24 31 passed in 34.62s 17:38:24 pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py 17:38:58 .......................... [100%] 17:39:54 26 passed in 89.97s (0:01:29) 17:39:54 pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py 17:40:30 ...................... [100%] 17:41:34 22 passed in 99.87s (0:01:39) 17:41:34 pytest -q transportpce_tests/2.2.1/test09_olm.py 17:42:14 ........................................ [100%] 17:44:36 40 passed in 181.97s (0:03:01) 17:44:36 pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py 17:45:18 ........................................................................ [ 74%] 17:50:55 ......................... [100%] 17:52:47 97 passed in 490.34s (0:08:10) 17:52:47 pytest -q transportpce_tests/2.2.1/test12_end2end.py 17:53:25 ...................................................... [100%] 18:00:12 54 passed in 445.27s (0:07:25) 18:00:12 pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py 18:01:06 ........................................................................ [ 71%] 18:06:14 ............................. 
[100%] 18:08:24 101 passed in 491.37s (0:08:11) 18:08:25 pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py 18:09:17 ........................................................................ [ 67%] 18:15:03 ................................... [100%] 18:18:24 107 passed in 599.95s (0:09:59) 18:18:24 tests221: OK ✔ in 47 minutes 49.4 seconds 18:18:24 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 18:18:30 tests_hybrid: freeze> python -m pip freeze --all 18:18:30 tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 18:18:30 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh hybrid 18:18:30 using environment variables from ./karaf121.env 18:18:30 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 18:19:16 ................................................... [100%] 18:21:02 51 passed in 151.80s (0:02:31) 18:21:02 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 18:21:47 ........................................................................ [ 66%] 18:26:07 ..................................... [100%] 18:28:12 109 passed in 429.53s (0:07:09) 18:28:12 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 18:28:59 ..................................................... 
[100%]
18:32:30 53 passed in 257.85s (0:04:17)
18:32:30 tests_hybrid: OK ✔ in 14 minutes 6.1 seconds
18:32:30 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt
18:32:36 buildlighty: freeze> python -m pip freeze --all
18:32:36 buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0
18:32:36 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh
18:32:36 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED
18:32:48 [ERROR] COMPILATION ERROR :
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol
18:32:48 symbol: class YangModuleInfo
18:32:48 location: package org.opendaylight.yangtools.binding
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol
18:32:48 symbol: class YangModuleInfo
18:32:48 location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol
18:32:48 symbol: class YangModuleInfo
18:32:48 location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol
18:32:48 symbol: class YangModuleInfo
18:32:48 location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:32:48 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure:
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol
18:32:48 [ERROR] symbol: class YangModuleInfo
18:32:48 [ERROR] location: package org.opendaylight.yangtools.binding
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol
18:32:48 [ERROR] symbol: class YangModuleInfo
18:32:48 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol
18:32:48 [ERROR] symbol: class YangModuleInfo
18:32:48 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:32:48 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol
18:32:48 [ERROR] symbol: class YangModuleInfo
18:32:48 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:32:48 [ERROR] -> [Help 1]
18:32:48 [ERROR]
18:32:48 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
18:32:48 [ERROR] Re-run Maven using the -X switch to enable full debug logging. 18:32:48 [ERROR] 18:32:48 [ERROR] For more information about the errors and possible solutions, please read the following articles: 18:32:48 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException 18:32:48 unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP. 18:32:48 buildlighty: exit 9 (12.10 seconds) /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh pid=59863 18:32:48 buildlighty: command failed but is marked ignore outcome so handling it as success 18:32:48 buildcontroller: OK (109.98=setup[8.58]+cmd[101.40] seconds) 18:32:48 testsPCE: OK (334.53=setup[88.33]+cmd[246.20] seconds) 18:32:48 sims: OK (11.37=setup[8.38]+cmd[2.99] seconds) 18:32:48 build_karaf_tests121: OK (58.54=setup[8.43]+cmd[50.11] seconds) 18:32:48 tests121: FAIL code 1 (265.34=setup[7.94]+cmd[257.40] seconds) 18:32:48 build_karaf_tests221: OK (58.55=setup[8.25]+cmd[50.30] seconds) 18:32:48 tests_tapi: FAIL code 1 (519.04=setup[10.91]+cmd[508.13] seconds) 18:32:48 tests221: OK (2869.40=setup[6.01]+cmd[2863.39] seconds) 18:32:48 build_karaf_tests71: OK (62.18=setup[15.44]+cmd[46.74] seconds) 18:32:48 tests71: OK (410.33=setup[6.34]+cmd[403.99] seconds) 18:32:48 build_karaf_tests_hybrid: OK (51.75=setup[10.89]+cmd[40.86] seconds) 18:32:48 tests_hybrid: OK (846.10=setup[6.19]+cmd[839.91] seconds) 18:32:48 buildlighty: OK (17.86=setup[5.76]+cmd[12.10] seconds) 18:32:48 docs: OK (34.58=setup[32.15]+cmd[2.44] seconds) 18:32:48 docs-linkcheck: OK (35.92=setup[32.47]+cmd[3.45] seconds) 18:32:48 checkbashisms: OK (2.90=setup[1.89]+cmd[0.02,0.07,0.91] seconds) 18:32:48 pre-commit: OK (57.48=setup[3.44]+cmd[0.00,0.01,35.96,18.08] seconds) 18:32:48 pylint: OK (26.98=setup[5.83]+cmd[21.15] seconds) 18:32:48 evaluation failed :( (4831.35 seconds) 18:32:48 + tox_status=255 18:32:48 + echo '---> Completed tox runs' 18:32:48 ---> Completed tox runs 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/build_karaf_tests121/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=build_karaf_tests121 18:32:48 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests121 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/build_karaf_tests221/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=build_karaf_tests221 18:32:48 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests221 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/build_karaf_tests71/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=build_karaf_tests71 18:32:48 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests71 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/build_karaf_tests_hybrid/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=build_karaf_tests_hybrid 18:32:48 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests_hybrid 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/buildcontroller/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=buildcontroller 18:32:48 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/buildcontroller 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/buildlighty/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=buildlighty 18:32:48 + cp -r 
.tox/buildlighty/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/buildlighty 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/checkbashisms/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=checkbashisms 18:32:48 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/checkbashisms 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/docs-linkcheck/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=docs-linkcheck 18:32:48 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs-linkcheck 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/docs/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=docs 18:32:48 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/pre-commit/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=pre-commit 18:32:48 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pre-commit 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/pylint/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=pylint 18:32:48 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pylint 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/sims/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=sims 18:32:48 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/sims 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/tests121/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=tests121 18:32:48 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests121 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/tests221/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=tests221 18:32:48 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests221 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/tests71/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=tests71 18:32:48 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests71 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/testsPCE/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=testsPCE 18:32:48 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/testsPCE 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/tests_hybrid/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=tests_hybrid 18:32:48 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_hybrid 18:32:48 + for i in .tox/*/log 18:32:48 ++ echo .tox/tests_tapi/log 18:32:48 ++ awk -F/ '{print $2}' 18:32:48 + tox_env=tests_tapi 18:32:48 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_tapi 18:32:48 + DOC_DIR=docs/_build/html 18:32:48 + [[ -d docs/_build/html ]] 18:32:48 + echo '---> Archiving generated docs' 18:32:48 ---> Archiving generated docs 18:32:48 + mv docs/_build/html /w/workspace/transportpce-tox-verify-scandium/archives/docs 18:32:48 + echo '---> tox-run.sh ends' 18:32:48 ---> tox-run.sh ends 18:32:48 + test 255 -eq 0 18:32:48 + exit 255 18:32:48 ++ '[' 1 = 1 ']' 18:32:48 ++ '[' -x /usr/bin/clear_console ']' 18:32:48 ++ /usr/bin/clear_console -q 18:32:48 Build step 'Execute shell' marked build as failure 18:32:48 $ ssh-agent -k 18:32:48 unset SSH_AUTH_SOCK; 18:32:48 unset SSH_AGENT_PID; 18:32:48 echo Agent pid 11713 killed; 18:32:48 
[ssh-agent] Stopped. 18:32:48 [PostBuildScript] - [INFO] Executing post build scripts. 18:32:48 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins16729083980309216032.sh 18:32:48 ---> sysstat.sh 18:32:49 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins15454446166778155474.sh 18:32:49 ---> package-listing.sh 18:32:49 ++ facter osfamily 18:32:49 ++ tr '[:upper:]' '[:lower:]' 18:32:49 + OS_FAMILY=debian 18:32:49 + workspace=/w/workspace/transportpce-tox-verify-scandium 18:32:49 + START_PACKAGES=/tmp/packages_start.txt 18:32:49 + END_PACKAGES=/tmp/packages_end.txt 18:32:49 + DIFF_PACKAGES=/tmp/packages_diff.txt 18:32:49 + PACKAGES=/tmp/packages_start.txt 18:32:49 + '[' /w/workspace/transportpce-tox-verify-scandium ']' 18:32:49 + PACKAGES=/tmp/packages_end.txt 18:32:49 + case "${OS_FAMILY}" in 18:32:49 + grep '^ii' 18:32:49 + dpkg -l 18:32:49 + '[' -f /tmp/packages_start.txt ']' 18:32:49 + '[' -f /tmp/packages_end.txt ']' 18:32:49 + diff /tmp/packages_start.txt /tmp/packages_end.txt 18:32:49 + '[' /w/workspace/transportpce-tox-verify-scandium ']' 18:32:49 + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/ 18:32:49 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-scandium/archives/ 18:32:49 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins4957497636284057983.sh 18:32:49 ---> capture-instance-metadata.sh 18:32:49 Setup pyenv: 18:32:49 system 18:32:49 3.8.13 18:32:49 3.9.13 18:32:49 3.10.13 18:32:49 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 18:32:49 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0SmD from file:/tmp/.os_lf_venv 18:32:50 lf-activate-venv(): INFO: Installing: lftools 18:32:59 lf-activate-venv(): INFO: Adding /tmp/venv-0SmD/bin to PATH 18:32:59 INFO: Running in OpenStack, capturing instance metadata 18:33:00 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins4413588536230612010.sh 18:33:00 provisioning config files... 18:33:00 Could not find credentials [logs] for transportpce-tox-verify-scandium #25 18:33:00 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-scandium@tmp/config6812416909388052675tmp 18:33:00 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] 18:33:00 Run condition [Regular expression match] enabling perform for step [Provide Configuration files] 18:33:00 provisioning config files... 18:33:00 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials 18:33:00 [EnvInject] - Injecting environment variables from a build step. 18:33:00 [EnvInject] - Injecting as environment variables the properties content 18:33:00 SERVER_ID=logs 18:33:00 18:33:00 [EnvInject] - Variables injected successfully. 18:33:00 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins7056931895144849657.sh 18:33:01 ---> create-netrc.sh 18:33:01 WARN: Log server credential not found. 
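Before these post-build steps, tox-run.sh walks every .tox/*/log directory and copies it under archives/tox/<environment> (the for i in .tox/*/log loop with awk -F/ '{print $2}' traced right after the tox summary). A rough Python equivalent of that collection step is sketched below; the paths are the ones visible in the trace, and the loop body is an illustration rather than the script's actual code:

    # Hypothetical sketch of the per-environment log collection seen in the trace.
    # Assumes the current directory is the workspace, as in the shell loop.
    import glob
    import os
    import shutil

    WORKSPACE = "/w/workspace/transportpce-tox-verify-scandium"
    ARCHIVE_DIR = os.path.join(WORKSPACE, "archives", "tox")

    for log_dir in glob.glob(".tox/*/log"):
        tox_env = log_dir.split("/")[1]              # same field as: awk -F/ '{print $2}'
        destination = os.path.join(ARCHIVE_DIR, tox_env)
        # cp -r .tox/<env>/log <workspace>/archives/tox/<env>
        shutil.copytree(log_dir, destination, dirs_exist_ok=True)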
18:33:01 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins16773170187532762299.sh 18:33:01 ---> python-tools-install.sh 18:33:01 Setup pyenv: 18:33:01 system 18:33:01 3.8.13 18:33:01 3.9.13 18:33:01 3.10.13 18:33:01 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 18:33:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0SmD from file:/tmp/.os_lf_venv 18:33:02 lf-activate-venv(): INFO: Installing: lftools 18:33:13 lf-activate-venv(): INFO: Adding /tmp/venv-0SmD/bin to PATH 18:33:13 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins9810463650950251251.sh 18:33:13 ---> sudo-logs.sh 18:33:13 Archiving 'sudo' log.. 18:33:14 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins16704414501903658083.sh 18:33:14 ---> job-cost.sh 18:33:14 Setup pyenv: 18:33:14 system 18:33:14 3.8.13 18:33:14 3.9.13 18:33:14 3.10.13 18:33:14 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 18:33:14 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0SmD from file:/tmp/.os_lf_venv 18:33:15 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 18:33:19 lf-activate-venv(): INFO: Adding /tmp/venv-0SmD/bin to PATH 18:33:19 INFO: No Stack... 18:33:19 INFO: Retrieving Pricing Info for: v3-standard-4 18:33:20 INFO: Archiving Costs 18:33:20 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins5654643790673463010.sh 18:33:20 ---> logs-deploy.sh 18:33:20 Setup pyenv: 18:33:20 system 18:33:20 3.8.13 18:33:20 3.9.13 18:33:20 3.10.13 18:33:20 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 18:33:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0SmD from file:/tmp/.os_lf_venv 18:33:21 lf-activate-venv(): INFO: Installing: lftools 18:33:29 lf-activate-venv(): INFO: Adding /tmp/venv-0SmD/bin to PATH 18:33:29 WARNING: Nexus logging server not set 18:33:29 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-scandium/25/ 18:33:29 INFO: archiving logs to S3 18:33:31 ---> uname -a: 18:33:31 Linux prd-ubuntu2004-docker-4c-16g-2598 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 18:33:31 18:33:31 18:33:31 ---> lscpu: 18:33:31 Architecture: x86_64 18:33:31 CPU op-mode(s): 32-bit, 64-bit 18:33:31 Byte Order: Little Endian 18:33:31 Address sizes: 40 bits physical, 48 bits virtual 18:33:31 CPU(s): 4 18:33:31 On-line CPU(s) list: 0-3 18:33:31 Thread(s) per core: 1 18:33:31 Core(s) per socket: 1 18:33:31 Socket(s): 4 18:33:31 NUMA node(s): 1 18:33:31 Vendor ID: AuthenticAMD 18:33:31 CPU family: 23 18:33:31 Model: 49 18:33:31 Model name: AMD EPYC-Rome Processor 18:33:31 Stepping: 0 18:33:31 CPU MHz: 2800.000 18:33:31 BogoMIPS: 5600.00 18:33:31 Virtualization: AMD-V 18:33:31 Hypervisor vendor: KVM 18:33:31 Virtualization type: full 18:33:31 L1d cache: 128 KiB 18:33:31 L1i cache: 128 KiB 18:33:31 L2 cache: 2 MiB 18:33:31 L3 cache: 64 MiB 18:33:31 NUMA node0 CPU(s): 0-3 18:33:31 Vulnerability Gather data sampling: Not affected 18:33:31 Vulnerability Itlb multihit: Not affected 18:33:31 Vulnerability L1tf: Not affected 18:33:31 Vulnerability Mds: Not affected 18:33:31 Vulnerability Meltdown: Not affected 18:33:31 Vulnerability Mmio stale data: Not affected 18:33:31 Vulnerability Retbleed: Vulnerable 18:33:31 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp 18:33:31 Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 18:33:31 
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected 18:33:31 Vulnerability Srbds: Not affected 18:33:31 Vulnerability Tsx async abort: Not affected 18:33:31 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities 18:33:31 18:33:31 18:33:31 ---> nproc: 18:33:31 4 18:33:31 18:33:31 18:33:31 ---> df -h: 18:33:31 Filesystem Size Used Avail Use% Mounted on 18:33:31 udev 7.8G 0 7.8G 0% /dev 18:33:31 tmpfs 1.6G 1.1M 1.6G 1% /run 18:33:31 /dev/vda1 78G 17G 62G 21% / 18:33:31 tmpfs 7.9G 0 7.9G 0% /dev/shm 18:33:31 tmpfs 5.0M 0 5.0M 0% /run/lock 18:33:31 tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup 18:33:31 /dev/loop1 68M 68M 0 100% /snap/lxd/22753 18:33:31 /dev/loop2 44M 44M 0 100% /snap/snapd/15177 18:33:31 /dev/loop0 62M 62M 0 100% /snap/core20/1405 18:33:31 /dev/vda15 105M 6.1M 99M 6% /boot/efi 18:33:31 tmpfs 1.6G 0 1.6G 0% /run/user/1001 18:33:31 /dev/loop3 64M 64M 0 100% /snap/core20/2434 18:33:31 /dev/loop4 92M 92M 0 100% /snap/lxd/29619 18:33:31 18:33:31 18:33:31 ---> free -m: 18:33:31 total used free shared buff/cache available 18:33:31 Mem: 15997 666 5727 1 9603 14992 18:33:31 Swap: 1023 0 1023 18:33:31 18:33:31 18:33:31 ---> ip addr: 18:33:31 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 18:33:31 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 18:33:31 inet 127.0.0.1/8 scope host lo 18:33:31 valid_lft forever preferred_lft forever 18:33:31 inet6 ::1/128 scope host 18:33:31 valid_lft forever preferred_lft forever 18:33:31 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 18:33:31 link/ether fa:16:3e:ec:7a:d4 brd ff:ff:ff:ff:ff:ff 18:33:31 inet 10.30.170.194/23 brd 10.30.171.255 scope global dynamic ens3 18:33:31 valid_lft 81367sec preferred_lft 81367sec 18:33:31 inet6 fe80::f816:3eff:feec:7ad4/64 scope link 18:33:31 valid_lft forever preferred_lft forever 18:33:31 3: docker0: mtu 1458 qdisc noqueue state DOWN group default 18:33:31 link/ether 02:42:f3:b8:1d:80 brd ff:ff:ff:ff:ff:ff 18:33:31 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 18:33:31 valid_lft forever preferred_lft forever 18:33:31 18:33:31 18:33:31 ---> sar -b -r -n DEV: 18:33:31 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-2598) 10/29/24 _x86_64_ (4 CPU) 18:33:31 18:33:31 17:09:39 LINUX RESTART (4 CPU) 18:33:31 18:33:31 17:10:02 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 18:33:31 17:11:01 255.67 126.94 128.72 0.00 10509.27 60583.36 0.00 18:33:31 17:12:01 97.32 35.46 61.86 0.00 1243.39 24938.13 0.00 18:33:31 17:13:01 181.65 36.49 145.16 0.00 2514.80 39645.08 0.00 18:33:31 17:14:01 118.16 6.43 111.73 0.00 286.09 53408.17 0.00 18:33:31 17:15:01 145.23 3.43 141.80 0.00 2102.77 116234.99 0.00 18:33:31 17:16:01 157.16 13.63 143.53 0.00 2882.72 85232.19 0.00 18:33:31 17:17:01 152.24 1.17 151.07 0.00 50.79 34512.51 0.00 18:33:31 17:18:01 94.92 2.42 92.50 0.00 169.14 1631.72 0.00 18:33:31 17:19:01 67.09 0.25 66.84 0.00 37.06 1067.96 0.00 
18:33:31 17:20:01 124.00 0.23 123.77 0.00 13.33 2964.13 0.00 18:33:31 17:21:01 105.25 0.02 105.23 0.00 0.27 9288.05 0.00 18:33:31 17:22:01 3.97 0.02 3.95 0.00 0.13 97.45 0.00 18:33:31 17:23:01 1.73 0.00 1.73 0.00 0.00 36.26 0.00 18:33:31 17:24:01 8.63 0.10 8.53 0.00 3.33 934.62 0.00 18:33:31 17:25:01 169.91 0.87 169.04 0.00 16.53 10237.49 0.00 18:33:31 17:26:01 2.97 0.00 2.97 0.00 0.00 57.20 0.00 18:33:31 17:27:01 1.43 0.00 1.43 0.00 0.00 28.66 0.00 18:33:31 17:28:01 70.00 0.00 70.00 0.00 0.00 1042.09 0.00 18:33:31 17:29:01 2.12 0.00 2.12 0.00 0.00 40.52 0.00 18:33:31 17:30:01 75.72 0.00 75.72 0.00 0.00 1307.92 0.00 18:33:31 17:31:01 30.53 0.02 30.52 0.00 0.13 1249.87 0.00 18:33:31 17:32:01 67.96 0.00 67.96 0.00 0.00 2419.60 0.00 18:33:31 17:33:01 77.72 0.00 77.72 0.00 0.00 1167.67 0.00 18:33:31 17:34:01 46.13 0.02 46.11 0.00 0.13 674.95 0.00 18:33:31 17:35:01 15.23 0.00 15.23 0.00 0.00 251.56 0.00 18:33:31 17:36:01 74.89 0.00 74.89 0.00 0.00 1105.28 0.00 18:33:31 17:37:01 59.12 0.00 59.12 0.00 0.00 854.12 0.00 18:33:31 17:38:01 24.76 0.00 24.76 0.00 0.00 387.14 0.00 18:33:31 17:39:01 128.75 0.00 128.75 0.00 0.00 1865.96 0.00 18:33:31 17:40:02 15.68 0.00 15.68 0.00 0.00 261.29 0.00 18:33:31 17:41:01 47.97 0.00 47.97 0.00 0.00 704.02 0.00 18:33:31 17:42:01 16.81 0.00 16.81 0.00 0.00 278.49 0.00 18:33:31 17:43:01 55.57 0.00 55.57 0.00 0.00 811.86 0.00 18:33:31 17:44:01 3.52 0.00 3.52 0.00 0.00 70.12 0.00 18:33:31 17:45:01 17.08 0.00 17.08 0.00 0.00 295.02 0.00 18:33:31 17:46:01 57.59 0.00 57.59 0.00 0.00 829.20 0.00 18:33:31 17:47:01 1.63 0.00 1.63 0.00 0.00 32.39 0.00 18:33:31 17:48:01 2.30 0.00 2.30 0.00 0.00 42.39 0.00 18:33:31 17:49:01 2.02 0.00 2.02 0.00 0.00 37.06 0.00 18:33:31 17:50:01 1.52 0.00 1.52 0.00 0.00 33.59 0.00 18:33:31 17:51:01 2.45 0.00 2.45 0.00 0.00 54.52 0.00 18:33:31 17:52:01 1.62 0.00 1.62 0.00 0.00 31.19 0.00 18:33:31 17:53:01 16.68 0.00 16.68 0.00 0.00 285.37 0.00 18:33:31 17:54:01 51.96 0.00 51.96 0.00 0.00 764.94 0.00 18:33:31 17:55:01 2.17 0.00 2.17 0.00 0.00 46.93 0.00 18:33:31 17:56:01 4.05 0.00 4.05 0.00 0.00 86.65 0.00 18:33:31 17:57:01 1.58 0.00 1.58 0.00 0.00 36.39 0.00 18:33:31 17:58:01 1.85 0.00 1.85 0.00 0.00 42.79 0.00 18:33:31 17:59:01 2.53 0.00 2.53 0.00 0.00 60.66 0.00 18:33:31 18:00:01 1.62 0.00 1.62 0.00 0.00 55.06 0.00 18:33:31 18:01:01 18.03 0.00 18.03 0.00 0.00 312.48 0.00 18:33:31 18:02:01 55.41 0.00 55.41 0.00 0.00 797.87 0.00 18:33:31 18:03:01 1.72 0.00 1.72 0.00 0.00 43.86 0.00 18:33:31 18:04:01 3.70 0.00 3.70 0.00 0.00 72.39 0.00 18:33:31 18:05:01 2.10 0.00 2.10 0.00 0.00 38.53 0.00 18:33:31 18:06:01 3.07 0.00 3.07 0.00 0.00 56.79 0.00 18:33:31 18:07:01 1.93 0.00 1.93 0.00 0.00 37.59 0.00 18:33:31 18:08:01 2.72 0.00 2.72 0.00 0.00 58.12 0.00 18:33:31 18:09:01 25.04 0.00 25.04 0.00 0.00 411.06 0.00 18:33:31 18:10:01 57.39 0.00 57.39 0.00 0.00 824.26 0.00 18:33:31 18:11:01 2.22 0.00 2.22 0.00 0.00 52.12 0.00 18:33:31 18:12:01 3.52 0.00 3.52 0.00 0.00 66.12 0.00 18:33:31 18:13:01 2.30 0.00 2.30 0.00 0.00 47.59 0.00 18:33:31 18:14:01 2.83 0.00 2.83 0.00 0.00 45.33 0.00 18:33:31 18:15:01 2.48 0.00 2.48 0.00 0.00 42.13 0.00 18:33:31 18:16:01 3.08 0.00 3.08 0.00 0.00 55.86 0.00 18:33:31 18:17:01 2.23 0.00 2.23 0.00 0.00 34.92 0.00 18:33:31 18:18:01 4.23 0.00 4.23 0.00 0.00 73.72 0.00 18:33:31 18:19:01 55.24 0.02 55.22 0.00 0.13 3954.14 0.00 18:33:31 18:20:01 55.95 0.00 55.95 0.00 0.00 6175.27 0.00 18:33:31 18:21:01 3.07 0.00 3.07 0.00 0.00 61.46 0.00 18:33:31 18:22:01 81.40 0.00 81.40 0.00 0.00 1284.72 0.00 18:33:31 18:23:01 2.48 0.00 2.48 0.00 
0.00 56.26 0.00 18:33:31 18:24:01 2.90 0.00 2.90 0.00 0.00 47.98 0.00 18:33:31 18:25:01 1.92 0.00 1.92 0.00 0.00 39.99 0.00 18:33:31 18:26:01 3.05 0.00 3.05 0.00 0.00 49.19 0.00 18:33:31 18:27:01 2.27 0.00 2.27 0.00 0.00 55.46 0.00 18:33:31 18:28:01 2.45 0.00 2.45 0.00 0.00 46.26 0.00 18:33:31 18:29:01 71.94 0.00 71.94 0.00 0.00 1062.49 0.00 18:33:31 18:30:01 2.35 0.00 2.35 0.00 0.00 156.24 0.00 18:33:31 18:31:01 1.97 0.00 1.97 0.00 0.00 58.66 0.00 18:33:31 18:32:01 2.03 0.00 2.03 0.00 0.00 52.12 0.00 18:33:31 18:33:01 37.93 6.00 31.93 0.00 257.42 7134.01 0.00 18:33:31 Average: 38.38 2.79 35.59 0.00 240.03 5790.96 0.00 18:33:31 18:33:31 17:10:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 18:33:31 17:11:01 13547224 15417888 569812 3.48 54696 2023024 1315684 7.55 781348 1793648 80184 18:33:31 17:12:01 13252572 15391340 582300 3.55 73400 2257248 1306332 7.49 850508 1992020 124500 18:33:31 17:13:01 10838052 14527044 1428880 8.72 131688 3633544 2221164 12.74 1874416 3245344 990008 18:33:31 17:14:01 9591868 14144504 1811460 11.06 151704 4435168 2478860 14.22 2446996 3876744 647888 18:33:31 17:15:01 5423976 13021408 2927160 17.87 189196 7324052 3723500 21.36 4441464 5918124 226016 18:33:31 17:16:01 2954912 12306832 3635768 22.19 222140 8962120 5034912 28.89 5801232 6935980 919204 18:33:31 17:17:01 156684 9351020 6588700 40.22 223432 8809124 8040156 46.13 8740676 6785428 616 18:33:31 17:18:01 247008 8623208 7316372 44.66 226420 7997236 8369888 48.02 9409752 6030140 1008 18:33:31 17:19:01 6773968 15155352 788272 4.81 229528 7999220 1868884 10.72 2928632 6012488 884 18:33:31 17:20:01 3239248 11861740 4078956 24.90 239292 8224060 5484000 31.46 6248176 6207572 223064 18:33:31 17:21:01 806048 9434724 6505112 39.71 242424 8226704 7382184 42.35 8691436 6186020 988 18:33:31 17:22:01 750676 9380004 6559540 40.04 242472 8227344 7446204 42.72 8746228 6186192 516 18:33:31 17:23:01 742804 9372320 6567228 40.09 242500 8227504 7446204 42.72 8754548 6186300 164 18:33:31 17:24:01 3762696 12631868 3309200 20.20 248608 8449216 4232816 24.28 5553172 6368376 223612 18:33:31 17:25:01 4948864 13828728 2113272 12.90 252812 8455292 2930624 16.81 4416188 6326412 636 18:33:31 17:26:01 4932632 13812664 2129216 13.00 252840 8455428 2962608 17.00 4431888 6326240 120 18:33:31 17:27:01 4919120 13799296 2142688 13.08 252856 8455552 2962608 17.00 4444940 6326348 208 18:33:31 17:28:01 4182092 13064176 2877292 17.56 254500 8455732 3679660 21.11 5190140 6315672 396 18:33:31 17:29:01 4153500 13035840 2905608 17.74 254520 8455968 3727664 21.39 5217680 6315904 104 18:33:31 17:30:01 4950152 13834820 2107028 12.86 256492 8456164 2917700 16.74 4424940 6315020 212 18:33:31 17:31:01 4727520 13681488 2260072 13.80 258944 8516520 3465112 19.88 4581220 6374448 44876 18:33:31 17:32:01 6019524 14974328 968056 5.91 259732 8516556 1793384 10.29 3299616 6369368 520 18:33:31 17:33:01 4701880 13658732 2282876 13.94 261076 8517236 3482396 19.98 4621180 6360636 256 18:33:31 17:34:01 2892284 11850012 4090700 24.97 261740 8517424 5045996 28.95 6426532 6360824 284 18:33:31 17:35:01 6040680 14998380 944088 5.76 261752 8517408 1772796 10.17 3289764 6360780 296 18:33:31 17:36:01 5971928 14931080 1011212 6.17 262856 8517740 1853288 10.63 3356044 6360424 328 18:33:31 17:37:01 4801952 13762448 2179148 13.30 263796 8518120 3019928 17.33 4522088 6360800 12 18:33:31 17:38:01 5577236 14538124 1403820 8.57 263960 8518380 2780684 15.95 3749548 6360952 580 18:33:31 17:39:01 4875160 13837924 2103824 12.84 265340 
8518828 2920608 16.76 4447556 6361308 80 18:33:31 17:40:02 5975668 14938652 1003684 6.13 265368 8519004 1818836 10.44 3352524 6361384 416 18:33:31 17:41:01 4056396 13020392 2920816 17.83 266044 8519320 3707428 21.27 5263072 6361696 148 18:33:31 17:42:01 4094808 13059128 2881980 17.59 266064 8519600 4193388 24.06 5224968 6361856 568 18:33:31 17:43:01 2589376 11554948 4385344 26.77 266812 8520072 5156924 29.59 6726048 6362312 272 18:33:31 17:44:01 2563044 11528976 4411336 26.93 266812 8520432 5172952 29.68 6751300 6362672 40 18:33:31 17:45:01 4482356 13448608 2492856 15.22 266856 8520672 4049416 23.23 4837640 6362872 268 18:33:31 17:46:01 2515972 11483200 4456976 27.21 267304 8521196 5232380 30.02 6798392 6363392 372 18:33:31 17:47:01 2497284 11464704 4475432 27.32 267324 8521376 5232380 30.02 6816572 6363564 256 18:33:31 17:48:01 2488228 11455768 4484364 27.37 267344 8521480 5232380 30.02 6824996 6363672 168 18:33:31 17:49:01 2467280 11435076 4505044 27.50 267352 8521736 5248416 30.11 6845076 6363912 332 18:33:31 17:50:01 2458556 11426676 4513460 27.55 267356 8522040 5248416 30.11 6854068 6364232 596 18:33:31 17:51:01 2434176 11402488 4537640 27.70 267364 8522216 5265408 30.21 6877708 6364408 312 18:33:31 17:52:01 2406800 11375272 4564832 27.87 267368 8522380 5298376 30.40 6904224 6364564 368 18:33:31 17:53:01 5356680 14325560 1616180 9.87 267376 8522712 2831856 16.25 3963868 6364868 416 18:33:31 17:54:01 2586996 11556684 4383344 26.76 267820 8523064 5235984 30.04 6722812 6365208 256 18:33:31 17:55:01 2567820 11537864 4402152 26.87 267820 8523420 5268008 30.22 6742768 6365564 176 18:33:31 17:56:01 2537492 11508168 4431812 27.05 267824 8524040 5268008 30.22 6773136 6366184 284 18:33:31 17:57:01 2517624 11488760 4451296 27.17 267840 8524500 5268008 30.22 6791388 6366628 292 18:33:31 17:58:01 2502544 11474188 4465836 27.26 267840 8524996 5268008 30.22 6806040 6367136 504 18:33:31 17:59:01 2482864 11454964 4484988 27.38 267856 8525468 5284452 30.32 6825856 6367584 212 18:33:31 18:00:01 2456364 11429404 4510536 27.53 267864 8526364 5300444 30.41 6849832 6368500 292 18:33:31 18:01:01 2110064 11082564 4857568 29.65 267888 8525720 6253904 35.88 7198792 6367832 244 18:33:31 18:02:01 1174548 10147844 5791796 35.36 268216 8526180 6625208 38.01 8128576 6368288 188 18:33:31 18:03:01 1041664 10015384 5924040 36.16 268224 8526596 6706748 38.48 8259276 6368700 60 18:33:31 18:04:01 936664 9910804 6028488 36.80 268232 8527024 6755768 38.76 8363284 6369112 164 18:33:31 18:05:01 914244 9888568 6050732 36.94 268236 8527188 6755768 38.76 8384820 6369288 40 18:33:31 18:06:01 898532 9873240 6065980 37.03 268244 8527560 6787844 38.94 8399772 6369664 556 18:33:31 18:07:01 883916 9858932 6080236 37.12 268268 8527876 6820692 39.13 8414260 6369956 592 18:33:31 18:08:01 856944 9832212 6106948 37.28 268280 8528100 6820692 39.13 8441184 6370188 44 18:33:31 18:09:01 3009412 11984708 3955596 24.15 268408 8527916 5543764 31.81 6296900 6370020 108 18:33:31 18:10:01 1372920 10348980 5590628 34.13 268740 8528312 6510416 37.35 7928116 6370412 124 18:33:31 18:11:01 1166520 10143024 5796296 35.38 268756 8528740 6624784 38.01 8131552 6370840 320 18:33:31 18:12:01 1082580 10059488 5879756 35.89 268760 8529140 6673704 38.29 8216240 6371240 44 18:33:31 18:13:01 1061696 10038808 5900376 36.02 268764 8529344 6689720 38.38 8237032 6371440 48 18:33:31 18:14:01 1055332 10032568 5906588 36.06 268764 8529460 6689720 38.38 8241896 6371560 152 18:33:31 18:15:01 1042936 10020324 5918848 36.13 268768 8529612 6689720 38.38 8255060 6371708 132 
18:33:31 18:16:01      1024704  10002372   5936768     36.24    268768   8529884   6689720     38.38   8271336   6371984       148
18:33:31 18:17:01       997220   9974996   5964064     36.41    268772   8529992   6705720     38.47   8299160   6372084       376
18:33:31 18:18:01       976092   9954284   5984724     36.53    268780   8530384   6722456     38.57   8318728   6372468       344
18:33:31 18:19:01      4132280  13355076   2585792     15.78    274880   8755376   4170452     23.93   4985808   6547980    157228
18:33:31 18:20:01      2205724  11431064   4508884     27.52    275228   8757352   5322056     30.53   6914232   6541872       532
18:33:31 18:21:01      6158628  15383996    557820      3.41    275236   8757380   1397640      8.02   2974412   6541680       300
18:33:31 18:22:01      2332532  11558748   4381496     26.75    275604   8757696   5133264     29.45   6810396   6520632       572
18:33:31 18:23:01      2253912  11480568   4459352     27.22    275608   8758140   5230200     30.01   6888548   6520524        12
18:33:31 18:24:01      2240100  11466968   4472944     27.31    275620   8758332   5230200     30.01   6900912   6520708       192
18:33:31 18:25:01      2229972  11457048   4482972     27.37    275624   8758512   5230200     30.01   6910832   6520896       256
18:33:31 18:26:01      2225232  11452460   4487556     27.39    275644   8758648   5230200     30.01   6915144   6521032       296
18:33:31 18:27:01      2182628  11410388   4529520     27.65    275652   8759188   5247144     30.10   6957368   6521524       260
18:33:31 18:28:01      2194360  11422444   4517488     27.58    275656   8759484   5246796     30.10   6945536   6521844       128
18:33:31 18:29:01      2108144  11336172   4603796     28.10    276040   8758964   5709992     32.76   7044696   6510284       444
18:33:31 18:30:01      1616416  10844980   5094472     31.10    276056   8759480   5907096     33.89   7533956   6510644       440
18:33:31 18:31:01      1500488  10730172   5209188     31.80    276060   8760588   5956712     34.18   7646876   6511688       868
18:33:31 18:32:01      1459184  10689280   5250036     32.05    276060   8760996   5972748     34.27   7686796   6512092       748
18:33:31 18:33:01      5953160  15407804    534160      3.26    281196   8965356   1281228      7.35   3016884   6690636     18380
18:33:31 Average:      3279751  11966181   3975692     24.27    255245   8268054   4862405     27.90   6206730   6207862     44319
18:33:31 
18:33:31 17:10:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
[per-minute samples for lo, docker0 and ens3 from 17:11:01 through 18:33:01 elided; averages below]
18:33:31 Average:           lo     22.27     22.27     10.29     10.29      0.00      0.00      0.00      0.00
18:33:31 Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:33:31 Average:         ens3     20.12     14.60    252.91      2.37      0.00      0.00      0.00      0.00
18:33:31 
18:33:31 
18:33:31 ---> sar -P ALL:
18:33:31 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-2598)   10/29/24   _x86_64_   (4 CPU)
18:33:31 
18:33:31 17:09:39     LINUX RESTART  (4 CPU)
18:33:31 
18:33:31 17:10:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
[per-minute samples for CPU all and CPUs 0-3 from 17:11:01 through 18:33:01 elided; averages below]
18:33:31 Average:        all     20.09      0.23      1.00      0.90      0.09     77.70
18:33:31 Average:          0     20.43      0.21      1.02      1.08      0.08     77.17
18:33:31 Average:          1     19.96      0.20      1.00      0.74      0.08     78.01
18:33:31 Average:          2     19.91      0.24      0.99      0.76      0.10     78.01
18:33:31 Average:          3     20.05      0.25      1.00      1.00      0.08     77.61
18:33:31 
18:33:31 
18:33:31 
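Every sar table in this log ends with "Average:" summary rows, and every console line carries an "HH:MM:SS" Jenkins timestamp prefix. A minimal sketch for pulling those average rows out of a saved copy of this log; the local file name consoleText.log and the timestamp-prefix format are assumptions based on the output above, not part of the job itself.

    #!/usr/bin/env python3
    # Sketch (assumption: console log saved locally as consoleText.log,
    # each line prefixed with an "HH:MM:SS " Jenkins console timestamp).
    # Extracts the "Average:" summary rows that sar prints after each table.
    import re

    TS_PREFIX = re.compile(r"^\d{2}:\d{2}:\d{2}\s+")  # Jenkins console timestamp

    def sar_averages(path="consoleText.log"):
        """Yield the value tokens of every 'Average:' row found in the log."""
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                body = TS_PREFIX.sub("", line).strip()
                if body.startswith("Average:"):
                    yield body.split()[1:]  # drop the 'Average:' label

    if __name__ == "__main__":
        for row in sar_averages():
            print(*row)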