03:43:26 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/113602 03:43:26 Running as SYSTEM 03:43:26 [EnvInject] - Loading node environment variables. 03:43:26 Building remotely on prd-ubuntu2004-docker-4c-16g-25087 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-scandium 03:43:26 [ssh-agent] Looking for ssh-agent implementation... 03:43:26 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 03:43:26 $ ssh-agent 03:43:27 SSH_AUTH_SOCK=/tmp/ssh-mUSu3BCi0DfC/agent.13199 03:43:27 SSH_AGENT_PID=13201 03:43:27 [ssh-agent] Started. 03:43:27 Running ssh-add (command line suppressed) 03:43:27 Identity added: /w/workspace/transportpce-tox-verify-scandium@tmp/private_key_17053432351239861871.key (/w/workspace/transportpce-tox-verify-scandium@tmp/private_key_17053432351239861871.key) 03:43:27 [ssh-agent] Using credentials jenkins (jenkins-ssh) 03:43:27 The recommended git tool is: NONE 03:43:33 using credential jenkins-ssh 03:43:34 Wiping out workspace first. 03:43:34 Cloning the remote Git repository 03:43:34 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce 03:43:34 > git init /w/workspace/transportpce-tox-verify-scandium # timeout=10 03:43:34 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 03:43:34 > git --version # timeout=10 03:43:34 > git --version # 'git version 2.25.1' 03:43:34 using GIT_SSH to set credentials jenkins-ssh 03:43:34 Verifying host key using known hosts file 03:43:34 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 03:43:34 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 03:43:37 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 03:43:37 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 03:43:39 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 03:43:39 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 03:43:39 using GIT_SSH to set credentials jenkins-ssh 03:43:39 Verifying host key using known hosts file 03:43:39 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
03:43:39 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/02/113602/2 # timeout=10 03:43:39 > git rev-parse d6d346d568042c620ef10f0c8d49618b7bbb83b6^{commit} # timeout=10 03:43:39 Checking out Revision d6d346d568042c620ef10f0c8d49618b7bbb83b6 (refs/changes/02/113602/2) 03:43:39 > git config core.sparsecheckout # timeout=10 03:43:39 > git checkout -f d6d346d568042c620ef10f0c8d49618b7bbb83b6 # timeout=10 03:43:43 Commit message: "Refactor test_utils lib to improve the Reg search" 03:43:43 > git rev-parse FETCH_HEAD^{commit} # timeout=10 03:43:44 > git rev-list --no-walk 4fa081d591be867f34609af07aee62f954db43dd # timeout=10 03:43:44 > git remote # timeout=10 03:43:44 > git submodule init # timeout=10 03:43:44 > git submodule sync # timeout=10 03:43:44 > git config --get remote.origin.url # timeout=10 03:43:44 > git submodule init # timeout=10 03:43:44 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 03:43:44 ERROR: No submodules found. 03:43:45 provisioning config files... 03:43:45 copy managed file [npmrc] to file:/home/jenkins/.npmrc 03:43:45 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 03:43:45 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins17975271504415537456.sh 03:43:45 ---> python-tools-install.sh 03:43:45 Setup pyenv: 03:43:45 * system (set by /opt/pyenv/version) 03:43:45 * 3.8.13 (set by /opt/pyenv/version) 03:43:45 * 3.9.13 (set by /opt/pyenv/version) 03:43:45 * 3.10.13 (set by /opt/pyenv/version) 03:43:45 * 3.11.7 (set by /opt/pyenv/version) 03:43:50 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-yoJP 03:43:50 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 03:43:53 lf-activate-venv(): INFO: Installing: lftools 03:44:30 lf-activate-venv(): INFO: Adding /tmp/venv-yoJP/bin to PATH 03:44:30 Generating Requirements File 03:45:02 Python 3.11.7 03:45:02 pip 24.2 from /tmp/venv-yoJP/lib/python3.11/site-packages/pip (python 3.11) 03:45:02 appdirs==1.4.4 03:45:02 argcomplete==3.5.0 03:45:02 aspy.yaml==1.3.0 03:45:02 attrs==24.2.0 03:45:02 autopage==0.5.2 03:45:02 beautifulsoup4==4.12.3 03:45:02 boto3==1.35.24 03:45:02 botocore==1.35.24 03:45:02 bs4==0.0.2 03:45:02 cachetools==5.5.0 03:45:02 certifi==2024.8.30 03:45:02 cffi==1.17.1 03:45:02 cfgv==3.4.0 03:45:02 chardet==5.2.0 03:45:02 charset-normalizer==3.3.2 03:45:02 click==8.1.7 03:45:02 cliff==4.7.0 03:45:02 cmd2==2.4.3 03:45:02 cryptography==3.3.2 03:45:02 debtcollector==3.0.0 03:45:02 decorator==5.1.1 03:45:02 defusedxml==0.7.1 03:45:02 Deprecated==1.2.14 03:45:02 distlib==0.3.8 03:45:02 dnspython==2.6.1 03:45:02 docker==4.2.2 03:45:02 dogpile.cache==1.3.3 03:45:02 durationpy==0.7 03:45:02 email_validator==2.2.0 03:45:02 filelock==3.16.1 03:45:02 future==1.0.0 03:45:02 gitdb==4.0.11 03:45:02 GitPython==3.1.43 03:45:02 google-auth==2.35.0 03:45:02 httplib2==0.22.0 03:45:02 identify==2.6.1 03:45:02 idna==3.10 03:45:02 importlib-resources==1.5.0 03:45:02 iso8601==2.1.0 03:45:02 Jinja2==3.1.4 03:45:02 jmespath==1.0.1 03:45:02 jsonpatch==1.33 03:45:02 jsonpointer==3.0.0 03:45:02 jsonschema==4.23.0 03:45:02 jsonschema-specifications==2023.12.1 03:45:02 keystoneauth1==5.8.0 03:45:02 kubernetes==31.0.0 03:45:02 lftools==0.37.10 03:45:02 lxml==5.3.0 03:45:02 MarkupSafe==2.1.5 03:45:02 msgpack==1.1.0 03:45:02 multi_key_dict==2.0.3 03:45:02 munch==4.0.0 03:45:02 netaddr==1.3.0 03:45:02 netifaces==0.11.0 03:45:02 niet==1.4.2 03:45:02 nodeenv==1.9.1 03:45:02 oauth2client==4.1.3 03:45:02 oauthlib==3.2.2 
03:45:02 openstacksdk==4.0.0 03:45:02 os-client-config==2.1.0 03:45:02 os-service-types==1.7.0 03:45:02 osc-lib==3.1.0 03:45:02 oslo.config==9.6.0 03:45:02 oslo.context==5.6.0 03:45:02 oslo.i18n==6.4.0 03:45:02 oslo.log==6.1.2 03:45:02 oslo.serialization==5.5.0 03:45:02 oslo.utils==7.3.0 03:45:02 packaging==24.1 03:45:02 pbr==6.1.0 03:45:02 platformdirs==4.3.6 03:45:02 prettytable==3.11.0 03:45:02 pyasn1==0.6.1 03:45:02 pyasn1_modules==0.4.1 03:45:02 pycparser==2.22 03:45:02 pygerrit2==2.0.15 03:45:02 PyGithub==2.4.0 03:45:02 PyJWT==2.9.0 03:45:02 PyNaCl==1.5.0 03:45:02 pyparsing==2.4.7 03:45:02 pyperclip==1.9.0 03:45:02 pyrsistent==0.20.0 03:45:02 python-cinderclient==9.6.0 03:45:02 python-dateutil==2.9.0.post0 03:45:02 python-heatclient==4.0.0 03:45:02 python-jenkins==1.8.2 03:45:02 python-keystoneclient==5.5.0 03:45:02 python-magnumclient==4.7.0 03:45:02 python-openstackclient==7.1.2 03:45:02 python-swiftclient==4.6.0 03:45:02 PyYAML==6.0.2 03:45:02 referencing==0.35.1 03:45:02 requests==2.32.3 03:45:02 requests-oauthlib==2.0.0 03:45:02 requestsexceptions==1.4.0 03:45:02 rfc3986==2.0.0 03:45:02 rpds-py==0.20.0 03:45:02 rsa==4.9 03:45:02 ruamel.yaml==0.18.6 03:45:02 ruamel.yaml.clib==0.2.8 03:45:02 s3transfer==0.10.2 03:45:02 simplejson==3.19.3 03:45:02 six==1.16.0 03:45:02 smmap==5.0.1 03:45:02 soupsieve==2.6 03:45:02 stevedore==5.3.0 03:45:02 tabulate==0.9.0 03:45:02 toml==0.10.2 03:45:02 tomlkit==0.13.2 03:45:02 tqdm==4.66.5 03:45:02 typing_extensions==4.12.2 03:45:02 tzdata==2024.1 03:45:02 urllib3==1.26.20 03:45:02 virtualenv==20.26.5 03:45:02 wcwidth==0.2.13 03:45:02 websocket-client==1.8.0 03:45:02 wrapt==1.16.0 03:45:02 xdg==6.0.0 03:45:02 xmltodict==0.13.0 03:45:02 yq==3.4.3 03:45:02 [EnvInject] - Injecting environment variables from a build step. 03:45:02 [EnvInject] - Injecting as environment variables the properties content 03:45:02 PYTHON=python3 03:45:02 03:45:02 [EnvInject] - Variables injected successfully. 
03:45:02 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins9089228038457785983.sh 03:45:02 ---> tox-install.sh 03:45:02 + source /home/jenkins/lf-env.sh 03:45:02 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 03:45:02 ++ mktemp -d /tmp/venv-XXXX 03:45:02 + lf_venv=/tmp/venv-RBHJ 03:45:02 + local venv_file=/tmp/.os_lf_venv 03:45:02 + local python=python3 03:45:02 + local options 03:45:02 + local set_path=true 03:45:02 + local install_args= 03:45:02 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 03:45:02 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 03:45:02 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 03:45:02 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 03:45:02 + true 03:45:02 + case $1 in 03:45:02 + venv_file=/tmp/.toxenv 03:45:02 + shift 2 03:45:02 + true 03:45:02 + case $1 in 03:45:02 + shift 03:45:02 + break 03:45:02 + case $python in 03:45:02 + local pkg_list= 03:45:02 + [[ -d /opt/pyenv ]] 03:45:02 + echo 'Setup pyenv:' 03:45:02 Setup pyenv: 03:45:02 + export PYENV_ROOT=/opt/pyenv 03:45:02 + PYENV_ROOT=/opt/pyenv 03:45:02 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:02 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:02 + pyenv versions 03:45:02 system 03:45:02 3.8.13 03:45:02 3.9.13 03:45:02 3.10.13 03:45:02 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 03:45:02 + command -v pyenv 03:45:02 ++ pyenv init - --no-rehash 03:45:02 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 03:45:02 for i in ${!paths[@]}; do 03:45:02 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 03:45:02 fi; done; 03:45:02 echo "${paths[*]}"'\'')" 03:45:02 export PATH="/opt/pyenv/shims:${PATH}" 03:45:02 export PYENV_SHELL=bash 03:45:02 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 03:45:02 pyenv() { 03:45:02 local command 03:45:02 command="${1:-}" 03:45:02 if [ "$#" -gt 0 ]; then 03:45:02 shift 03:45:02 fi 03:45:02 03:45:02 case "$command" in 03:45:02 rehash|shell) 03:45:02 eval "$(pyenv "sh-$command" "$@")" 03:45:02 ;; 03:45:02 *) 03:45:02 command pyenv "$command" "$@" 03:45:02 ;; 03:45:02 esac 03:45:02 }' 03:45:02 +++ bash --norc -ec 'IFS=:; paths=($PATH); 03:45:02 for i in ${!paths[@]}; do 03:45:02 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 03:45:02 fi; done; 03:45:02 echo "${paths[*]}"' 03:45:02 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:02 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:02 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:02 ++ export PYENV_SHELL=bash 03:45:02 ++ PYENV_SHELL=bash 03:45:02 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 03:45:02 +++ complete -F _pyenv pyenv 03:45:02 ++ lf-pyver python3 03:45:02 ++ 
local py_version_xy=python3 03:45:02 ++ local py_version_xyz= 03:45:02 ++ pyenv versions 03:45:02 ++ local command 03:45:02 ++ command=versions 03:45:02 ++ '[' 1 -gt 0 ']' 03:45:02 ++ shift 03:45:02 ++ case "$command" in 03:45:02 ++ command pyenv versions 03:45:02 ++ pyenv versions 03:45:02 ++ awk '{ print $1 }' 03:45:02 ++ grep -E '^[0-9.]*[0-9]$' 03:45:02 ++ sed 's/^[ *]* //' 03:45:03 ++ [[ ! -s /tmp/.pyenv_versions ]] 03:45:03 +++ grep '^3' /tmp/.pyenv_versions 03:45:03 +++ sort -V 03:45:03 +++ tail -n 1 03:45:03 ++ py_version_xyz=3.11.7 03:45:03 ++ [[ -z 3.11.7 ]] 03:45:03 ++ echo 3.11.7 03:45:03 ++ return 0 03:45:03 + pyenv local 3.11.7 03:45:03 + local command 03:45:03 + command=local 03:45:03 + '[' 2 -gt 0 ']' 03:45:03 + shift 03:45:03 + case "$command" in 03:45:03 + command pyenv local 3.11.7 03:45:03 + pyenv local 3.11.7 03:45:03 + for arg in "$@" 03:45:03 + case $arg in 03:45:03 + pkg_list+='tox ' 03:45:03 + for arg in "$@" 03:45:03 + case $arg in 03:45:03 + pkg_list+='virtualenv ' 03:45:03 + for arg in "$@" 03:45:03 + case $arg in 03:45:03 + pkg_list+='urllib3~=1.26.15 ' 03:45:03 + [[ -f /tmp/.toxenv ]] 03:45:03 + [[ ! -f /tmp/.toxenv ]] 03:45:03 + [[ -n '' ]] 03:45:03 + python3 -m venv /tmp/venv-RBHJ 03:45:07 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-RBHJ' 03:45:07 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-RBHJ 03:45:07 + echo /tmp/venv-RBHJ 03:45:07 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' 03:45:07 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv 03:45:07 + /tmp/venv-RBHJ/bin/python3 -m pip install --upgrade --quiet pip virtualenv 03:45:10 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 03:45:10 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 03:45:10 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 03:45:10 + /tmp/venv-RBHJ/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 03:45:14 + type python3 03:45:14 + true 03:45:14 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-RBHJ/bin to PATH' 03:45:14 lf-activate-venv(): INFO: Adding /tmp/venv-RBHJ/bin to PATH 03:45:14 + PATH=/tmp/venv-RBHJ/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:14 + return 0 03:45:14 + python3 --version 03:45:14 Python 3.11.7 03:45:14 + python3 -m pip --version 03:45:15 pip 24.2 from /tmp/venv-RBHJ/lib/python3.11/site-packages/pip (python 3.11) 03:45:15 + python3 -m pip freeze 03:45:15 cachetools==5.5.0 03:45:15 chardet==5.2.0 03:45:15 colorama==0.4.6 03:45:15 distlib==0.3.8 03:45:15 filelock==3.16.1 03:45:15 packaging==24.1 03:45:15 platformdirs==4.3.6 03:45:15 pluggy==1.5.0 03:45:15 pyproject-api==1.8.0 03:45:15 tox==4.20.0 03:45:15 urllib3==1.26.20 03:45:15 virtualenv==20.26.5 03:45:15 [transportpce-tox-verify-scandium] $ /bin/sh -xe /tmp/jenkins15010861132333365994.sh 03:45:15 [EnvInject] - Injecting environment variables from a build step. 03:45:15 [EnvInject] - Injecting as environment variables the properties content 03:45:15 PARALLEL=True 03:45:15 03:45:15 [EnvInject] - Variables injected successfully. 
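The lf-activate-venv/lf-pyver trace above picks the interpreter by listing pyenv versions, keeping the 3.x entries and taking the highest one (grep '^3' | sort -V | tail -n 1). A minimal Python sketch of that selection logic, assuming pyenv is on PATH; the function name is illustrative and not part of lf-env.sh:

import re
import subprocess

def latest_py3_from_pyenv() -> str:
    # Equivalent of: pyenv versions --bare | grep '^3' | sort -V | tail -n 1
    out = subprocess.run(["pyenv", "versions", "--bare"],
                         capture_output=True, text=True, check=True).stdout
    versions = [v.strip() for v in out.splitlines() if re.fullmatch(r"3(\.\d+)+", v.strip())]
    return max(versions, key=lambda v: tuple(map(int, v.split("."))))

print(latest_py3_from_pyenv())  # e.g. 3.11.7 on this builder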
03:45:15 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins17220697891008926354.sh 03:45:15 ---> tox-run.sh 03:45:15 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:15 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/tox 03:45:15 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/docs 03:45:15 + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/tox 03:45:15 + cd /w/workspace/transportpce-tox-verify-scandium/. 03:45:15 + source /home/jenkins/lf-env.sh 03:45:15 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 03:45:15 ++ mktemp -d /tmp/venv-XXXX 03:45:15 + lf_venv=/tmp/venv-R062 03:45:15 + local venv_file=/tmp/.os_lf_venv 03:45:15 + local python=python3 03:45:15 + local options 03:45:15 + local set_path=true 03:45:15 + local install_args= 03:45:15 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 03:45:15 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 03:45:15 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 03:45:15 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 03:45:15 + true 03:45:15 + case $1 in 03:45:15 + venv_file=/tmp/.toxenv 03:45:15 + shift 2 03:45:15 + true 03:45:15 + case $1 in 03:45:15 + shift 03:45:15 + break 03:45:15 + case $python in 03:45:15 + local pkg_list= 03:45:15 + [[ -d /opt/pyenv ]] 03:45:15 + echo 'Setup pyenv:' 03:45:15 Setup pyenv: 03:45:15 + export PYENV_ROOT=/opt/pyenv 03:45:15 + PYENV_ROOT=/opt/pyenv 03:45:15 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:15 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:15 + pyenv versions 03:45:15 system 03:45:15 3.8.13 03:45:15 3.9.13 03:45:15 3.10.13 03:45:15 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 03:45:15 + command -v pyenv 03:45:15 ++ pyenv init - --no-rehash 03:45:15 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 03:45:15 for i in ${!paths[@]}; do 03:45:15 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 03:45:15 fi; done; 03:45:15 echo "${paths[*]}"'\'')" 03:45:15 export PATH="/opt/pyenv/shims:${PATH}" 03:45:15 export PYENV_SHELL=bash 03:45:15 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 03:45:15 pyenv() { 03:45:15 local command 03:45:15 command="${1:-}" 03:45:15 if [ "$#" -gt 0 ]; then 03:45:15 shift 03:45:15 fi 03:45:15 03:45:15 case "$command" in 03:45:15 rehash|shell) 03:45:15 eval "$(pyenv "sh-$command" "$@")" 03:45:15 ;; 03:45:15 *) 03:45:15 command pyenv "$command" "$@" 03:45:15 ;; 03:45:15 esac 03:45:15 }' 03:45:15 +++ bash --norc -ec 'IFS=:; paths=($PATH); 03:45:15 for i in ${!paths[@]}; do 03:45:15 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 03:45:15 fi; done; 03:45:15 echo "${paths[*]}"' 03:45:15 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 
03:45:15 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:15 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:15 ++ export PYENV_SHELL=bash 03:45:15 ++ PYENV_SHELL=bash 03:45:15 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 03:45:15 +++ complete -F _pyenv pyenv 03:45:15 ++ lf-pyver python3 03:45:15 ++ local py_version_xy=python3 03:45:15 ++ local py_version_xyz= 03:45:15 ++ pyenv versions 03:45:15 ++ local command 03:45:15 ++ command=versions 03:45:15 ++ '[' 1 -gt 0 ']' 03:45:15 ++ sed 's/^[ *]* //' 03:45:15 ++ shift 03:45:15 ++ case "$command" in 03:45:15 ++ command pyenv versions 03:45:15 ++ pyenv versions 03:45:15 ++ grep -E '^[0-9.]*[0-9]$' 03:45:15 ++ awk '{ print $1 }' 03:45:15 ++ [[ ! -s /tmp/.pyenv_versions ]] 03:45:15 +++ sort -V 03:45:15 +++ grep '^3' /tmp/.pyenv_versions 03:45:15 +++ tail -n 1 03:45:15 ++ py_version_xyz=3.11.7 03:45:15 ++ [[ -z 3.11.7 ]] 03:45:15 ++ echo 3.11.7 03:45:15 ++ return 0 03:45:15 + pyenv local 3.11.7 03:45:15 + local command 03:45:15 + command=local 03:45:15 + '[' 2 -gt 0 ']' 03:45:15 + shift 03:45:15 + case "$command" in 03:45:15 + command pyenv local 3.11.7 03:45:15 + pyenv local 3.11.7 03:45:15 + for arg in "$@" 03:45:15 + case $arg in 03:45:15 + pkg_list+='tox ' 03:45:15 + for arg in "$@" 03:45:15 + case $arg in 03:45:15 + pkg_list+='virtualenv ' 03:45:15 + for arg in "$@" 03:45:15 + case $arg in 03:45:15 + pkg_list+='urllib3~=1.26.15 ' 03:45:15 + [[ -f /tmp/.toxenv ]] 03:45:15 ++ cat /tmp/.toxenv 03:45:15 + lf_venv=/tmp/venv-RBHJ 03:45:15 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RBHJ from' file:/tmp/.toxenv 03:45:15 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RBHJ from file:/tmp/.toxenv 03:45:15 + /tmp/venv-RBHJ/bin/python3 -m pip install --upgrade --quiet pip virtualenv 03:45:16 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 03:45:16 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 03:45:16 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 03:45:16 + /tmp/venv-RBHJ/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 03:45:17 + type python3 03:45:17 + true 03:45:17 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-RBHJ/bin to PATH' 03:45:17 lf-activate-venv(): INFO: Adding /tmp/venv-RBHJ/bin to PATH 03:45:17 + PATH=/tmp/venv-RBHJ/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:17 + return 0 03:45:17 + [[ -d /opt/pyenv ]] 03:45:17 + echo '---> Setting up pyenv' 03:45:17 ---> Setting up pyenv 03:45:17 + export PYENV_ROOT=/opt/pyenv 03:45:17 + PYENV_ROOT=/opt/pyenv 03:45:17 + export PATH=/opt/pyenv/bin:/tmp/venv-RBHJ/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:17 + PATH=/opt/pyenv/bin:/tmp/venv-RBHJ/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 03:45:17 ++ pwd 03:45:17 + PYTHONPATH=/w/workspace/transportpce-tox-verify-scandium 03:45:17 + 
export PYTHONPATH 03:45:17 + export TOX_TESTENV_PASSENV=PYTHONPATH 03:45:17 + TOX_TESTENV_PASSENV=PYTHONPATH 03:45:17 + tox --version 03:45:17 4.20.0 from /tmp/venv-RBHJ/lib/python3.11/site-packages/tox/__init__.py 03:45:18 + PARALLEL=True 03:45:18 + TOX_OPTIONS_LIST= 03:45:18 + [[ -n '' ]] 03:45:18 + case ${PARALLEL,,} in 03:45:18 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' 03:45:18 + tox --parallel auto --parallel-live 03:45:18 + tee -a /w/workspace/transportpce-tox-verify-scandium/archives/tox/tox.log 03:45:19 checkbashisms: freeze> python -m pip freeze --all 03:45:19 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:45:19 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt 03:45:19 docs: install_deps> python -I -m pip install -r docs/requirements.txt 03:45:20 checkbashisms: pip==24.2,setuptools==75.1.0,wheel==0.44.0 03:45:20 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh 03:45:20 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' 03:45:20 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + 03:45:20 script ./reflectwarn.sh does not appear to have a #! interpreter line; 03:45:20 you may get strange results 03:45:21 checkbashisms: OK ✔ in 2.71 seconds 03:45:21 pre-commit: install_deps> python -I -m pip install pre-commit 03:45:23 pre-commit: freeze> python -m pip freeze --all 03:45:24 pre-commit: cfgv==3.4.0,distlib==0.3.8,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.2,platformdirs==4.3.6,pre-commit==3.8.0,PyYAML==6.0.2,setuptools==75.1.0,virtualenv==20.26.5,wheel==0.44.0 03:45:24 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh 03:45:24 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' 03:45:24 /usr/bin/cpan 03:45:24 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run --all-files --show-diff-on-failure 03:45:24 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. 03:45:24 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. 03:45:25 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. 03:45:25 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. 03:45:26 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. 
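tox-run.sh above maps PARALLEL=True to "tox --parallel auto --parallel-live" and tees the output into archives/tox/tox.log. A rough Python equivalent of that step, with the log path taken from the trace and the environment handling simplified (illustrative only, not the job's actual script):

import os
import subprocess
import sys

archive = "/w/workspace/transportpce-tox-verify-scandium/archives/tox/tox.log"
opts = ["--parallel", "auto", "--parallel-live"] if os.environ.get("PARALLEL", "").lower() == "true" else []

with open(archive, "a") as log:
    proc = subprocess.Popen(["tox", *opts], stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    for line in proc.stdout:  # emulate 'tee -a'
        sys.stdout.write(line)
        log.write(line)
    ret = proc.wait()
sys.exit(ret)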
03:45:26 buildcontroller: freeze> python -m pip freeze --all 03:45:26 [INFO] Initializing environment for https://github.com/perltidy/perltidy. 03:45:26 buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:45:26 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_controller.sh 03:45:26 + update-java-alternatives -l 03:45:26 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 03:45:26 java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64 03:45:26 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 03:45:26 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 03:45:26 java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64 03:45:26 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 03:45:26 + java -version 03:45:26 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; 03:45:27 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 03:45:27 [INFO] Once installed this environment will be reused. 03:45:27 [INFO] This may take a few minutes... 03:45:27 + JAVA_VER=21 03:45:27 + echo 21 03:45:27 21 03:45:27 + + javac -version 03:45:27 sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; 03:45:27 + JAVAC_VER=21 03:45:27 + echo 21 03:45:27 21 03:45:27 ok, java is 21 or newer 03:45:27 + [ 21 -ge 21 ] 03:45:27 + [ 21 -ge 21 ] 03:45:27 + echo ok, java is 21 or newer 03:45:27 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp 03:45:27 2024-09-21 03:45:27 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1] 03:45:27 + sudo mkdir -p /opt 03:45:27 + sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt 03:45:27 + sudo ln -s /opt/apache-maven-3.9.8 /opt/maven 03:45:28 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn 03:45:28 + mvn --version 03:45:28 Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256) 03:45:28 Maven home: /opt/maven 03:45:28 Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 03:45:28 Default locale: en, platform encoding: UTF-8 03:45:28 OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix" 03:45:29 NOTE: Picked up JDK_JAVA_OPTIONS: 03:45:29 --add-opens=java.base/java.io=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.lang=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.net=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.nio=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.nio.file=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.util=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.util.jar=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.util.stream=ALL-UNNAMED 03:45:29 --add-opens=java.base/java.util.zip=ALL-UNNAMED 03:45:29 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 03:45:29 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 03:45:29 -Xlog:disable 03:45:31 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. 
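build_controller.sh above extracts the major version from "java -version" and "javac -version" with a sed expression and requires 21 or newer before fetching Maven 3.9.8. A hedged Python sketch of the same check (the function name is made up; note that "java -version" prints to stderr):

import re
import subprocess

def java_major_version() -> int:
    out = subprocess.run(["java", "-version"], capture_output=True, text=True).stderr
    m = re.search(r'version "(\d+)\.', out)  # e.g. openjdk version "21.0.4"
    if m is None:
        raise RuntimeError("cannot parse java version: " + out)
    return int(m.group(1))

assert java_major_version() >= 21, "build requires Java 21 or newer"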
03:45:31 [INFO] Once installed this environment will be reused. 03:45:31 [INFO] This may take a few minutes... 03:45:37 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. 03:45:37 [INFO] Once installed this environment will be reused. 03:45:37 [INFO] This may take a few minutes... 03:45:40 [INFO] Installing environment for https://github.com/perltidy/perltidy. 03:45:40 [INFO] Once installed this environment will be reused. 03:45:40 [INFO] This may take a few minutes... 03:45:48 docs-linkcheck: freeze> python -m pip freeze --all 03:45:48 docs: freeze> python -m pip freeze --all 03:45:49 docs-linkcheck: alabaster==0.7.16,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.3.2,contourpy==1.3.0,cycler==0.12.1,docutils==0.20.1,fonttools==4.53.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==2.1.5,matplotlib==3.9.2,numpy==2.1.1,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==7.4.7,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==2.0.0,sphinx-tabs==3.4.5,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 03:45:49 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/linkcheck 03:45:49 docs: alabaster==0.7.16,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.3.2,contourpy==1.3.0,cycler==0.12.1,docutils==0.20.1,fonttools==4.53.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==2.1.5,matplotlib==3.9.2,numpy==2.1.1,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==7.4.7,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==2.0.0,sphinx-tabs==3.4.5,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 03:45:49 docs: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/html 03:45:50 
/w/workspace/transportpce-tox-verify-scandium/.tox/docs-linkcheck/lib/python3.11/site-packages/sphinx/builders/linkcheck.py:86: RemovedInSphinx80Warning: The default value for 'linkcheck_report_timeouts_as_broken' will change to False in Sphinx 8, meaning that request timeouts will be reported with a new 'timeout' status, instead of as 'broken'. This is intended to provide more detail as to the failure mode. See https://github.com/sphinx-doc/sphinx/issues/11868 for details.
03:45:50 warnings.warn(deprecation_msg, RemovedInSphinx80Warning, stacklevel=1)
03:45:51 docs: OK ✔ in 33.56 seconds
03:45:51 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0'
03:45:52 trim trailing whitespace.................................................Passed
03:45:53 Tabs remover.............................................................Passed
03:45:53 autopep8.................................................................Passed
03:45:57 perltidy.................................................................docs-linkcheck: OK ✔ in 35 seconds
03:45:57 pylint: freeze> python -m pip freeze --all
03:45:57 pylint: astroid==3.3.3,dill==0.3.8,isort==5.13.2,mccabe==0.7.0,pip==24.2,platformdirs==4.3.6,pylint==3.3.0,setuptools==75.1.0,tomlkit==0.13.2,wheel==0.44.0
03:45:57 pylint: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' +
03:45:58 Passed
03:45:58 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run gitlint-ci --hook-stage manual
03:45:58 [INFO] Installing environment for https://github.com/jorisroovers/gitlint.
03:45:58 [INFO] Once installed this environment will be reused.
03:45:58 [INFO] This may take a few minutes...
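The RemovedInSphinx80Warning emitted by the docs-linkcheck run above names the 'linkcheck_report_timeouts_as_broken' option. If the project wanted to opt in to the future Sphinx 8 behaviour (timeouts reported with their own 'timeout' status) and silence the warning, the knob would go in docs/conf.py; whether to set it is a project decision, this only shows the option involved:

# docs/conf.py (sketch)
linkcheck_report_timeouts_as_broken = False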
03:46:21 gitlint..................................................................Passed 03:46:26 03:46:26 ------------------------------------ 03:46:26 Your code has been rated at 10.00/10 03:46:26 03:47:15 pre-commit: OK ✔ in 1 minute 0.89 seconds 03:47:15 pylint: OK ✔ in 36.8 seconds 03:47:15 buildcontroller: OK ✔ in 1 minute 56.49 seconds 03:47:15 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:47:15 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:47:15 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:47:15 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:47:23 build_karaf_tests121: freeze> python -m pip freeze --all 03:47:23 sims: freeze> python -m pip freeze --all 03:47:23 build_karaf_tests221: freeze> python -m pip freeze --all 03:47:23 build_karaf_tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:47:23 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 03:47:23 NOTE: Picked up JDK_JAVA_OPTIONS: 03:47:23 --add-opens=java.base/java.io=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.lang=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.net=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.nio=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.nio.file=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util.jar=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util.stream=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util.zip=ALL-UNNAMED 03:47:23 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 03:47:23 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 03:47:23 -Xlog:disable 03:47:23 sims: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:47:23 sims: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./install_lightynode.sh 03:47:23 Using lighynode version 20.1.0.2 03:47:23 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory 03:47:23 build_karaf_tests221: 
bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:47:23 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 03:47:23 NOTE: Picked up JDK_JAVA_OPTIONS: 03:47:23 --add-opens=java.base/java.io=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.lang=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.net=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.nio=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.nio.file=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util.jar=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util.stream=ALL-UNNAMED 03:47:23 --add-opens=java.base/java.util.zip=ALL-UNNAMED 03:47:23 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 03:47:23 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 03:47:23 -Xlog:disable 03:47:26 sims: OK ✔ in 11.31 seconds 03:47:26 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:47:38 build_karaf_tests71: freeze> python -m pip freeze --all 03:47:38 build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:47:38 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 03:47:38 NOTE: Picked up JDK_JAVA_OPTIONS: 03:47:38 --add-opens=java.base/java.io=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.lang=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.net=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.nio=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.nio.file=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.util=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.util.jar=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.util.stream=ALL-UNNAMED 03:47:38 --add-opens=java.base/java.util.zip=ALL-UNNAMED 03:47:38 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 03:47:38 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 03:47:38 -Xlog:disable 03:48:17 build_karaf_tests221: OK ✔ in 1 minute 1.61 seconds 03:48:17 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:48:18 build_karaf_tests121: OK ✔ in 1 minute 3.6 seconds 03:48:18 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r 
/w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 03:48:33 build_karaf_tests_hybrid: freeze> python -m pip freeze --all 03:48:33 tests_tapi: freeze> python -m pip freeze --all 03:48:33 build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:48:33 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh 03:48:33 NOTE: Picked up JDK_JAVA_OPTIONS: 03:48:33 --add-opens=java.base/java.io=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.lang=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.net=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.nio=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.nio.file=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.util=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.util.jar=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.util.stream=ALL-UNNAMED 03:48:33 --add-opens=java.base/java.util.zip=ALL-UNNAMED 03:48:33 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 03:48:33 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 03:48:33 -Xlog:disable 03:48:33 tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:48:33 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi 03:48:33 using environment variables from ./karaf221.env 03:48:33 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py 03:48:51 build_karaf_tests71: OK ✔ in 1 minute 14.92 seconds 03:48:51 testsPCE: freeze> python -m pip freeze --all 03:48:52 testsPCE: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.53.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==2.1.5,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==10.4.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.1.4,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0 03:48:52 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh pce 03:48:52 pytest -q transportpce_tests/pce/test01_pce.py 03:49:58 ........................................... [100%] 03:51:02 20 passed in 129.30s (0:02:09) 03:51:02 pytest -q transportpce_tests/pce/test02_pce_400G.py 03:51:03 .................. [100%] 03:51:48 9 passed in 45.77s 03:51:48 pytest -q transportpce_tests/pce/test03_gnpy.py 03:51:50 .............. 
[100%]
03:52:31 8 passed in 42.71s
03:52:31 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py
03:52:34 ............ [100%]
03:52:39 50 passed in 245.23s (0:04:05)
03:52:39 pytest -q transportpce_tests/tapi/test02_full_topology.py
03:53:17 ... [100%]
03:53:23 3 passed in 52.26s
03:53:24 build_karaf_tests_hybrid: OK ✔ in 1 minute 7.31 seconds
03:53:24 testsPCE: OK ✔ in 6 minutes 9.19 seconds
03:53:24 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt
03:53:35 tests121: freeze> python -m pip freeze --all
03:53:35 tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
03:53:35 tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1
03:53:35 using environment variables from ./karaf121.env
03:53:35 pytest -q transportpce_tests/1.2.1/test01_portmapping.py
03:54:08 ...........FF....................F [100%]
03:57:37 =================================== FAILURES ===================================
03:57:37 _____________ TransportPCEtesting.test_12_check_openroadm_topology _____________
03:57:37
03:57:37 self =
03:57:37
03:57:37 def test_12_check_openroadm_topology(self):
03:57:37 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
03:57:37 self.assertEqual(response['status_code'], requests.codes.ok)
03:57:37 > self.assertEqual(len(response['network'][0]['node']), 13, 'There should be 13 openroadm nodes')
03:57:37 E AssertionError: 17 != 13 : There should be 13 openroadm nodes
03:57:37
03:57:37 transportpce_tests/tapi/test02_full_topology.py:272: AssertionError
03:57:37 ____________ TransportPCEtesting.test_13_get_tapi_topology_details _____________
03:57:37
03:57:37 self =
03:57:37
03:57:37 def test_13_get_tapi_topology_details(self):
03:57:37 self.tapi_topo["topology-id"] = test_utils.T0_FULL_MULTILAYER_TOPO_UUID
03:57:37 response = test_utils.transportpce_api_rpc_request(
03:57:37 'tapi-topology', 'get-topology-details', self.tapi_topo)
03:57:37 time.sleep(2)
03:57:37 self.assertEqual(response['status_code'], requests.codes.ok)
03:57:37 > self.assertEqual(len(response['output']['topology']['node']), 8, 'There should be 8 TAPI nodes')
03:57:37 E AssertionError: 9 != 8 : There should be 8 TAPI nodes
03:57:37
03:57:37 transportpce_tests/tapi/test02_full_topology.py:282: AssertionError
03:57:37 =========================== short test summary info ============================
03:57:37 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_12_check_openroadm_topology
03:57:37 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_13_get_tapi_topology_details
03:57:37 2 failed, 28 passed in 297.78s (0:04:57)
03:57:37 Ftests_tapi: exit 1 (543.66 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi pid=31088
03:57:37 tests_tapi: FAIL ✖ in 9 minutes 19.24 seconds
03:57:37 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt
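For reference, the two assertions that failed above (from tests_tapi's test02_full_topology.py, interleaved here with the tests121 run) boil down to node counts in the OpenROADM and TAPI topologies. A standalone sketch of those checks using the helper names visible in the traceback; the import path and a running controller behind the usual test endpoint are assumptions:

import requests
from transportpce_tests.common import test_utils  # module path assumed from the repo layout

resp = test_utils.get_ietf_network_request('openroadm-topology', 'config')
assert resp['status_code'] == requests.codes.ok
print(len(resp['network'][0]['node']))  # test_12 expects 13 nodes; this run saw 17

topo = {"topology-id": test_utils.T0_FULL_MULTILAYER_TOPO_UUID}
resp = test_utils.transportpce_api_rpc_request('tapi-topology', 'get-topology-details', topo)
assert resp['status_code'] == requests.codes.ok
print(len(resp['output']['topology']['node']))  # test_13 expects 8 nodes; this run saw 9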
03:57:38 FFFFFtests71: freeze> python -m pip freeze --all 03:57:44 tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 03:57:44 tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 7.1 03:57:44 using environment variables from ./karaf71.env 03:57:44 pytest -q transportpce_tests/7.1/test01_portmapping.py 03:57:44 FFFFFFFFFFF [100%] 03:58:00 =================================== FAILURES =================================== 03:58:00 _____ TransportPCEPortMappingTesting.test_04_rdm_portmapping_DEG1_TTP_TXRX _____ 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 
03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 
03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 
03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:536: in _make_request 03:58:00 response = conn.getresponse() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:507: in getresponse 03:58:00 httplib_response = super().getresponse() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1386: in getresponse 03:58:00 response.begin() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:325: in begin 03:58:00 version, status, reason = self._read_status() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:286: in _read_status 03:58:00 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 b = 03:58:00 03:58:00 def readinto(self, b): 03:58:00 """Read up to len(b) bytes into the writable buffer *b* and return 03:58:00 the number of bytes read. If the socket is non-blocking and no bytes 03:58:00 are available, None is returned. 03:58:00 03:58:00 If *b* is non-empty, a 0 return value indicates that the connection 03:58:00 was shutdown at the other end. 03:58:00 """ 03:58:00 self._checkClosed() 03:58:00 self._checkReadable() 03:58:00 if self._timeout_occurred: 03:58:00 raise OSError("cannot read from timed out object") 03:58:00 while True: 03:58:00 try: 03:58:00 > return self._sock.recv_into(b) 03:58:00 E ConnectionResetError: [Errno 104] Connection reset by peer 03:58:00 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/socket.py:706: ConnectionResetError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 
03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:474: in increment 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/util.py:38: in reraise 03:58:00 raise value.with_traceback(tb) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: in urlopen 03:58:00 response = self._make_request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:536: in _make_request 03:58:00 response = conn.getresponse() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:507: in getresponse 03:58:00 httplib_response = super().getresponse() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1386: in getresponse 03:58:00 response.begin() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:325: in begin 03:58:00 version, status, reason = self._read_status() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:286: in _read_status 03:58:00 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 b = 03:58:00 03:58:00 def readinto(self, b): 03:58:00 """Read up to len(b) bytes into the writable buffer *b* and return 03:58:00 the number of bytes read. If the socket is non-blocking and no bytes 03:58:00 are available, None is returned. 03:58:00 03:58:00 If *b* is non-empty, a 0 return value indicates that the connection 03:58:00 was shutdown at the other end. 
03:58:00 """ 03:58:00 self._checkClosed() 03:58:00 self._checkReadable() 03:58:00 if self._timeout_occurred: 03:58:00 raise OSError("cannot read from timed out object") 03:58:00 while True: 03:58:00 try: 03:58:00 > return self._sock.recv_into(b) 03:58:00 E urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) 03:58:00 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/socket.py:706: ProtocolError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_04_rdm_portmapping_DEG1_TTP_TXRX(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "DEG1-TTP-TXRX") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:72: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 
03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 > raise ConnectionError(err, request=request) 03:58:00 E requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:682: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_04_rdm_portmapping_DEG1_TTP_TXRX 03:58:00 _____ TransportPCEPortMappingTesting.test_05_rdm_portmapping_SRG1_PP7_TXRX _____ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. 
Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 
chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 
03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 
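[editor's note] The release_conn/preload_content interplay described in the docstring above mainly matters when streaming a response. A minimal sketch under the same assumed local endpoint (nothing here is TransportPCE-specific):

from urllib3 import HTTPConnectionPool

pool = HTTPConnectionPool("localhost", 8182)

# preload_content=False defers reading the body; the connection stays checked
# out of the pool until the body is fully read or release_conn() is called,
# exactly as the docstring above describes.
resp = pool.urlopen(
    "GET",
    "/rests/data/transportpce-portmapping:network/nodes=ROADMA01",
    preload_content=False,
)
try:
    payload = resp.read()   # consuming the body returns the connection
finally:
    resp.release_conn()     # a no-op if the connection was already released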
03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 
03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_05_rdm_portmapping_SRG1_PP7_TXRX(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG1-PP7-TXRX") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:81: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 
03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 
03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_05_rdm_portmapping_SRG1_PP7_TXRX 03:58:00 _____ TransportPCEPortMappingTesting.test_06_rdm_portmapping_SRG3_PP1_TXRX _____ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 
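[editor's note] As the adapter code above shows, the MaxRetryError raised by urllib3 is re-raised as a requests-level exception, so a caller such as test_utils.get_request() only ever sees requests.exceptions.ConnectionError for this failure mode. A standalone sketch reproducing it (assuming, as in this log, that nothing is listening on the port):

import requests

url = ("http://localhost:8182/rests/data/transportpce-portmapping:network"
       "/nodes=ROADMA01/mapping=SRG1-PP7-TXRX")
try:
    requests.get(url, timeout=(10, 10))
except requests.exceptions.ConnectTimeout:
    # ConnectTimeoutError path in the cascade above
    print("connect timeout")
except requests.exceptions.ConnectionError as exc:
    # NewConnectionError([Errno 111] Connection refused) ends up here
    print("controller not reachable:", exc)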
03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 
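[editor's note] The ConnectionRefusedError above comes straight out of sock.connect(): nothing is listening on localhost:8182 at this point in the run. A quick pre-flight check along these lines (a hypothetical helper, not part of transportpce_tests/common/test_utils.py) can distinguish "controller never came up" from a genuine API regression:

import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    # Returns True only if a TCP connection to (host, port) succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, resolution failures
        return False

print(port_is_open("localhost", 8182))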
03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. 
Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 
03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 
03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 
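[editor's note] The Retry(total=0, ...) shown in the locals above allows the initial attempt but no retries, so the very first increment() call exhausts the budget and raises MaxRetryError. A standalone sketch of that behaviour; the URL and errno simply mirror this log:

from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# Same configuration as the Retry object in the locals above.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
try:
    retry.increment(
        method="GET",
        url="/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX",
        error=OSError(111, "Connection refused"),
    )
except MaxRetryError as exc:
    print("retries exhausted:", exc.reason)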
03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_06_rdm_portmapping_SRG3_PP1_TXRX(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG3-PP1-TXRX") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:90: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 
transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_06_rdm_portmapping_SRG3_PP1_TXRX 03:58:00 ________ TransportPCEPortMappingTesting.test_07_xpdr_device_connection _________ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. 
If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'PUT' 03:58:00 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 03:58:00 body = '{"node": [{"node-id": "XPDRA01", "netconf-node-topology:host": "127.0.0.1", "netconf-node-topology:port": "17830", "n...off-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}]}' 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '669', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 
release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 
03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 
03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'PUT' 03:58:00 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 
03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_07_xpdr_device_connection(self): 03:58:00 > response = test_utils.mount_device("XPDRA01", ('xpdra', self.NODE_VERSION)) 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:99: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:342: in mount_device 03:58:00 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 03:58:00 transportpce_tests/common/test_utils.py:124: in put_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 
03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 
03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_07_xpdr_device_connection 03:58:00 _________ TransportPCEPortMappingTesting.test_08_xpdr_device_connected _________ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 
03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 
03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. 
Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 
03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 
03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 
03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_08_xpdr_device_connected(self): 03:58:00 > response = test_utils.check_device_connection("XPDRA01") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:103: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:370: in check_device_connection 03:58:00 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 03:58:00 
transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_08_xpdr_device_connected 03:58:00 _________ TransportPCEPortMappingTesting.test_09_xpdr_portmapping_info _________ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. 
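Note on the root cause: every test in this run fails with ConnectionRefusedError [Errno 111] inside urllib3's create_connection, i.e. nothing is listening on localhost:8182 when the tests issue their RESTCONF requests. A minimal sketch of the underlying socket call, assuming the same host and port; with no listener it raises the same errno seen in the log.

    import socket

    # socket.create_connection() performs the same getaddrinfo()/connect()
    # sequence that urllib3's create_connection wraps above; with no listener
    # on localhost:8182 it raises ConnectionRefusedError ([Errno 111] on Linux).
    try:
        with socket.create_connection(("localhost", 8182), timeout=10):
            print("controller RESTCONF endpoint is reachable")
    except ConnectionRefusedError as exc:
        print(f"nothing listening on localhost:8182: {exc}")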
Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 
03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 
03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 
03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 
03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_09_xpdr_portmapping_info(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:109: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 
03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 
03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_09_xpdr_portmapping_info 03:58:00 _______ TransportPCEPortMappingTesting.test_10_xpdr_portmapping_NETWORK1 _______ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 
03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 
03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 
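The urlopen docstring quoted above spells out how its retries argument is interpreted (a Retry object, an int, False, or None for the default). A short sketch of calling the pool-level API directly with the same policy as this run; the URL and flags mirror the traceback, the rest is illustrative.

    from urllib3 import HTTPConnectionPool
    from urllib3.exceptions import MaxRetryError
    from urllib3.util.retry import Retry

    pool = HTTPConnectionPool("localhost", port=8182)
    try:
        # retries=Retry(total=0) disables retrying; redirect=False and
        # preload_content=False match the values requests passes in above.
        resp = pool.urlopen(
            "GET",
            "/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1",
            retries=Retry(total=0),
            redirect=False,
            preload_content=False,
        )
        print(resp.status)
    except MaxRetryError as exc:
        # With the controller down this branch is reached immediately,
        # exactly as in the failures logged here.
        print(exc.reason)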
03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 
03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 
03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 
03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_10_xpdr_portmapping_NETWORK1(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:122: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 
transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_10_xpdr_portmapping_NETWORK1 03:58:00 _______ TransportPCEPortMappingTesting.test_11_xpdr_portmapping_NETWORK2 _______ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. 
If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 
03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). 
This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 
03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 
03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_11_xpdr_portmapping_NETWORK2(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:133: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 
03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 
03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_11_xpdr_portmapping_NETWORK2 03:58:00 _______ TransportPCEPortMappingTesting.test_12_xpdr_portmapping_CLIENT1 ________ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 
03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 """ 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 
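The create_connection code shown above resolves localhost via getaddrinfo and then calls sock.connect on each candidate address; with no listener on port 8182 that call fails with ConnectionRefusedError [Errno 111], which urllib3 wraps into NewConnectionError. A quick way to confirm whether anything is listening on that port, sketched here as a diagnostic aid rather than as part of the test suite:

import socket

# connect_ex() returns 0 when a listener accepts the connection and an errno
# such as 111 (ECONNREFUSED) otherwise, mirroring the failure in this log.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(10)
    rc = sock.connect_ex(("localhost", 8182))
print("listening" if rc == 0 else f"no listener, errno {rc}")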
03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. 
Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 
03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 
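The :param timeout: entry above documents that a requests timeout may be a single float or a (connect, read) tuple; HTTPAdapter.send converts either form into a urllib3 Timeout (TimeoutSauce in the source above), which is why these tracebacks show Timeout(connect=10, read=10, total=None). A minimal illustration of the two equivalent call styles, against a placeholder host that is not part of this test environment:

import requests

# Both forms end up as Timeout(connect=10, read=10) on the urllib3 side.
requests.get("http://example.com/", timeout=10)        # one float for both phases
requests.get("http://example.com/", timeout=(10, 10))  # explicit (connect, read) tuple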
03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 
03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_12_xpdr_portmapping_CLIENT1(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:144: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 
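The increment() source above explains why a single refused connection is fatal in these tests: requests' HTTPAdapter passes its max_retries, shown in the tracebacks as Retry(total=0, connect=None, read=False, redirect=None, status=None). The first connection error drives total to -1, new_retry.is_exhausted() becomes true, and MaxRetryError is raised, which adapter.send then maps to requests.exceptions.ConnectionError. A small sketch reproducing that exhaustion directly with urllib3; the NewConnectionError instance is constructed by hand purely for illustration:

from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

# The same Retry configuration shown in the tracebacks above.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
# Hand-built stand-in for the error urllib3 raises on ECONNREFUSED.
err = NewConnectionError(None, "Failed to establish a new connection: [Errno 111] Connection refused")
try:
    retry.increment(
        method="GET",
        url="/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1",
        error=err,
    )
except MaxRetryError as exc:
    print(type(exc).__name__, exc.reason)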
transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_12_xpdr_portmapping_CLIENT1 03:58:00 _______ TransportPCEPortMappingTesting.test_13_xpdr_portmapping_CLIENT2 ________ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. 
If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2' 03:58:00 body = None 03:58:00 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:00 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 redirect = False, assert_same_host = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:00 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:00 decode_content = False, response_kw = {} 03:58:00 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2', query=None, fragment=None) 03:58:00 destination_scheme = None, conn = None, release_this_conn = True 03:58:00 http_tunnel_required = False, err = None, clean_exit = False 03:58:00 03:58:00 def urlopen( # type: ignore[override] 03:58:00 self, 03:58:00 method: str, 03:58:00 url: str, 03:58:00 body: _TYPE_BODY | None = None, 03:58:00 headers: typing.Mapping[str, str] | None = None, 03:58:00 retries: Retry | bool | int | None = None, 03:58:00 redirect: bool = True, 03:58:00 assert_same_host: bool = True, 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 pool_timeout: int | None = None, 03:58:00 release_conn: bool | None = None, 03:58:00 chunked: bool = False, 03:58:00 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:00 preload_content: bool = True, 03:58:00 decode_content: bool = True, 03:58:00 **response_kw: typing.Any, 03:58:00 ) -> BaseHTTPResponse: 03:58:00 
""" 03:58:00 Get a connection from the pool and perform an HTTP request. This is the 03:58:00 lowest level call for making a request, so you'll need to specify all 03:58:00 the raw details. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 More commonly, it's appropriate to use a convenience method 03:58:00 such as :meth:`request`. 03:58:00 03:58:00 .. note:: 03:58:00 03:58:00 `release_conn` will only behave as expected if 03:58:00 `preload_content=False` because we want to make 03:58:00 `preload_content=False` the default behaviour someday soon without 03:58:00 breaking backwards compatibility. 03:58:00 03:58:00 :param method: 03:58:00 HTTP request method (such as GET, POST, PUT, etc.) 03:58:00 03:58:00 :param url: 03:58:00 The URL to perform the request on. 03:58:00 03:58:00 :param body: 03:58:00 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:00 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:00 03:58:00 :param headers: 03:58:00 Dictionary of custom headers to send, such as User-Agent, 03:58:00 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:00 these headers completely replace any pool-specific headers. 03:58:00 03:58:00 :param retries: 03:58:00 Configure the number of retries to allow before raising a 03:58:00 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:00 03:58:00 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:00 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:00 over different types of retries. 03:58:00 Pass an integer number to retry connection errors that many times, 03:58:00 but no other types of errors. Pass zero to never retry. 03:58:00 03:58:00 If ``False``, then retries are disabled and any exception is raised 03:58:00 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:00 the redirect response will be returned. 03:58:00 03:58:00 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:00 03:58:00 :param redirect: 03:58:00 If True, automatically handle redirects (status codes 301, 302, 03:58:00 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:00 will disable redirect, too. 03:58:00 03:58:00 :param assert_same_host: 03:58:00 If ``True``, will make sure that the host of the pool requests is 03:58:00 consistent else will raise HostChangedError. When ``False``, you can 03:58:00 use the pool on an HTTP proxy and request foreign hosts. 03:58:00 03:58:00 :param timeout: 03:58:00 If specified, overrides the default timeout for this one 03:58:00 request. It may be a float (in seconds) or an instance of 03:58:00 :class:`urllib3.util.Timeout`. 03:58:00 03:58:00 :param pool_timeout: 03:58:00 If set and the pool is set to block=True, then this method will 03:58:00 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:00 connection is available within the time period. 03:58:00 03:58:00 :param bool preload_content: 03:58:00 If True, the response's body will be preloaded into memory. 03:58:00 03:58:00 :param bool decode_content: 03:58:00 If True, will attempt to decode the body based on the 03:58:00 'content-encoding' header. 03:58:00 03:58:00 :param release_conn: 03:58:00 If False, then the urlopen call will not release the connection 03:58:00 back into the pool once a response is received (but will release if 03:58:00 you read the entire contents of the response such as when 03:58:00 `preload_content=True`). 
This is useful if you're not preloading 03:58:00 the response's content immediately. You will need to call 03:58:00 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:00 back into the pool. If None, it takes the value of ``preload_content`` 03:58:00 which defaults to ``True``. 03:58:00 03:58:00 :param bool chunked: 03:58:00 If True, urllib3 will send the body using chunked transfer 03:58:00 encoding. Otherwise, urllib3 will send the body using the standard 03:58:00 content-length form. Defaults to False. 03:58:00 03:58:00 :param int body_pos: 03:58:00 Position to seek to in file-like body in the event of a retry or 03:58:00 redirect. Typically this won't need to be set because urllib3 will 03:58:00 auto-populate the value when needed. 03:58:00 """ 03:58:00 parsed_url = parse_url(url) 03:58:00 destination_scheme = parsed_url.scheme 03:58:00 03:58:00 if headers is None: 03:58:00 headers = self.headers 03:58:00 03:58:00 if not isinstance(retries, Retry): 03:58:00 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:00 03:58:00 if release_conn is None: 03:58:00 release_conn = preload_content 03:58:00 03:58:00 # Check host 03:58:00 if assert_same_host and not self.is_same_host(url): 03:58:00 raise HostChangedError(self, url, retries) 03:58:00 03:58:00 # Ensure that the URL we're connecting to is properly encoded 03:58:00 if url.startswith("/"): 03:58:00 url = to_str(_encode_target(url)) 03:58:00 else: 03:58:00 url = to_str(parsed_url.url) 03:58:00 03:58:00 conn = None 03:58:00 03:58:00 # Track whether `conn` needs to be released before 03:58:00 # returning/raising/recursing. Update this variable if necessary, and 03:58:00 # leave `release_conn` constant throughout the function. That way, if 03:58:00 # the function recurses, the original value of `release_conn` will be 03:58:00 # passed down into the recursive call, and its value will be respected. 03:58:00 # 03:58:00 # See issue #651 [1] for details. 03:58:00 # 03:58:00 # [1] 03:58:00 release_this_conn = release_conn 03:58:00 03:58:00 http_tunnel_required = connection_requires_http_tunnel( 03:58:00 self.proxy, self.proxy_config, destination_scheme 03:58:00 ) 03:58:00 03:58:00 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:00 # have to copy the headers dict so we can safely change it without those 03:58:00 # changes being reflected in anyone else's copy. 03:58:00 if not http_tunnel_required: 03:58:00 headers = headers.copy() # type: ignore[attr-defined] 03:58:00 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:00 03:58:00 # Must keep the exception bound to a separate variable or else Python 3 03:58:00 # complains about UnboundLocalError. 03:58:00 err = None 03:58:00 03:58:00 # Keep track of whether we cleanly exited the except block. This 03:58:00 # ensures we do proper cleanup in finally. 03:58:00 clean_exit = False 03:58:00 03:58:00 # Rewind body position, if needed. Record current position 03:58:00 # for future rewinds in the event of a redirect/retry. 03:58:00 body_pos = set_file_position(body, body_pos) 03:58:00 03:58:00 try: 03:58:00 # Request a connection from the queue. 03:58:00 timeout_obj = self._get_timeout(timeout) 03:58:00 conn = self._get_conn(timeout=pool_timeout) 03:58:00 03:58:00 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:00 03:58:00 # Is this a closed/new connection that requires CONNECT tunnelling? 
03:58:00 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:00 try: 03:58:00 self._prepare_proxy(conn) 03:58:00 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:00 self._raise_timeout( 03:58:00 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:00 ) 03:58:00 raise 03:58:00 03:58:00 # If we're going to release the connection in ``finally:``, then 03:58:00 # the response doesn't need to know about the connection. Otherwise 03:58:00 # it will also try to release it and we'll have a double-release 03:58:00 # mess. 03:58:00 response_conn = conn if not release_conn else None 03:58:00 03:58:00 # Make the request on the HTTPConnection object 03:58:00 > response = self._make_request( 03:58:00 conn, 03:58:00 method, 03:58:00 url, 03:58:00 timeout=timeout_obj, 03:58:00 body=body, 03:58:00 headers=headers, 03:58:00 chunked=chunked, 03:58:00 retries=retries, 03:58:00 response_conn=response_conn, 03:58:00 preload_content=preload_content, 03:58:00 decode_content=decode_content, 03:58:00 **response_kw, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:00 conn.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:00 self.endheaders() 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:00 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:00 self.send(msg) 03:58:00 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:00 self.connect() 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:00 self.sock = self._new_conn() 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 except socket.gaierror as e: 03:58:00 raise NameResolutionError(self.host, self, e) from e 03:58:00 except SocketTimeout as e: 03:58:00 raise ConnectTimeoutError( 03:58:00 self, 03:58:00 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:00 ) from e 03:58:00 03:58:00 except OSError as e: 03:58:00 > raise NewConnectionError( 03:58:00 self, f"Failed to establish a new connection: {e}" 03:58:00 ) from e 03:58:00 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 
03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 > resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:00 retries = retries.increment( 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:00 method = 'GET' 03:58:00 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2' 03:58:00 response = None 03:58:00 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:00 _pool = 03:58:00 _stacktrace = 03:58:00 03:58:00 def increment( 03:58:00 self, 03:58:00 method: str | None = None, 03:58:00 url: str | None = None, 03:58:00 response: BaseHTTPResponse | None = None, 03:58:00 error: Exception | None = None, 03:58:00 _pool: ConnectionPool | None = None, 03:58:00 _stacktrace: TracebackType | None = None, 03:58:00 ) -> Self: 03:58:00 """Return a new Retry object with incremented retry counters. 03:58:00 03:58:00 :param response: A response object, or None, if the server did not 03:58:00 return a response. 03:58:00 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:00 :param Exception error: An error encountered during the request, or 03:58:00 None if the response was received successfully. 03:58:00 03:58:00 :return: A new ``Retry`` object. 03:58:00 """ 03:58:00 if self.total is False and error: 03:58:00 # Disabled, indicate to re-raise the error. 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 03:58:00 total = self.total 03:58:00 if total is not None: 03:58:00 total -= 1 03:58:00 03:58:00 connect = self.connect 03:58:00 read = self.read 03:58:00 redirect = self.redirect 03:58:00 status_count = self.status 03:58:00 other = self.other 03:58:00 cause = "unknown" 03:58:00 status = None 03:58:00 redirect_location = None 03:58:00 03:58:00 if error and self._is_connection_error(error): 03:58:00 # Connect retry? 03:58:00 if connect is False: 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif connect is not None: 03:58:00 connect -= 1 03:58:00 03:58:00 elif error and self._is_read_error(error): 03:58:00 # Read retry? 03:58:00 if read is False or method is None or not self._is_method_retryable(method): 03:58:00 raise reraise(type(error), error, _stacktrace) 03:58:00 elif read is not None: 03:58:00 read -= 1 03:58:00 03:58:00 elif error: 03:58:00 # Other retry? 03:58:00 if other is not None: 03:58:00 other -= 1 03:58:00 03:58:00 elif response and response.get_redirect_location(): 03:58:00 # Redirect retry? 
03:58:00 if redirect is not None: 03:58:00 redirect -= 1 03:58:00 cause = "too many redirects" 03:58:00 response_redirect_location = response.get_redirect_location() 03:58:00 if response_redirect_location: 03:58:00 redirect_location = response_redirect_location 03:58:00 status = response.status 03:58:00 03:58:00 else: 03:58:00 # Incrementing because of a server error like a 500 in 03:58:00 # status_forcelist and the given method is in the allowed_methods 03:58:00 cause = ResponseError.GENERIC_ERROR 03:58:00 if response and response.status: 03:58:00 if status_count is not None: 03:58:00 status_count -= 1 03:58:00 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:00 status = response.status 03:58:00 03:58:00 history = self.history + ( 03:58:00 RequestHistory(method, url, error, status, redirect_location), 03:58:00 ) 03:58:00 03:58:00 new_retry = self.new( 03:58:00 total=total, 03:58:00 connect=connect, 03:58:00 read=read, 03:58:00 redirect=redirect, 03:58:00 status=status_count, 03:58:00 other=other, 03:58:00 history=history, 03:58:00 ) 03:58:00 03:58:00 if new_retry.is_exhausted(): 03:58:00 reason = error or ResponseError(cause) 03:58:00 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:00 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:00 03:58:00 During handling of the above exception, another exception occurred: 03:58:00 03:58:00 self = 03:58:00 03:58:00 def test_13_xpdr_portmapping_CLIENT2(self): 03:58:00 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2") 03:58:00 03:58:00 transportpce_tests/1.2.1/test01_portmapping.py:156: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:00 response = get_request(target_url) 03:58:00 transportpce_tests/common/test_utils.py:116: in get_request 03:58:00 return requests.request( 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:00 return session.request(method=method, url=url, **kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:00 resp = self.send(prep, **send_kwargs) 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:00 r = adapter.send(request, **kwargs) 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 self = 03:58:00 request = , stream = False 03:58:00 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:00 proxies = OrderedDict() 03:58:00 03:58:00 def send( 03:58:00 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:00 ): 03:58:00 """Sends PreparedRequest object. Returns Response object. 03:58:00 03:58:00 :param request: The :class:`PreparedRequest ` being sent. 03:58:00 :param stream: (optional) Whether to stream the request content. 
03:58:00 :param timeout: (optional) How long to wait for the server to send 03:58:00 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:00 read timeout) ` tuple. 03:58:00 :type timeout: float or tuple or urllib3 Timeout object 03:58:00 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:00 we verify the server's TLS certificate, or a string, in which case it 03:58:00 must be a path to a CA bundle to use 03:58:00 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:00 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:00 :rtype: requests.Response 03:58:00 """ 03:58:00 03:58:00 try: 03:58:00 conn = self.get_connection_with_tls_context( 03:58:00 request, verify, proxies=proxies, cert=cert 03:58:00 ) 03:58:00 except LocationValueError as e: 03:58:00 raise InvalidURL(e, request=request) 03:58:00 03:58:00 self.cert_verify(conn, request.url, verify, cert) 03:58:00 url = self.request_url(request, proxies) 03:58:00 self.add_headers( 03:58:00 request, 03:58:00 stream=stream, 03:58:00 timeout=timeout, 03:58:00 verify=verify, 03:58:00 cert=cert, 03:58:00 proxies=proxies, 03:58:00 ) 03:58:00 03:58:00 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:00 03:58:00 if isinstance(timeout, tuple): 03:58:00 try: 03:58:00 connect, read = timeout 03:58:00 timeout = TimeoutSauce(connect=connect, read=read) 03:58:00 except ValueError: 03:58:00 raise ValueError( 03:58:00 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:00 f"or a single float to set both timeouts to the same value." 03:58:00 ) 03:58:00 elif isinstance(timeout, TimeoutSauce): 03:58:00 pass 03:58:00 else: 03:58:00 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:00 03:58:00 try: 03:58:00 resp = conn.urlopen( 03:58:00 method=request.method, 03:58:00 url=url, 03:58:00 body=request.body, 03:58:00 headers=request.headers, 03:58:00 redirect=False, 03:58:00 assert_same_host=False, 03:58:00 preload_content=False, 03:58:00 decode_content=False, 03:58:00 retries=self.max_retries, 03:58:00 timeout=timeout, 03:58:00 chunked=chunked, 03:58:00 ) 03:58:00 03:58:00 except (ProtocolError, OSError) as err: 03:58:00 raise ConnectionError(err, request=request) 03:58:00 03:58:00 except MaxRetryError as e: 03:58:00 if isinstance(e.reason, ConnectTimeoutError): 03:58:00 # TODO: Remove this in 3.0.0: see #2811 03:58:00 if not isinstance(e.reason, NewConnectionError): 03:58:00 raise ConnectTimeout(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, ResponseError): 03:58:00 raise RetryError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _ProxyError): 03:58:00 raise ProxyError(e, request=request) 03:58:00 03:58:00 if isinstance(e.reason, _SSLError): 03:58:00 # This branch is for urllib3 v1.22 and later. 
03:58:00 raise SSLError(e, request=request) 03:58:00 03:58:00 > raise ConnectionError(e, request=request) 03:58:00 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:00 ----------------------------- Captured stdout call ----------------------------- 03:58:00 execution of test_13_xpdr_portmapping_CLIENT2 03:58:00 _______ TransportPCEPortMappingTesting.test_14_xpdr_portmapping_CLIENT3 ________ 03:58:00 03:58:00 self = 03:58:00 03:58:00 def _new_conn(self) -> socket.socket: 03:58:00 """Establish a socket connection and set nodelay settings on it. 03:58:00 03:58:00 :return: New socket connection. 03:58:00 """ 03:58:00 try: 03:58:00 > sock = connection.create_connection( 03:58:00 (self._dns_host, self.port), 03:58:00 self.timeout, 03:58:00 source_address=self.source_address, 03:58:00 socket_options=self.socket_options, 03:58:00 ) 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:00 raise err 03:58:00 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:00 03:58:00 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:00 socket_options = [(6, 1, 1)] 03:58:00 03:58:00 def create_connection( 03:58:00 address: tuple[str, int], 03:58:00 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:00 source_address: tuple[str, int] | None = None, 03:58:00 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:00 ) -> socket.socket: 03:58:00 """Connect to *address* and return the socket object. 03:58:00 03:58:00 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:00 port)``) and return the socket object. Passing the optional 03:58:00 *timeout* parameter will set the timeout on the socket instance 03:58:00 before attempting to connect. If no *timeout* is supplied, the 03:58:00 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:00 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:00 for the socket to bind as a source address before making the connection. 03:58:00 An host of '' or port 0 tells the OS to use the default. 03:58:00 """ 03:58:00 03:58:00 host, port = address 03:58:00 if host.startswith("["): 03:58:00 host = host.strip("[]") 03:58:00 err = None 03:58:00 03:58:00 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:00 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:00 # The original create_connection function always returns all records. 
03:58:00 family = allowed_gai_family() 03:58:00 03:58:00 try: 03:58:00 host.encode("idna") 03:58:00 except UnicodeError: 03:58:00 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:00 03:58:00 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:00 af, socktype, proto, canonname, sa = res 03:58:00 sock = None 03:58:00 try: 03:58:00 sock = socket.socket(af, socktype, proto) 03:58:00 03:58:00 # If provided, set socket level options before connecting. 03:58:00 _set_socket_options(sock, socket_options) 03:58:00 03:58:00 if timeout is not _DEFAULT_TIMEOUT: 03:58:00 sock.settimeout(timeout) 03:58:00 if source_address: 03:58:00 sock.bind(source_address) 03:58:00 > sock.connect(sa) 03:58:00 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:00 03:58:00 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:00 03:58:00 The above exception was the direct cause of the following exception: 03:58:00 03:58:00 self = 03:58:00 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3', query=None, fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 03:58:01 ) -> BaseHTTPResponse: 03:58:01 """ 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 
03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. 
Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 
03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 
03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 
03:58:01 03:58:01 :return: A new ``Retry`` object. 03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_14_xpdr_portmapping_CLIENT3(self): 03:58:01 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3") 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:168: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:01 response = get_request(target_url) 03:58:01 
transportpce_tests/common/test_utils.py:116: in get_request 03:58:01 return requests.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_14_xpdr_portmapping_CLIENT3 03:58:01 _______ TransportPCEPortMappingTesting.test_15_xpdr_portmapping_CLIENT4 ________ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 > sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:01 raise err 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:01 socket_options = [(6, 1, 1)] 03:58:01 03:58:01 def create_connection( 03:58:01 address: tuple[str, int], 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 source_address: tuple[str, int] | None = None, 03:58:01 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:01 ) -> socket.socket: 03:58:01 """Connect to *address* and return the socket object. 03:58:01 03:58:01 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:01 port)``) and return the socket object. Passing the optional 03:58:01 *timeout* parameter will set the timeout on the socket instance 03:58:01 before attempting to connect. 
If no *timeout* is supplied, the 03:58:01 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:01 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:01 for the socket to bind as a source address before making the connection. 03:58:01 An host of '' or port 0 tells the OS to use the default. 03:58:01 """ 03:58:01 03:58:01 host, port = address 03:58:01 if host.startswith("["): 03:58:01 host = host.strip("[]") 03:58:01 err = None 03:58:01 03:58:01 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:01 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:01 # The original create_connection function always returns all records. 03:58:01 family = allowed_gai_family() 03:58:01 03:58:01 try: 03:58:01 host.encode("idna") 03:58:01 except UnicodeError: 03:58:01 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:01 03:58:01 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:01 af, socktype, proto, canonname, sa = res 03:58:01 sock = None 03:58:01 try: 03:58:01 sock = socket.socket(af, socktype, proto) 03:58:01 03:58:01 # If provided, set socket level options before connecting. 03:58:01 _set_socket_options(sock, socket_options) 03:58:01 03:58:01 if timeout is not _DEFAULT_TIMEOUT: 03:58:01 sock.settimeout(timeout) 03:58:01 if source_address: 03:58:01 sock.bind(source_address) 03:58:01 > sock.connect(sa) 03:58:01 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4', query=None, fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 03:58:01 ) -> BaseHTTPResponse: 03:58:01 
""" 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). 
This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 
03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 03:58:01 03:58:01 :return: A new ``Retry`` object. 03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 
03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_15_xpdr_portmapping_CLIENT4(self): 03:58:01 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4") 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:180: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:01 response = get_request(target_url) 03:58:01 transportpce_tests/common/test_utils.py:116: in get_request 03:58:01 return requests.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 
03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 
03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_15_xpdr_portmapping_CLIENT4 03:58:01 _______ TransportPCEPortMappingTesting.test_16_xpdr_device_disconnection _______ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 > sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:01 raise err 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:01 socket_options = [(6, 1, 1)] 03:58:01 03:58:01 def create_connection( 03:58:01 address: tuple[str, int], 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 source_address: tuple[str, int] | None = None, 03:58:01 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:01 ) -> socket.socket: 03:58:01 """Connect to *address* and return the socket object. 03:58:01 03:58:01 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:01 port)``) and return the socket object. Passing the optional 03:58:01 *timeout* parameter will set the timeout on the socket instance 03:58:01 before attempting to connect. If no *timeout* is supplied, the 03:58:01 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:01 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:01 for the socket to bind as a source address before making the connection. 03:58:01 An host of '' or port 0 tells the OS to use the default. 03:58:01 """ 03:58:01 03:58:01 host, port = address 03:58:01 if host.startswith("["): 03:58:01 host = host.strip("[]") 03:58:01 err = None 03:58:01 03:58:01 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:01 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:01 # The original create_connection function always returns all records. 
03:58:01 family = allowed_gai_family() 03:58:01 03:58:01 try: 03:58:01 host.encode("idna") 03:58:01 except UnicodeError: 03:58:01 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:01 03:58:01 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:01 af, socktype, proto, canonname, sa = res 03:58:01 sock = None 03:58:01 try: 03:58:01 sock = socket.socket(af, socktype, proto) 03:58:01 03:58:01 # If provided, set socket level options before connecting. 03:58:01 _set_socket_options(sock, socket_options) 03:58:01 03:58:01 if timeout is not _DEFAULT_TIMEOUT: 03:58:01 sock.settimeout(timeout) 03:58:01 if source_address: 03:58:01 sock.bind(source_address) 03:58:01 > sock.connect(sa) 03:58:01 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 method = 'DELETE' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 03:58:01 ) -> BaseHTTPResponse: 03:58:01 """ 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 
03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. 
Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 
03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 
03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'DELETE' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 
03:58:01 03:58:01 :return: A new ``Retry`` object. 03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_16_xpdr_device_disconnection(self): 03:58:01 > response = test_utils.unmount_device("XPDRA01") 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:191: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:359: in unmount_device 03:58:01 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 03:58:01 transportpce_tests/common/test_utils.py:133: in 
delete_request 03:58:01 return requests.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_16_xpdr_device_disconnection 03:58:01 _______ TransportPCEPortMappingTesting.test_17_xpdr_device_disconnected ________ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 > sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:01 raise err 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:01 socket_options = [(6, 1, 1)] 03:58:01 03:58:01 def create_connection( 03:58:01 address: tuple[str, int], 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 source_address: tuple[str, int] | None = None, 03:58:01 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:01 ) -> socket.socket: 03:58:01 """Connect to *address* and return the socket object. 03:58:01 03:58:01 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:01 port)``) and return the socket object. Passing the optional 03:58:01 *timeout* parameter will set the timeout on the socket instance 03:58:01 before attempting to connect. 
If no *timeout* is supplied, the 03:58:01 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:01 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:01 for the socket to bind as a source address before making the connection. 03:58:01 An host of '' or port 0 tells the OS to use the default. 03:58:01 """ 03:58:01 03:58:01 host, port = address 03:58:01 if host.startswith("["): 03:58:01 host = host.strip("[]") 03:58:01 err = None 03:58:01 03:58:01 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:01 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:01 # The original create_connection function always returns all records. 03:58:01 family = allowed_gai_family() 03:58:01 03:58:01 try: 03:58:01 host.encode("idna") 03:58:01 except UnicodeError: 03:58:01 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:01 03:58:01 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:01 af, socktype, proto, canonname, sa = res 03:58:01 sock = None 03:58:01 try: 03:58:01 sock = socket.socket(af, socktype, proto) 03:58:01 03:58:01 # If provided, set socket level options before connecting. 03:58:01 _set_socket_options(sock, socket_options) 03:58:01 03:58:01 if timeout is not _DEFAULT_TIMEOUT: 03:58:01 sock.settimeout(timeout) 03:58:01 if source_address: 03:58:01 sock.bind(source_address) 03:58:01 > sock.connect(sa) 03:58:01 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 
03:58:01 ) -> BaseHTTPResponse: 03:58:01 """ 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). 
This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 
03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 03:58:01 03:58:01 :return: A new ``Retry`` object. 03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 
03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_17_xpdr_device_disconnected(self): 03:58:01 > response = test_utils.check_device_connection("XPDRA01") 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:195: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:370: in check_device_connection 03:58:01 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 03:58:01 transportpce_tests/common/test_utils.py:116: in get_request 03:58:01 return requests.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 
03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 
03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_17_xpdr_device_disconnected 03:58:01 _______ TransportPCEPortMappingTesting.test_18_xpdr_device_not_connected _______ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 > sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:01 raise err 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:01 socket_options = [(6, 1, 1)] 03:58:01 03:58:01 def create_connection( 03:58:01 address: tuple[str, int], 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 source_address: tuple[str, int] | None = None, 03:58:01 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:01 ) -> socket.socket: 03:58:01 """Connect to *address* and return the socket object. 03:58:01 03:58:01 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:01 port)``) and return the socket object. Passing the optional 03:58:01 *timeout* parameter will set the timeout on the socket instance 03:58:01 before attempting to connect. If no *timeout* is supplied, the 03:58:01 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:01 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:01 for the socket to bind as a source address before making the connection. 03:58:01 An host of '' or port 0 tells the OS to use the default. 03:58:01 """ 03:58:01 03:58:01 host, port = address 03:58:01 if host.startswith("["): 03:58:01 host = host.strip("[]") 03:58:01 err = None 03:58:01 03:58:01 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:01 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:01 # The original create_connection function always returns all records. 
03:58:01 family = allowed_gai_family() 03:58:01 03:58:01 try: 03:58:01 host.encode("idna") 03:58:01 except UnicodeError: 03:58:01 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:01 03:58:01 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:01 af, socktype, proto, canonname, sa = res 03:58:01 sock = None 03:58:01 try: 03:58:01 sock = socket.socket(af, socktype, proto) 03:58:01 03:58:01 # If provided, set socket level options before connecting. 03:58:01 _set_socket_options(sock, socket_options) 03:58:01 03:58:01 if timeout is not _DEFAULT_TIMEOUT: 03:58:01 sock.settimeout(timeout) 03:58:01 if source_address: 03:58:01 sock.bind(source_address) 03:58:01 > sock.connect(sa) 03:58:01 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 03:58:01 ) -> BaseHTTPResponse: 03:58:01 """ 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 
03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 
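For reference, a minimal sketch (not taken from this log) of how the ``retries`` argument documented above is typically supplied; the endpoint URL is the one these tests query, and the Retry values mirror the ``Retry(total=0, connect=None, read=False, redirect=None)`` object shown in these frames.

import urllib3
from urllib3.util.retry import Retry

# requests hands urllib3 a Retry with total=0, so a refused TCP connection
# exhausts the retry budget immediately and urlopen raises MaxRetryError.
pool = urllib3.PoolManager(
    retries=Retry(total=0, connect=None, read=False, redirect=None)
)
try:
    pool.request(
        "GET",
        "http://localhost:8182/rests/data/transportpce-portmapping:network"
        "/nodes=XPDRA01/node-info",
    )
except urllib3.exceptions.MaxRetryError as exc:
    # With the controller down, exc.reason is the NewConnectionError
    # ("[Errno 111] Connection refused") recorded throughout this log.
    print(f"controller not reachable: {exc.reason}")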
03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 
03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 
03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 03:58:01 03:58:01 :return: A new ``Retry`` object. 
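For context, a minimal requests-level sketch (not taken from this log) of the call path these frames describe, assuming the localhost:8182 RESTCONF endpoint and the admin/admin Basic-auth credentials visible in the captured headers; the test_utils helper functions themselves are not reproduced here.

import requests

URL = (
    "http://localhost:8182/rests/data/network-topology:network-topology"
    "/topology=topology-netconf/node=XPDRA01?content=nonconfig"
)
try:
    # timeout=(10, 10) matches the Timeout(connect=10, read=10) seen above.
    response = requests.get(URL, auth=("admin", "admin"), timeout=(10, 10))
    print(response.status_code)
except requests.exceptions.ConnectionError as exc:
    # urllib3's MaxRetryError (caused by NewConnectionError) is re-raised by
    # the HTTPAdapter as requests.exceptions.ConnectionError, which is the
    # failure reported for test_17 and test_18 in this run.
    print(f"connection refused: {exc}")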
03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_18_xpdr_device_not_connected(self): 03:58:01 > response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:203: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:01 response = get_request(target_url) 03:58:01 transportpce_tests/common/test_utils.py:116: in get_request 03:58:01 return requests.request( 03:58:01 
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_18_xpdr_device_not_connected 03:58:01 _______ TransportPCEPortMappingTesting.test_19_rdm_device_disconnection ________ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 > sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:01 raise err 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:01 socket_options = [(6, 1, 1)] 03:58:01 03:58:01 def create_connection( 03:58:01 address: tuple[str, int], 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 source_address: tuple[str, int] | None = None, 03:58:01 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:01 ) -> socket.socket: 03:58:01 """Connect to *address* and return the socket object. 03:58:01 03:58:01 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:01 port)``) and return the socket object. Passing the optional 03:58:01 *timeout* parameter will set the timeout on the socket instance 03:58:01 before attempting to connect. 
If no *timeout* is supplied, the 03:58:01 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:01 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:01 for the socket to bind as a source address before making the connection. 03:58:01 An host of '' or port 0 tells the OS to use the default. 03:58:01 """ 03:58:01 03:58:01 host, port = address 03:58:01 if host.startswith("["): 03:58:01 host = host.strip("[]") 03:58:01 err = None 03:58:01 03:58:01 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:01 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:01 # The original create_connection function always returns all records. 03:58:01 family = allowed_gai_family() 03:58:01 03:58:01 try: 03:58:01 host.encode("idna") 03:58:01 except UnicodeError: 03:58:01 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:01 03:58:01 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:01 af, socktype, proto, canonname, sa = res 03:58:01 sock = None 03:58:01 try: 03:58:01 sock = socket.socket(af, socktype, proto) 03:58:01 03:58:01 # If provided, set socket level options before connecting. 03:58:01 _set_socket_options(sock, socket_options) 03:58:01 03:58:01 if timeout is not _DEFAULT_TIMEOUT: 03:58:01 sock.settimeout(timeout) 03:58:01 if source_address: 03:58:01 sock.bind(source_address) 03:58:01 > sock.connect(sa) 03:58:01 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 method = 'DELETE' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 
03:58:01 ) -> BaseHTTPResponse: 03:58:01 """ 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). 
This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 
03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'DELETE' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 03:58:01 03:58:01 :return: A new ``Retry`` object. 03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 
03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_19_rdm_device_disconnection(self): 03:58:01 > response = test_utils.unmount_device("ROADMA01") 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:211: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:359: in unmount_device 03:58:01 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 03:58:01 transportpce_tests/common/test_utils.py:133: in delete_request 03:58:01 return requests.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 
03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 
03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_19_rdm_device_disconnection 03:58:01 ________ TransportPCEPortMappingTesting.test_20_rdm_device_disconnected ________ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 > sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:01 raise err 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:01 socket_options = [(6, 1, 1)] 03:58:01 03:58:01 def create_connection( 03:58:01 address: tuple[str, int], 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 source_address: tuple[str, int] | None = None, 03:58:01 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:01 ) -> socket.socket: 03:58:01 """Connect to *address* and return the socket object. 03:58:01 03:58:01 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:01 port)``) and return the socket object. Passing the optional 03:58:01 *timeout* parameter will set the timeout on the socket instance 03:58:01 before attempting to connect. If no *timeout* is supplied, the 03:58:01 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:01 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:01 for the socket to bind as a source address before making the connection. 03:58:01 An host of '' or port 0 tells the OS to use the default. 03:58:01 """ 03:58:01 03:58:01 host, port = address 03:58:01 if host.startswith("["): 03:58:01 host = host.strip("[]") 03:58:01 err = None 03:58:01 03:58:01 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:01 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:01 # The original create_connection function always returns all records. 
03:58:01 family = allowed_gai_family() 03:58:01 03:58:01 try: 03:58:01 host.encode("idna") 03:58:01 except UnicodeError: 03:58:01 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:01 03:58:01 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:01 af, socktype, proto, canonname, sa = res 03:58:01 sock = None 03:58:01 try: 03:58:01 sock = socket.socket(af, socktype, proto) 03:58:01 03:58:01 # If provided, set socket level options before connecting. 03:58:01 _set_socket_options(sock, socket_options) 03:58:01 03:58:01 if timeout is not _DEFAULT_TIMEOUT: 03:58:01 sock.settimeout(timeout) 03:58:01 if source_address: 03:58:01 sock.bind(source_address) 03:58:01 > sock.connect(sa) 03:58:01 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 03:58:01 ) -> BaseHTTPResponse: 03:58:01 """ 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 
03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. 
Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 
03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 
03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 
03:58:01 03:58:01 :return: A new ``Retry`` object. 03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_20_rdm_device_disconnected(self): 03:58:01 > response = test_utils.check_device_connection("ROADMA01") 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:215: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:370: in check_device_connection 03:58:01 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 03:58:01 
transportpce_tests/common/test_utils.py:116: in get_request 03:58:01 return requests.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_20_rdm_device_disconnected 03:58:01 _______ TransportPCEPortMappingTesting.test_21_rdm_device_not_connected ________ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 > sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 03:58:01 raise err 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 address = ('localhost', 8182), timeout = 10, source_address = None 03:58:01 socket_options = [(6, 1, 1)] 03:58:01 03:58:01 def create_connection( 03:58:01 address: tuple[str, int], 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 source_address: tuple[str, int] | None = None, 03:58:01 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 03:58:01 ) -> socket.socket: 03:58:01 """Connect to *address* and return the socket object. 03:58:01 03:58:01 Convenience function. Connect to *address* (a 2-tuple ``(host, 03:58:01 port)``) and return the socket object. 
Passing the optional 03:58:01 *timeout* parameter will set the timeout on the socket instance 03:58:01 before attempting to connect. If no *timeout* is supplied, the 03:58:01 global default timeout setting returned by :func:`socket.getdefaulttimeout` 03:58:01 is used. If *source_address* is set it must be a tuple of (host, port) 03:58:01 for the socket to bind as a source address before making the connection. 03:58:01 An host of '' or port 0 tells the OS to use the default. 03:58:01 """ 03:58:01 03:58:01 host, port = address 03:58:01 if host.startswith("["): 03:58:01 host = host.strip("[]") 03:58:01 err = None 03:58:01 03:58:01 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 03:58:01 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 03:58:01 # The original create_connection function always returns all records. 03:58:01 family = allowed_gai_family() 03:58:01 03:58:01 try: 03:58:01 host.encode("idna") 03:58:01 except UnicodeError: 03:58:01 raise LocationParseError(f"'{host}', label empty or too long") from None 03:58:01 03:58:01 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 03:58:01 af, socktype, proto, canonname, sa = res 03:58:01 sock = None 03:58:01 try: 03:58:01 sock = socket.socket(af, socktype, proto) 03:58:01 03:58:01 # If provided, set socket level options before connecting. 03:58:01 _set_socket_options(sock, socket_options) 03:58:01 03:58:01 if timeout is not _DEFAULT_TIMEOUT: 03:58:01 sock.settimeout(timeout) 03:58:01 if source_address: 03:58:01 sock.bind(source_address) 03:58:01 > sock.connect(sa) 03:58:01 E ConnectionRefusedError: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 03:58:01 body = None 03:58:01 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 03:58:01 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 redirect = False, assert_same_host = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 03:58:01 release_conn = False, chunked = False, body_pos = None, preload_content = False 03:58:01 decode_content = False, response_kw = {} 03:58:01 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None) 03:58:01 destination_scheme = None, conn = None, release_this_conn = True 03:58:01 http_tunnel_required = False, err = None, clean_exit = False 03:58:01 03:58:01 def urlopen( # type: ignore[override] 03:58:01 self, 03:58:01 method: str, 03:58:01 url: str, 03:58:01 body: _TYPE_BODY | None = None, 03:58:01 headers: typing.Mapping[str, str] | None = None, 03:58:01 retries: Retry | bool | int | None = None, 03:58:01 redirect: bool = True, 03:58:01 assert_same_host: bool = True, 03:58:01 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 03:58:01 pool_timeout: int | None = None, 03:58:01 release_conn: bool | None = None, 03:58:01 chunked: bool = False, 03:58:01 body_pos: _TYPE_BODY_POSITION | None = None, 03:58:01 preload_content: bool = True, 
03:58:01 decode_content: bool = True, 03:58:01 **response_kw: typing.Any, 03:58:01 ) -> BaseHTTPResponse: 03:58:01 """ 03:58:01 Get a connection from the pool and perform an HTTP request. This is the 03:58:01 lowest level call for making a request, so you'll need to specify all 03:58:01 the raw details. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 More commonly, it's appropriate to use a convenience method 03:58:01 such as :meth:`request`. 03:58:01 03:58:01 .. note:: 03:58:01 03:58:01 `release_conn` will only behave as expected if 03:58:01 `preload_content=False` because we want to make 03:58:01 `preload_content=False` the default behaviour someday soon without 03:58:01 breaking backwards compatibility. 03:58:01 03:58:01 :param method: 03:58:01 HTTP request method (such as GET, POST, PUT, etc.) 03:58:01 03:58:01 :param url: 03:58:01 The URL to perform the request on. 03:58:01 03:58:01 :param body: 03:58:01 Data to send in the request body, either :class:`str`, :class:`bytes`, 03:58:01 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 03:58:01 03:58:01 :param headers: 03:58:01 Dictionary of custom headers to send, such as User-Agent, 03:58:01 If-None-Match, etc. If None, pool headers are used. If provided, 03:58:01 these headers completely replace any pool-specific headers. 03:58:01 03:58:01 :param retries: 03:58:01 Configure the number of retries to allow before raising a 03:58:01 :class:`~urllib3.exceptions.MaxRetryError` exception. 03:58:01 03:58:01 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 03:58:01 :class:`~urllib3.util.retry.Retry` object for fine-grained control 03:58:01 over different types of retries. 03:58:01 Pass an integer number to retry connection errors that many times, 03:58:01 but no other types of errors. Pass zero to never retry. 03:58:01 03:58:01 If ``False``, then retries are disabled and any exception is raised 03:58:01 immediately. Also, instead of raising a MaxRetryError on redirects, 03:58:01 the redirect response will be returned. 03:58:01 03:58:01 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 03:58:01 03:58:01 :param redirect: 03:58:01 If True, automatically handle redirects (status codes 301, 302, 03:58:01 303, 307, 308). Each redirect counts as a retry. Disabling retries 03:58:01 will disable redirect, too. 03:58:01 03:58:01 :param assert_same_host: 03:58:01 If ``True``, will make sure that the host of the pool requests is 03:58:01 consistent else will raise HostChangedError. When ``False``, you can 03:58:01 use the pool on an HTTP proxy and request foreign hosts. 03:58:01 03:58:01 :param timeout: 03:58:01 If specified, overrides the default timeout for this one 03:58:01 request. It may be a float (in seconds) or an instance of 03:58:01 :class:`urllib3.util.Timeout`. 03:58:01 03:58:01 :param pool_timeout: 03:58:01 If set and the pool is set to block=True, then this method will 03:58:01 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 03:58:01 connection is available within the time period. 03:58:01 03:58:01 :param bool preload_content: 03:58:01 If True, the response's body will be preloaded into memory. 03:58:01 03:58:01 :param bool decode_content: 03:58:01 If True, will attempt to decode the body based on the 03:58:01 'content-encoding' header. 
03:58:01 03:58:01 :param release_conn: 03:58:01 If False, then the urlopen call will not release the connection 03:58:01 back into the pool once a response is received (but will release if 03:58:01 you read the entire contents of the response such as when 03:58:01 `preload_content=True`). This is useful if you're not preloading 03:58:01 the response's content immediately. You will need to call 03:58:01 ``r.release_conn()`` on the response ``r`` to return the connection 03:58:01 back into the pool. If None, it takes the value of ``preload_content`` 03:58:01 which defaults to ``True``. 03:58:01 03:58:01 :param bool chunked: 03:58:01 If True, urllib3 will send the body using chunked transfer 03:58:01 encoding. Otherwise, urllib3 will send the body using the standard 03:58:01 content-length form. Defaults to False. 03:58:01 03:58:01 :param int body_pos: 03:58:01 Position to seek to in file-like body in the event of a retry or 03:58:01 redirect. Typically this won't need to be set because urllib3 will 03:58:01 auto-populate the value when needed. 03:58:01 """ 03:58:01 parsed_url = parse_url(url) 03:58:01 destination_scheme = parsed_url.scheme 03:58:01 03:58:01 if headers is None: 03:58:01 headers = self.headers 03:58:01 03:58:01 if not isinstance(retries, Retry): 03:58:01 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 03:58:01 03:58:01 if release_conn is None: 03:58:01 release_conn = preload_content 03:58:01 03:58:01 # Check host 03:58:01 if assert_same_host and not self.is_same_host(url): 03:58:01 raise HostChangedError(self, url, retries) 03:58:01 03:58:01 # Ensure that the URL we're connecting to is properly encoded 03:58:01 if url.startswith("/"): 03:58:01 url = to_str(_encode_target(url)) 03:58:01 else: 03:58:01 url = to_str(parsed_url.url) 03:58:01 03:58:01 conn = None 03:58:01 03:58:01 # Track whether `conn` needs to be released before 03:58:01 # returning/raising/recursing. Update this variable if necessary, and 03:58:01 # leave `release_conn` constant throughout the function. That way, if 03:58:01 # the function recurses, the original value of `release_conn` will be 03:58:01 # passed down into the recursive call, and its value will be respected. 03:58:01 # 03:58:01 # See issue #651 [1] for details. 03:58:01 # 03:58:01 # [1] 03:58:01 release_this_conn = release_conn 03:58:01 03:58:01 http_tunnel_required = connection_requires_http_tunnel( 03:58:01 self.proxy, self.proxy_config, destination_scheme 03:58:01 ) 03:58:01 03:58:01 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 03:58:01 # have to copy the headers dict so we can safely change it without those 03:58:01 # changes being reflected in anyone else's copy. 03:58:01 if not http_tunnel_required: 03:58:01 headers = headers.copy() # type: ignore[attr-defined] 03:58:01 headers.update(self.proxy_headers) # type: ignore[union-attr] 03:58:01 03:58:01 # Must keep the exception bound to a separate variable or else Python 3 03:58:01 # complains about UnboundLocalError. 03:58:01 err = None 03:58:01 03:58:01 # Keep track of whether we cleanly exited the except block. This 03:58:01 # ensures we do proper cleanup in finally. 03:58:01 clean_exit = False 03:58:01 03:58:01 # Rewind body position, if needed. Record current position 03:58:01 # for future rewinds in the event of a redirect/retry. 03:58:01 body_pos = set_file_position(body, body_pos) 03:58:01 03:58:01 try: 03:58:01 # Request a connection from the queue. 
03:58:01 timeout_obj = self._get_timeout(timeout) 03:58:01 conn = self._get_conn(timeout=pool_timeout) 03:58:01 03:58:01 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 03:58:01 03:58:01 # Is this a closed/new connection that requires CONNECT tunnelling? 03:58:01 if self.proxy is not None and http_tunnel_required and conn.is_closed: 03:58:01 try: 03:58:01 self._prepare_proxy(conn) 03:58:01 except (BaseSSLError, OSError, SocketTimeout) as e: 03:58:01 self._raise_timeout( 03:58:01 err=e, url=self.proxy.url, timeout_value=conn.timeout 03:58:01 ) 03:58:01 raise 03:58:01 03:58:01 # If we're going to release the connection in ``finally:``, then 03:58:01 # the response doesn't need to know about the connection. Otherwise 03:58:01 # it will also try to release it and we'll have a double-release 03:58:01 # mess. 03:58:01 response_conn = conn if not release_conn else None 03:58:01 03:58:01 # Make the request on the HTTPConnection object 03:58:01 > response = self._make_request( 03:58:01 conn, 03:58:01 method, 03:58:01 url, 03:58:01 timeout=timeout_obj, 03:58:01 body=body, 03:58:01 headers=headers, 03:58:01 chunked=chunked, 03:58:01 retries=retries, 03:58:01 response_conn=response_conn, 03:58:01 preload_content=preload_content, 03:58:01 decode_content=decode_content, 03:58:01 **response_kw, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 03:58:01 conn.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 03:58:01 self.endheaders() 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 03:58:01 self._send_output(message_body, encode_chunked=encode_chunked) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 03:58:01 self.send(msg) 03:58:01 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 03:58:01 self.connect() 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 03:58:01 self.sock = self._new_conn() 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 03:58:01 def _new_conn(self) -> socket.socket: 03:58:01 """Establish a socket connection and set nodelay settings on it. 03:58:01 03:58:01 :return: New socket connection. 03:58:01 """ 03:58:01 try: 03:58:01 sock = connection.create_connection( 03:58:01 (self._dns_host, self.port), 03:58:01 self.timeout, 03:58:01 source_address=self.source_address, 03:58:01 socket_options=self.socket_options, 03:58:01 ) 03:58:01 except socket.gaierror as e: 03:58:01 raise NameResolutionError(self.host, self, e) from e 03:58:01 except SocketTimeout as e: 03:58:01 raise ConnectTimeoutError( 03:58:01 self, 03:58:01 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 03:58:01 ) from e 03:58:01 03:58:01 except OSError as e: 03:58:01 > raise NewConnectionError( 03:58:01 self, f"Failed to establish a new connection: {e}" 03:58:01 ) from e 03:58:01 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 03:58:01 03:58:01 The above exception was the direct cause of the following exception: 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 
03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 > resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 03:58:01 retries = retries.increment( 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 03:58:01 method = 'GET' 03:58:01 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 03:58:01 response = None 03:58:01 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 03:58:01 _pool = 03:58:01 _stacktrace = 03:58:01 03:58:01 def increment( 03:58:01 self, 03:58:01 method: str | None = None, 03:58:01 url: str | None = None, 03:58:01 response: BaseHTTPResponse | None = None, 03:58:01 error: Exception | None = None, 03:58:01 _pool: ConnectionPool | None = None, 03:58:01 _stacktrace: TracebackType | None = None, 03:58:01 ) -> Self: 03:58:01 """Return a new Retry object with incremented retry counters. 03:58:01 03:58:01 :param response: A response object, or None, if the server did not 03:58:01 return a response. 03:58:01 :type response: :class:`~urllib3.response.BaseHTTPResponse` 03:58:01 :param Exception error: An error encountered during the request, or 03:58:01 None if the response was received successfully. 03:58:01 03:58:01 :return: A new ``Retry`` object. 03:58:01 """ 03:58:01 if self.total is False and error: 03:58:01 # Disabled, indicate to re-raise the error. 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 03:58:01 total = self.total 03:58:01 if total is not None: 03:58:01 total -= 1 03:58:01 03:58:01 connect = self.connect 03:58:01 read = self.read 03:58:01 redirect = self.redirect 03:58:01 status_count = self.status 03:58:01 other = self.other 03:58:01 cause = "unknown" 03:58:01 status = None 03:58:01 redirect_location = None 03:58:01 03:58:01 if error and self._is_connection_error(error): 03:58:01 # Connect retry? 03:58:01 if connect is False: 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif connect is not None: 03:58:01 connect -= 1 03:58:01 03:58:01 elif error and self._is_read_error(error): 03:58:01 # Read retry? 03:58:01 if read is False or method is None or not self._is_method_retryable(method): 03:58:01 raise reraise(type(error), error, _stacktrace) 03:58:01 elif read is not None: 03:58:01 read -= 1 03:58:01 03:58:01 elif error: 03:58:01 # Other retry? 03:58:01 if other is not None: 03:58:01 other -= 1 03:58:01 03:58:01 elif response and response.get_redirect_location(): 03:58:01 # Redirect retry? 
03:58:01 if redirect is not None: 03:58:01 redirect -= 1 03:58:01 cause = "too many redirects" 03:58:01 response_redirect_location = response.get_redirect_location() 03:58:01 if response_redirect_location: 03:58:01 redirect_location = response_redirect_location 03:58:01 status = response.status 03:58:01 03:58:01 else: 03:58:01 # Incrementing because of a server error like a 500 in 03:58:01 # status_forcelist and the given method is in the allowed_methods 03:58:01 cause = ResponseError.GENERIC_ERROR 03:58:01 if response and response.status: 03:58:01 if status_count is not None: 03:58:01 status_count -= 1 03:58:01 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 03:58:01 status = response.status 03:58:01 03:58:01 history = self.history + ( 03:58:01 RequestHistory(method, url, error, status, redirect_location), 03:58:01 ) 03:58:01 03:58:01 new_retry = self.new( 03:58:01 total=total, 03:58:01 connect=connect, 03:58:01 read=read, 03:58:01 redirect=redirect, 03:58:01 status=status_count, 03:58:01 other=other, 03:58:01 history=history, 03:58:01 ) 03:58:01 03:58:01 if new_retry.is_exhausted(): 03:58:01 reason = error or ResponseError(cause) 03:58:01 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 03:58:01 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 03:58:01 03:58:01 During handling of the above exception, another exception occurred: 03:58:01 03:58:01 self = 03:58:01 03:58:01 def test_21_rdm_device_not_connected(self): 03:58:01 > response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None) 03:58:01 03:58:01 transportpce_tests/1.2.1/test01_portmapping.py:223: 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 transportpce_tests/common/test_utils.py:471: in get_portmapping_node_attr 03:58:01 response = get_request(target_url) 03:58:01 transportpce_tests/common/test_utils.py:116: in get_request 03:58:01 return requests.request( 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 03:58:01 return session.request(method=method, url=url, **kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 03:58:01 resp = self.send(prep, **send_kwargs) 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 03:58:01 r = adapter.send(request, **kwargs) 03:58:01 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 03:58:01 03:58:01 self = 03:58:01 request = , stream = False 03:58:01 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 03:58:01 proxies = OrderedDict() 03:58:01 03:58:01 def send( 03:58:01 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 03:58:01 ): 03:58:01 """Sends PreparedRequest object. Returns Response object. 03:58:01 03:58:01 :param request: The :class:`PreparedRequest ` being sent. 03:58:01 :param stream: (optional) Whether to stream the request content. 03:58:01 :param timeout: (optional) How long to wait for the server to send 03:58:01 data before giving up, as a float, or a :ref:`(connect timeout, 03:58:01 read timeout) ` tuple. 
03:58:01 :type timeout: float or tuple or urllib3 Timeout object 03:58:01 :param verify: (optional) Either a boolean, in which case it controls whether 03:58:01 we verify the server's TLS certificate, or a string, in which case it 03:58:01 must be a path to a CA bundle to use 03:58:01 :param cert: (optional) Any user-provided SSL certificate to be trusted. 03:58:01 :param proxies: (optional) The proxies dictionary to apply to the request. 03:58:01 :rtype: requests.Response 03:58:01 """ 03:58:01 03:58:01 try: 03:58:01 conn = self.get_connection_with_tls_context( 03:58:01 request, verify, proxies=proxies, cert=cert 03:58:01 ) 03:58:01 except LocationValueError as e: 03:58:01 raise InvalidURL(e, request=request) 03:58:01 03:58:01 self.cert_verify(conn, request.url, verify, cert) 03:58:01 url = self.request_url(request, proxies) 03:58:01 self.add_headers( 03:58:01 request, 03:58:01 stream=stream, 03:58:01 timeout=timeout, 03:58:01 verify=verify, 03:58:01 cert=cert, 03:58:01 proxies=proxies, 03:58:01 ) 03:58:01 03:58:01 chunked = not (request.body is None or "Content-Length" in request.headers) 03:58:01 03:58:01 if isinstance(timeout, tuple): 03:58:01 try: 03:58:01 connect, read = timeout 03:58:01 timeout = TimeoutSauce(connect=connect, read=read) 03:58:01 except ValueError: 03:58:01 raise ValueError( 03:58:01 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 03:58:01 f"or a single float to set both timeouts to the same value." 03:58:01 ) 03:58:01 elif isinstance(timeout, TimeoutSauce): 03:58:01 pass 03:58:01 else: 03:58:01 timeout = TimeoutSauce(connect=timeout, read=timeout) 03:58:01 03:58:01 try: 03:58:01 resp = conn.urlopen( 03:58:01 method=request.method, 03:58:01 url=url, 03:58:01 body=request.body, 03:58:01 headers=request.headers, 03:58:01 redirect=False, 03:58:01 assert_same_host=False, 03:58:01 preload_content=False, 03:58:01 decode_content=False, 03:58:01 retries=self.max_retries, 03:58:01 timeout=timeout, 03:58:01 chunked=chunked, 03:58:01 ) 03:58:01 03:58:01 except (ProtocolError, OSError) as err: 03:58:01 raise ConnectionError(err, request=request) 03:58:01 03:58:01 except MaxRetryError as e: 03:58:01 if isinstance(e.reason, ConnectTimeoutError): 03:58:01 # TODO: Remove this in 3.0.0: see #2811 03:58:01 if not isinstance(e.reason, NewConnectionError): 03:58:01 raise ConnectTimeout(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, ResponseError): 03:58:01 raise RetryError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _ProxyError): 03:58:01 raise ProxyError(e, request=request) 03:58:01 03:58:01 if isinstance(e.reason, _SSLError): 03:58:01 # This branch is for urllib3 v1.22 and later. 
03:58:01 raise SSLError(e, request=request) 03:58:01 03:58:01 > raise ConnectionError(e, request=request) 03:58:01 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 03:58:01 03:58:01 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 03:58:01 ----------------------------- Captured stdout call ----------------------------- 03:58:01 execution of test_21_rdm_device_not_connected 03:58:01 --------------------------- Captured stdout teardown --------------------------- 03:58:01 all processes killed 03:58:01 =========================== short test summary info ============================ 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_04_rdm_portmapping_DEG1_TTP_TXRX 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_05_rdm_portmapping_SRG1_PP7_TXRX 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_06_rdm_portmapping_SRG3_PP1_TXRX 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_07_xpdr_device_connection 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_08_xpdr_device_connected 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_09_xpdr_portmapping_info 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_10_xpdr_portmapping_NETWORK1 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_11_xpdr_portmapping_NETWORK2 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_12_xpdr_portmapping_CLIENT1 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_13_xpdr_portmapping_CLIENT2 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_14_xpdr_portmapping_CLIENT3 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_15_xpdr_portmapping_CLIENT4 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_16_xpdr_device_disconnection 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_17_xpdr_device_disconnected 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_18_xpdr_device_not_connected 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_19_rdm_device_disconnection 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_20_rdm_device_disconnected 03:58:01 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_21_rdm_device_not_connected 03:58:01 18 failed, 3 passed in 265.62s (0:04:25) 03:58:01 tests121: exit 1 (266.23 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1 pid=36810 03:58:20 ............ 
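[annotation] The failure mode above is the same for all 18 tests121 failures: requests' default HTTPAdapter carries Retry(total=0), so the first "[Errno 111] Connection refused" on localhost:8182 is surfaced immediately as a ConnectionError rather than retried. A minimal standalone sketch of the failing call (assumptions: plain HTTP, no auth headers, the test_utils wrapper omitted; scheme, path and the 10 s connect/read timeouts are taken from the trace above):

# Minimal sketch, not part of the job: reproduce the GET that fails above when
# no controller is listening on localhost:8182.
import requests

URL = ("http://localhost:8182/rests/data/"
       "transportpce-portmapping:network/nodes=ROADMA01/node-info")

try:
    response = requests.get(URL, timeout=(10, 10))  # (connect, read) as in the trace
    print(response.status_code, response.text[:200])
except requests.exceptions.ConnectionError as exc:
    # With nothing bound to port 8182 this is the expected outcome:
    # MaxRetryError caused by NewConnectionError ([Errno 111] Connection refused).
    print(f"controller unreachable: {exc}")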
[100%] 03:58:34 12 passed in 49.58s 03:58:34 pytest -q transportpce_tests/7.1/test02_otn_renderer.py 03:59:04 .............................................................. [100%] 04:01:16 62 passed in 161.63s (0:02:41) 04:01:16 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 04:01:57 ................................................ [100%] 04:03:43 48 passed in 144.93s (0:02:24) 04:03:43 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 04:04:07 ...................... [100%] 04:04:55 22 passed in 73.43s (0:01:13) 04:04:55 tests121: FAIL ✖ in 4 minutes 37.69 seconds 04:04:55 tests71: OK ✔ in 7 minutes 17.69 seconds 04:04:55 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 04:05:03 tests221: freeze> python -m pip freeze --all 04:05:04 tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 04:05:04 tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 2.2.1 04:05:04 using environment variables from ./karaf221.env 04:05:04 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 04:05:57 ................................... [100%] 04:06:37 35 passed in 92.78s (0:01:32) 04:06:37 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 04:07:12 ...... [100%] 04:07:25 6 passed in 48.00s 04:07:25 pytest -q transportpce_tests/2.2.1/test03_topology.py 04:08:21 ............................................ [100%] 04:09:59 44 passed in 153.05s (0:02:33) 04:09:59 pytest -q transportpce_tests/2.2.1/test04_otn_topology.py 04:10:41 ............ [100%] 04:11:05 12 passed in 65.94s (0:01:05) 04:11:05 pytest -q transportpce_tests/2.2.1/test05_flex_grid.py 04:11:32 ................ [100%] 04:13:01 16 passed in 115.71s (0:01:55) 04:13:01 pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py 04:13:32 ............................... [100%] 04:13:39 31 passed in 38.05s 04:13:39 pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py 04:14:16 .......................... [100%] 04:15:12 26 passed in 92.40s (0:01:32) 04:15:12 pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py 04:15:52 ...................... [100%] 04:16:56 22 passed in 103.69s (0:01:43) 04:16:56 pytest -q transportpce_tests/2.2.1/test09_olm.py 04:17:38 ........................................ [100%] 04:23:01 40 passed in 364.80s (0:06:04) 04:23:01 pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py 04:23:49 ........................................................................ [ 74%] 04:29:28 ......................... [100%] 04:31:20 97 passed in 499.26s (0:08:19) 04:31:20 pytest -q transportpce_tests/2.2.1/test12_end2end.py 04:32:03 ...................................................... [100%] 04:38:50 54 passed in 449.61s (0:07:29) 04:38:50 pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py 04:39:45 ........................................................................ [ 71%] 04:44:53 ............................. 
[100%] 04:47:02 101 passed in 492.11s (0:08:12) 04:47:02 pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py 04:47:57 ........................................................................ [ 67%] 04:53:44 ................................... [100%] 04:57:05 107 passed in 601.84s (0:10:01) 04:57:05 tests221: OK ✔ in 52 minutes 9.71 seconds 04:57:05 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 04:57:11 tests_hybrid: freeze> python -m pip freeze --all 04:57:11 tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 04:57:11 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh hybrid 04:57:11 using environment variables from ./karaf121.env 04:57:11 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 04:57:56 ................................................... [100%] 04:59:43 51 passed in 151.87s (0:02:31) 04:59:43 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 05:00:26 ........................................................................ [ 66%] 05:04:46 ..................................... [100%] 05:06:52 109 passed in 428.64s (0:07:08) 05:06:52 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 05:07:39 ..................................................... 
[100%] 05:11:11 53 passed in 258.92s (0:04:18) 05:11:11 tests_hybrid: OK ✔ in 14 minutes 6.58 seconds 05:11:11 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt 05:11:17 buildlighty: freeze> python -m pip freeze --all 05:11:17 buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 05:11:17 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh 05:11:17 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED 05:11:29 [ERROR] COMPILATION ERROR : 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 05:11:29 symbol: class YangModuleInfo 05:11:29 location: package org.opendaylight.yangtools.binding 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 05:11:29 symbol: class YangModuleInfo 05:11:29 location: class io.lighty.controllers.tpce.utils.TPCEUtils 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 05:11:29 symbol: class YangModuleInfo 05:11:29 location: class io.lighty.controllers.tpce.utils.TPCEUtils 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 05:11:29 symbol: class YangModuleInfo 05:11:29 location: class io.lighty.controllers.tpce.utils.TPCEUtils 05:11:29 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure: 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol 05:11:29 [ERROR] symbol: class YangModuleInfo 05:11:29 [ERROR] location: package org.opendaylight.yangtools.binding 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol 05:11:29 [ERROR] symbol: class YangModuleInfo 05:11:29 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol 05:11:29 [ERROR] symbol: class YangModuleInfo 05:11:29 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 05:11:29 [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol 05:11:29 [ERROR] symbol: class YangModuleInfo 05:11:29 [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils 05:11:29 [ERROR] -> [Help 1] 05:11:29 [ERROR] 05:11:29 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. 
05:11:29 [ERROR] Re-run Maven using the -X switch to enable full debug logging. 05:11:29 [ERROR] 05:11:29 [ERROR] For more information about the errors and possible solutions, please read the following articles: 05:11:29 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException 05:11:29 unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP. 05:11:29 buildlighty: exit 9 (11.75 seconds) /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh pid=61053 05:11:29 buildlighty: command failed but is marked ignore outcome so handling it as success 05:11:29 buildcontroller: OK (116.49=setup[8.08]+cmd[108.41] seconds) 05:11:29 testsPCE: OK (369.19=setup[97.79]+cmd[271.41] seconds) 05:11:29 sims: OK (11.31=setup[8.77]+cmd[2.54] seconds) 05:11:29 build_karaf_tests121: OK (63.60=setup[8.74]+cmd[54.86] seconds) 05:11:29 tests121: FAIL code 1 (277.69=setup[11.46]+cmd[266.23] seconds) 05:11:29 build_karaf_tests221: OK (61.61=setup[8.81]+cmd[52.80] seconds) 05:11:29 tests_tapi: FAIL code 1 (559.24=setup[15.58]+cmd[543.66] seconds) 05:11:29 tests221: OK (3129.71=setup[9.11]+cmd[3120.60] seconds) 05:11:29 build_karaf_tests71: OK (74.92=setup[12.62]+cmd[62.30] seconds) 05:11:29 tests71: OK (437.69=setup[6.95]+cmd[430.74] seconds) 05:11:29 build_karaf_tests_hybrid: OK (67.31=setup[17.33]+cmd[49.98] seconds) 05:11:29 tests_hybrid: OK (846.58=setup[6.46]+cmd[840.12] seconds) 05:11:29 buildlighty: OK (18.09=setup[6.33]+cmd[11.75] seconds) 05:11:29 docs: OK (33.56=setup[31.10]+cmd[2.46] seconds) 05:11:29 docs-linkcheck: OK (35.00=setup[31.05]+cmd[3.95] seconds) 05:11:29 checkbashisms: OK (2.70=setup[1.87]+cmd[0.00,0.05,0.78] seconds) 05:11:29 pre-commit: OK (60.89=setup[3.19]+cmd[0.01,0.01,34.52,23.16] seconds) 05:11:29 pylint: OK (36.80=setup[6.14]+cmd[30.66] seconds) 05:11:29 evaluation failed :( (5171.48 seconds) 05:11:29 + tox_status=255 05:11:29 + echo '---> Completed tox runs' 05:11:29 ---> Completed tox runs 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/build_karaf_tests121/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=build_karaf_tests121 05:11:29 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests121 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/build_karaf_tests221/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=build_karaf_tests221 05:11:29 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests221 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/build_karaf_tests71/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=build_karaf_tests71 05:11:29 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests71 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/build_karaf_tests_hybrid/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=build_karaf_tests_hybrid 05:11:29 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests_hybrid 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/buildcontroller/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=buildcontroller 05:11:29 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/buildcontroller 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/buildlighty/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=buildlighty 05:11:29 + cp -r 
.tox/buildlighty/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/buildlighty 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/checkbashisms/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=checkbashisms 05:11:29 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/checkbashisms 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/docs-linkcheck/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=docs-linkcheck 05:11:29 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs-linkcheck 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/docs/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=docs 05:11:29 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/pre-commit/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=pre-commit 05:11:29 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pre-commit 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/pylint/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=pylint 05:11:29 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pylint 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/sims/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=sims 05:11:29 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/sims 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/tests121/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=tests121 05:11:29 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests121 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/tests221/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=tests221 05:11:29 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests221 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/tests71/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=tests71 05:11:29 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests71 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/testsPCE/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=testsPCE 05:11:29 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/testsPCE 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/tests_hybrid/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=tests_hybrid 05:11:29 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_hybrid 05:11:29 + for i in .tox/*/log 05:11:29 ++ echo .tox/tests_tapi/log 05:11:29 ++ awk -F/ '{print $2}' 05:11:29 + tox_env=tests_tapi 05:11:29 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_tapi 05:11:29 + DOC_DIR=docs/_build/html 05:11:29 + [[ -d docs/_build/html ]] 05:11:29 + echo '---> Archiving generated docs' 05:11:29 ---> Archiving generated docs 05:11:29 + mv docs/_build/html /w/workspace/transportpce-tox-verify-scandium/archives/docs 05:11:29 + echo '---> tox-run.sh ends' 05:11:29 ---> tox-run.sh ends 05:11:29 + test 255 -eq 0 05:11:29 + exit 255 05:11:29 ++ '[' 1 = 1 ']' 05:11:29 ++ '[' -x /usr/bin/clear_console ']' 05:11:29 ++ /usr/bin/clear_console -q 05:11:29 Build step 'Execute shell' marked build as failure 05:11:29 $ ssh-agent -k 05:11:29 unset SSH_AUTH_SOCK; 05:11:29 unset SSH_AGENT_PID; 05:11:29 echo Agent pid 13201 killed; 05:11:29 
[ssh-agent] Stopped. 05:11:29 [PostBuildScript] - [INFO] Executing post build scripts. 05:11:29 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins3945352519067172083.sh 05:11:29 ---> sysstat.sh 05:11:30 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins15160724382988039299.sh 05:11:30 ---> package-listing.sh 05:11:30 ++ tr '[:upper:]' '[:lower:]' 05:11:30 ++ facter osfamily 05:11:30 + OS_FAMILY=debian 05:11:30 + workspace=/w/workspace/transportpce-tox-verify-scandium 05:11:30 + START_PACKAGES=/tmp/packages_start.txt 05:11:30 + END_PACKAGES=/tmp/packages_end.txt 05:11:30 + DIFF_PACKAGES=/tmp/packages_diff.txt 05:11:30 + PACKAGES=/tmp/packages_start.txt 05:11:30 + '[' /w/workspace/transportpce-tox-verify-scandium ']' 05:11:30 + PACKAGES=/tmp/packages_end.txt 05:11:30 + case "${OS_FAMILY}" in 05:11:30 + dpkg -l 05:11:30 + grep '^ii' 05:11:30 + '[' -f /tmp/packages_start.txt ']' 05:11:30 + '[' -f /tmp/packages_end.txt ']' 05:11:30 + diff /tmp/packages_start.txt /tmp/packages_end.txt 05:11:30 + '[' /w/workspace/transportpce-tox-verify-scandium ']' 05:11:30 + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/ 05:11:30 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-scandium/archives/ 05:11:30 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins11158554755514788452.sh 05:11:30 ---> capture-instance-metadata.sh 05:11:30 Setup pyenv: 05:11:30 system 05:11:30 3.8.13 05:11:30 3.9.13 05:11:30 3.10.13 05:11:30 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 05:11:31 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yoJP from file:/tmp/.os_lf_venv 05:11:32 lf-activate-venv(): INFO: Installing: lftools 05:11:43 lf-activate-venv(): INFO: Adding /tmp/venv-yoJP/bin to PATH 05:11:43 INFO: Running in OpenStack, capturing instance metadata 05:11:43 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins13632800132776712826.sh 05:11:43 provisioning config files... 05:11:44 Could not find credentials [logs] for transportpce-tox-verify-scandium #13 05:11:44 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-scandium@tmp/config6984626941741688081tmp 05:11:44 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] 05:11:44 Run condition [Regular expression match] enabling perform for step [Provide Configuration files] 05:11:44 provisioning config files... 05:11:44 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials 05:11:44 [EnvInject] - Injecting environment variables from a build step. 05:11:44 [EnvInject] - Injecting as environment variables the properties content 05:11:44 SERVER_ID=logs 05:11:44 05:11:44 [EnvInject] - Variables injected successfully. 05:11:44 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins10757106731058580847.sh 05:11:44 ---> create-netrc.sh 05:11:44 WARN: Log server credential not found. 
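[annotation] The package-listing step traced above snapshots the installed dpkg packages (lines starting with "ii"), diffs the end-of-job list against the start-of-job list, and copies all three files into archives/. A rough Python equivalent of that step, offered only as an illustration (file paths taken from the trace; the real job does this with dpkg, grep and diff(1) in shell):

# Hypothetical sketch of the package-listing step above.
import subprocess
from pathlib import Path

def installed_packages() -> list[str]:
    out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if line.startswith("ii")]

start_file = Path("/tmp/packages_start.txt")
end_file = Path("/tmp/packages_end.txt")
diff_file = Path("/tmp/packages_diff.txt")

end = installed_packages()
end_file.write_text("\n".join(end) + "\n")
start = start_file.read_text().splitlines() if start_file.exists() else []
# Symmetric difference stands in for diff(1): anything added or removed during the build.
diff_file.write_text("\n".join(sorted(set(end) ^ set(start))) + "\n")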
05:11:44 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins5189683533259439168.sh 05:11:44 ---> python-tools-install.sh 05:11:44 Setup pyenv: 05:11:44 system 05:11:44 3.8.13 05:11:44 3.9.13 05:11:44 3.10.13 05:11:44 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 05:11:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yoJP from file:/tmp/.os_lf_venv 05:11:45 lf-activate-venv(): INFO: Installing: lftools 05:11:53 lf-activate-venv(): INFO: Adding /tmp/venv-yoJP/bin to PATH 05:11:53 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins5131500166899026665.sh 05:11:53 ---> sudo-logs.sh 05:11:53 Archiving 'sudo' log.. 05:11:54 [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins9074415148591506616.sh 05:11:54 ---> job-cost.sh 05:11:54 Setup pyenv: 05:11:54 system 05:11:54 3.8.13 05:11:54 3.9.13 05:11:54 3.10.13 05:11:54 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 05:11:54 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yoJP from file:/tmp/.os_lf_venv 05:11:55 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 05:12:00 lf-activate-venv(): INFO: Adding /tmp/venv-yoJP/bin to PATH 05:12:00 INFO: No Stack... 05:12:00 INFO: Retrieving Pricing Info for: v3-standard-4 05:12:01 INFO: Archiving Costs 05:12:01 [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins13553555948306099594.sh 05:12:01 ---> logs-deploy.sh 05:12:01 Setup pyenv: 05:12:01 system 05:12:01 3.8.13 05:12:01 3.9.13 05:12:01 3.10.13 05:12:01 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) 05:12:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yoJP from file:/tmp/.os_lf_venv 05:12:02 lf-activate-venv(): INFO: Installing: lftools 05:12:10 lf-activate-venv(): INFO: Adding /tmp/venv-yoJP/bin to PATH 05:12:10 WARNING: Nexus logging server not set 05:12:10 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-scandium/13/ 05:12:10 INFO: archiving logs to S3 05:12:12 ---> uname -a: 05:12:12 Linux prd-ubuntu2004-docker-4c-16g-25087 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 05:12:12 05:12:12 05:12:12 ---> lscpu: 05:12:12 Architecture: x86_64 05:12:12 CPU op-mode(s): 32-bit, 64-bit 05:12:12 Byte Order: Little Endian 05:12:12 Address sizes: 40 bits physical, 48 bits virtual 05:12:12 CPU(s): 4 05:12:12 On-line CPU(s) list: 0-3 05:12:12 Thread(s) per core: 1 05:12:12 Core(s) per socket: 1 05:12:12 Socket(s): 4 05:12:12 NUMA node(s): 1 05:12:12 Vendor ID: AuthenticAMD 05:12:12 CPU family: 23 05:12:12 Model: 49 05:12:12 Model name: AMD EPYC-Rome Processor 05:12:12 Stepping: 0 05:12:12 CPU MHz: 2799.998 05:12:12 BogoMIPS: 5599.99 05:12:12 Virtualization: AMD-V 05:12:12 Hypervisor vendor: KVM 05:12:12 Virtualization type: full 05:12:12 L1d cache: 128 KiB 05:12:12 L1i cache: 128 KiB 05:12:12 L2 cache: 2 MiB 05:12:12 L3 cache: 64 MiB 05:12:12 NUMA node0 CPU(s): 0-3 05:12:12 Vulnerability Gather data sampling: Not affected 05:12:12 Vulnerability Itlb multihit: Not affected 05:12:12 Vulnerability L1tf: Not affected 05:12:12 Vulnerability Mds: Not affected 05:12:12 Vulnerability Meltdown: Not affected 05:12:12 Vulnerability Mmio stale data: Not affected 05:12:12 Vulnerability Retbleed: Vulnerable 05:12:12 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp 05:12:12 Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 05:12:12 
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected 05:12:12 Vulnerability Srbds: Not affected 05:12:12 Vulnerability Tsx async abort: Not affected 05:12:12 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities 05:12:12 05:12:12 05:12:12 ---> nproc: 05:12:12 4 05:12:12 05:12:12 05:12:12 ---> df -h: 05:12:12 Filesystem Size Used Avail Use% Mounted on 05:12:12 udev 7.8G 0 7.8G 0% /dev 05:12:12 tmpfs 1.6G 1.1M 1.6G 1% /run 05:12:12 /dev/vda1 78G 17G 62G 21% / 05:12:12 tmpfs 7.9G 0 7.9G 0% /dev/shm 05:12:12 tmpfs 5.0M 0 5.0M 0% /run/lock 05:12:12 tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup 05:12:12 /dev/loop0 68M 68M 0 100% /snap/lxd/22753 05:12:12 /dev/loop1 62M 62M 0 100% /snap/core20/1405 05:12:12 /dev/vda15 105M 6.1M 99M 6% /boot/efi 05:12:12 tmpfs 1.6G 0 1.6G 0% /run/user/1001 05:12:12 /dev/loop3 39M 39M 0 100% /snap/snapd/21759 05:12:12 /dev/loop4 64M 64M 0 100% /snap/core20/2379 05:12:12 /dev/loop5 92M 92M 0 100% /snap/lxd/29619 05:12:12 05:12:12 05:12:12 ---> free -m: 05:12:12 total used free shared buff/cache available 05:12:12 Mem: 15997 656 6096 1 9245 15002 05:12:12 Swap: 1023 0 1023 05:12:12 05:12:12 05:12:12 ---> ip addr: 05:12:12 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 05:12:12 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 05:12:12 inet 127.0.0.1/8 scope host lo 05:12:12 valid_lft forever preferred_lft forever 05:12:12 inet6 ::1/128 scope host 05:12:12 valid_lft forever preferred_lft forever 05:12:12 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 05:12:12 link/ether fa:16:3e:93:8d:ee brd ff:ff:ff:ff:ff:ff 05:12:12 inet 10.30.170.12/23 brd 10.30.171.255 scope global dynamic ens3 05:12:12 valid_lft 81030sec preferred_lft 81030sec 05:12:12 inet6 fe80::f816:3eff:fe93:8dee/64 scope link 05:12:12 valid_lft forever preferred_lft forever 05:12:12 3: docker0: mtu 1458 qdisc noqueue state DOWN group default 05:12:12 link/ether 02:42:98:34:29:04 brd ff:ff:ff:ff:ff:ff 05:12:12 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 05:12:12 valid_lft forever preferred_lft forever 05:12:12 05:12:12 05:12:12 ---> sar -b -r -n DEV: 05:12:12 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-25087) 09/21/24 _x86_64_ (4 CPU) 05:12:12 05:12:12 03:42:45 LINUX RESTART (4 CPU) 05:12:12 05:12:12 03:43:01 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 05:12:12 03:44:02 223.20 145.33 77.87 0.00 11569.14 71248.93 0.00 05:12:12 03:45:01 105.80 32.05 73.75 0.00 1227.18 37900.17 0.00 05:12:12 03:46:01 187.52 36.32 151.20 0.00 2488.50 26507.03 0.00 05:12:12 03:47:01 106.41 6.76 99.65 0.00 314.70 40636.72 0.00 05:12:12 03:48:01 132.83 1.60 131.22 0.00 120.99 73723.37 0.00 05:12:12 03:49:01 171.95 13.94 158.00 0.00 4774.68 120599.70 0.00 05:12:12 03:50:01 124.93 2.10 122.83 0.00 109.85 46196.43 0.00 05:12:12 03:51:01 47.86 1.70 46.16 0.00 52.68 821.28 0.00 05:12:12 03:52:01 83.36 3.63 79.72 0.00 576.07 1370.21 0.00 
05:12:12 03:53:01 63.61 0.18 63.42 0.00 30.66 1437.63 0.00 05:12:12 03:54:01 115.11 0.37 114.75 0.00 17.19 3778.34 0.00 05:12:12 03:55:01 25.73 0.02 25.71 0.00 0.27 7225.46 0.00 05:12:12 03:56:01 2.47 0.02 2.45 0.00 0.13 60.79 0.00 05:12:12 03:57:01 2.30 0.00 2.30 0.00 0.00 36.79 0.00 05:12:12 03:58:01 10.85 1.02 9.83 0.00 25.46 1078.22 0.00 05:12:12 03:59:01 136.22 0.00 136.22 0.00 0.00 10146.35 0.00 05:12:12 04:00:01 2.03 0.00 2.03 0.00 0.00 38.53 0.00 05:12:12 04:01:01 2.50 0.00 2.50 0.00 0.00 44.66 0.00 05:12:12 04:02:01 37.16 0.00 37.16 0.00 0.00 577.77 0.00 05:12:12 04:03:01 2.67 0.00 2.67 0.00 0.00 51.59 0.00 05:12:12 04:04:01 15.78 0.00 15.78 0.00 0.00 263.02 0.00 05:12:12 04:05:02 13.01 0.03 12.98 0.00 0.27 810.80 0.00 05:12:12 04:06:01 32.05 0.02 32.03 0.00 0.14 2142.15 0.00 05:12:12 04:07:01 24.80 0.00 24.80 0.00 0.00 404.07 0.00 05:12:12 04:08:01 64.73 0.00 64.73 0.00 0.00 1171.34 0.00 05:12:12 04:09:01 7.15 0.00 7.15 0.00 0.00 124.65 0.00 05:12:12 04:10:01 2.08 0.00 2.08 0.00 0.00 43.05 0.00 05:12:12 04:11:01 68.71 0.00 68.71 0.00 0.00 1006.90 0.00 05:12:12 04:12:01 67.08 0.00 67.08 0.00 0.00 991.54 0.00 05:12:12 04:13:01 2.28 0.00 2.28 0.00 0.00 33.73 0.00 05:12:12 04:14:01 99.17 0.00 99.17 0.00 0.00 1468.69 0.00 05:12:12 04:15:01 47.26 0.00 47.26 0.00 0.00 678.29 0.00 05:12:12 04:16:01 56.69 0.00 56.69 0.00 0.00 836.93 0.00 05:12:12 04:17:01 15.98 0.05 15.93 0.00 1.07 263.82 0.00 05:12:12 04:18:01 53.28 0.00 53.28 0.00 0.00 769.21 0.00 05:12:12 04:19:01 3.47 0.00 3.47 0.00 0.00 66.92 0.00 05:12:12 04:20:01 3.35 0.00 3.35 0.00 0.00 63.86 0.00 05:12:12 04:21:01 2.02 0.00 2.02 0.00 0.00 25.06 0.00 05:12:12 04:22:01 1.48 0.00 1.48 0.00 0.00 18.00 0.00 05:12:12 04:23:01 2.20 0.00 2.20 0.00 0.00 33.46 0.00 05:12:12 04:24:01 42.73 0.00 42.73 0.00 0.00 663.62 0.00 05:12:12 04:25:01 2.87 0.00 2.87 0.00 0.00 63.06 0.00 05:12:12 04:26:01 2.52 0.00 2.52 0.00 0.00 43.19 0.00 05:12:12 04:27:01 2.77 0.00 2.77 0.00 0.00 47.32 0.00 05:12:12 04:28:01 1.57 0.00 1.57 0.00 0.00 30.00 0.00 05:12:12 04:29:01 2.67 0.00 2.67 0.00 0.00 54.26 0.00 05:12:12 04:30:01 1.95 0.00 1.95 0.00 0.00 37.46 0.00 05:12:12 04:31:01 1.47 0.00 1.47 0.00 0.00 27.60 0.00 05:12:12 04:32:01 70.78 0.00 70.78 0.00 0.00 1043.12 0.00 05:12:12 04:33:01 2.95 0.00 2.95 0.00 0.00 76.39 0.00 05:12:12 04:34:01 2.03 0.00 2.03 0.00 0.00 36.13 0.00 05:12:12 04:35:01 1.90 0.00 1.90 0.00 0.00 53.58 0.00 05:12:12 04:36:01 2.73 0.00 2.73 0.00 0.00 62.66 0.00 05:12:12 04:37:01 1.63 0.00 1.63 0.00 0.00 37.06 0.00 05:12:12 04:38:01 2.70 0.00 2.70 0.00 0.00 63.72 0.00 05:12:12 04:39:01 15.76 0.00 15.76 0.00 0.00 282.09 0.00 05:12:12 04:40:01 58.31 0.00 58.31 0.00 0.00 835.45 0.00 05:12:12 04:41:01 3.08 0.00 3.08 0.00 0.00 70.65 0.00 05:12:12 04:42:01 2.38 0.00 2.38 0.00 0.00 58.39 0.00 05:12:12 04:43:01 3.40 0.00 3.40 0.00 0.00 57.72 0.00 05:12:12 04:44:01 2.02 0.00 2.02 0.00 0.00 38.13 0.00 05:12:12 04:45:01 3.62 0.00 3.62 0.00 0.00 70.25 0.00 05:12:12 04:46:01 2.47 0.00 2.47 0.00 0.00 53.72 0.00 05:12:12 04:47:01 3.22 0.00 3.22 0.00 0.00 49.86 0.00 05:12:12 04:48:01 80.64 0.00 80.64 0.00 0.00 1175.14 0.00 05:12:12 04:49:01 2.80 0.00 2.80 0.00 0.00 61.32 0.00 05:12:12 04:50:01 1.95 0.00 1.95 0.00 0.00 49.99 0.00 05:12:12 04:51:01 3.02 0.00 3.02 0.00 0.00 65.72 0.00 05:12:12 04:52:01 2.37 0.00 2.37 0.00 0.00 38.39 0.00 05:12:12 04:53:01 3.22 0.00 3.22 0.00 0.00 56.12 0.00 05:12:12 04:54:01 1.88 0.00 1.88 0.00 0.00 39.99 0.00 05:12:12 04:55:01 1.85 0.00 1.85 0.00 0.00 33.33 0.00 05:12:12 04:56:01 2.30 0.00 2.30 0.00 0.00 49.06 0.00 05:12:12 
04:57:01 1.68 0.00 1.68 0.00 0.00 37.06 0.00 05:12:12 04:58:01 104.30 0.00 104.30 0.00 0.00 9965.81 0.00 05:12:12 04:59:01 4.63 0.00 4.63 0.00 0.00 144.49 0.00 05:12:12 05:00:01 23.93 0.00 23.93 0.00 0.00 690.68 0.00 05:12:12 05:01:01 44.34 0.00 44.34 0.00 0.00 666.56 0.00 05:12:12 05:02:01 2.18 0.00 2.18 0.00 0.00 38.53 0.00 05:12:12 05:03:01 1.78 0.00 1.78 0.00 0.00 35.19 0.00 05:12:12 05:04:01 1.82 0.00 1.82 0.00 0.00 36.13 0.00 05:12:12 05:05:01 3.52 0.00 3.52 0.00 0.00 61.32 0.00 05:12:12 05:06:01 1.68 0.00 1.68 0.00 0.00 36.53 0.00 05:12:12 05:07:01 15.73 0.00 15.73 0.00 0.00 364.68 0.00 05:12:12 05:08:01 39.83 0.00 39.83 0.00 0.00 691.62 0.00 05:12:12 05:09:01 2.93 0.00 2.93 0.00 0.00 177.17 0.00 05:12:12 05:10:01 3.18 0.00 3.18 0.00 0.00 80.24 0.00 05:12:12 05:11:01 3.17 0.00 3.17 0.00 0.00 69.46 0.00 05:12:12 05:12:01 49.03 10.68 38.34 0.00 395.40 7312.65 0.00 05:12:12 Average: 31.55 2.87 28.68 0.00 243.73 5400.30 0.00 05:12:12 05:12:12 03:43:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 05:12:12 03:44:02 13614436 15462072 526932 3.22 52112 2004012 1322428 7.59 735328 1776500 91268 05:12:12 03:45:01 13250304 15410740 560624 3.42 74456 2276700 1333916 7.65 831192 2009268 149492 05:12:12 03:46:01 10983132 14674164 1282784 7.83 131836 3635508 2064116 11.84 1733680 3242448 1017208 05:12:12 03:47:01 9758868 14140448 1815812 11.08 152112 4268316 2464088 14.14 2368892 3792800 503632 05:12:13 03:48:01 6313084 13066460 2881448 17.59 185660 6509584 3769724 21.63 3988800 5508480 560848 05:12:13 03:49:01 5038320 14028356 1914364 11.69 220412 8613384 2769616 15.89 3843916 6826924 480056 05:12:13 03:50:01 3183800 12612264 3329060 20.32 226204 9031480 4365880 25.05 5590524 6910008 1100 05:12:13 03:51:01 171840 8368304 7571112 46.22 221652 7822948 8431368 48.37 9621952 5893336 400 05:12:13 03:52:01 306784 8525436 7413468 45.26 224568 7841932 8882116 50.96 9519144 5862912 1744 05:12:13 03:53:01 6124724 14141384 1800912 10.99 226636 7640780 2638284 15.14 3910676 5677448 668 05:12:13 03:54:01 3083052 11344588 4595428 28.05 236684 7869408 5765624 33.08 6759064 5856476 202460 05:12:13 03:55:01 1085644 9346656 6592448 40.24 237252 7868176 7484088 42.94 8754220 5849048 736 05:12:13 03:56:01 1079040 9340396 6598752 40.28 237288 7868492 7500116 43.03 8761556 5849252 120 05:12:13 03:57:01 1070992 9332540 6606616 40.33 237300 7868656 7500116 43.03 8768248 5849380 260 05:12:13 03:58:01 6314240 14812628 1129312 6.89 243604 8091688 2070352 11.88 3355696 6031004 224304 05:12:13 03:59:01 5404800 13912268 2029964 12.39 247632 8096452 2846216 16.33 4305204 5990248 508 05:12:13 04:00:01 5342868 13850548 2091452 12.77 247680 8096596 2910824 16.70 4367684 5988616 184 05:12:13 04:01:01 5334300 13842188 2099796 12.82 247736 8096748 2926904 16.79 4375704 5988732 180 05:12:13 04:02:01 4629152 13138604 2803112 17.11 249132 8096836 3623824 20.79 5089776 5977580 196 05:12:13 04:03:01 4573784 13083488 2857952 17.45 249152 8097052 3672540 21.07 5144600 5977648 128 05:12:13 04:04:01 5428680 13938628 2003568 12.23 249216 8097204 2927812 16.80 4294748 5977392 440 05:12:13 04:05:02 6781828 15346152 596672 3.64 251268 8145200 1449304 8.32 2900156 6018896 35612 05:12:13 04:06:01 4057584 12637304 3303776 20.17 252104 8157772 4310172 24.73 5593360 6036528 580 05:12:13 04:07:01 5123136 13703420 2238336 13.66 252316 8158080 3449516 19.79 4538116 6031740 556 05:12:13 04:08:01 4427280 13009732 2931448 17.90 254088 8158468 4239432 24.32 5239948 6023128 512 05:12:13 04:09:01 3179508 
11762336 4178252 25.51 254308 8158612 5102312 29.27 6481452 6023264 364 05:12:13 04:10:01 6684808 15267628 674996 4.12 254348 8158560 1803160 10.35 2994420 6022664 72 05:12:13 04:11:01 5147732 13732576 2209016 13.48 255876 8158980 2965868 17.02 4523264 6023076 248 05:12:13 04:12:01 5050072 13636512 2305080 14.07 257084 8159280 3129748 17.96 4620216 6023012 52 05:12:13 04:13:01 6078616 14665004 1277148 7.80 257108 8159196 2118780 12.16 3595640 6022920 72 05:12:13 04:14:01 5438388 14028020 1913748 11.68 259532 8160016 2815308 16.15 4234136 6023244 712 05:12:13 04:15:01 5186476 13777032 2164396 13.21 260288 8160164 2930316 16.81 4484328 6023392 144 05:12:13 04:16:01 4540792 13132544 2808432 17.14 261136 8160464 3641140 20.89 5126696 6023572 268 05:12:13 04:17:01 6445732 15037604 904280 5.52 261168 8160556 1739092 9.98 3229892 6023560 300 05:12:13 04:18:01 3192080 11785508 4154492 25.36 262116 8161136 5056048 29.01 6469708 6024104 72 05:12:13 04:19:01 2986312 11580104 4359848 26.61 262124 8161488 5153968 29.57 6675176 6024456 308 05:12:13 04:20:01 2977028 11570992 4368952 26.67 262152 8161640 5170004 29.66 6683716 6024608 52 05:12:13 04:21:01 2976036 11570016 4370028 26.68 262160 8161640 5170004 29.66 6683500 6024608 80 05:12:13 04:22:01 2975636 11569620 4370344 26.68 262184 8161644 5170004 29.66 6683744 6024612 64 05:12:13 04:23:01 6824948 15418916 523032 3.19 262204 8161564 1359088 7.80 2848756 6024532 176 05:12:13 04:24:01 3015036 11610136 4330116 26.43 262740 8162092 5164416 29.63 6647248 6025000 428 05:12:13 04:25:01 2855892 11451356 4488576 27.40 262748 8162448 5294592 30.38 6803608 6025356 312 05:12:13 04:26:01 2850308 11445964 4494012 27.43 262760 8162628 5294592 30.38 6808092 6025536 52 05:12:13 04:27:01 2829368 11425176 4514796 27.56 262764 8162776 5294592 30.38 6828856 6025684 92 05:12:13 04:28:01 2821092 11417112 4522820 27.61 262776 8162976 5310636 30.47 6836800 6025884 96 05:12:13 04:29:01 2812792 11409204 4530716 27.66 262808 8163340 5310636 30.47 6844508 6026252 84 05:12:13 04:30:01 2762704 11359336 4580576 27.96 262812 8163552 5360580 30.76 6894124 6026452 92 05:12:13 04:31:01 2758160 11355072 4584768 27.99 262828 8163804 5376628 30.85 6897972 6026708 468 05:12:13 04:32:01 3599972 12197872 3742548 22.85 263716 8163788 4931528 28.29 6059448 6026652 224 05:12:13 04:33:01 2954028 11552716 4387008 26.78 263724 8164572 5179984 29.72 6703120 6027412 272 05:12:13 04:34:01 2917844 11516932 4422824 27.00 263732 8164956 5212008 29.90 6737464 6027804 672 05:12:13 04:35:01 2909700 11509272 4430488 27.05 263740 8165428 5228020 29.99 6744484 6028280 528 05:12:13 04:36:01 2890092 11490116 4449600 27.16 263748 8165884 5244036 30.09 6763876 6028724 48 05:12:13 04:37:01 2879816 11480216 4459480 27.22 263752 8166240 5244036 30.09 6774184 6029092 316 05:12:13 04:38:01 2855748 11456796 4482860 27.37 263772 8166872 5244036 30.09 6796964 6029724 64 05:12:13 04:39:01 6227708 14829424 1112284 6.79 263792 8167468 1937344 11.12 3438156 6030272 616 05:12:13 04:40:01 1760436 10362724 5576460 34.04 264716 8167084 6532080 37.48 7890168 6029876 580 05:12:13 04:41:01 1608312 10210860 5728172 34.97 264716 8167336 6612876 37.94 8041816 6030124 164 05:12:13 04:42:01 1367036 9970160 5968620 36.44 264716 8167916 6760600 38.79 8279772 6030700 276 05:12:13 04:43:01 1355292 9958552 5980328 36.51 264720 8168044 6760600 38.79 8292268 6030832 44 05:12:13 04:44:01 1330084 9933512 6005332 36.66 264724 8168208 6776704 38.88 8317088 6030996 352 05:12:13 04:45:01 1316112 9920028 6018740 36.74 264728 8168692 6776704 38.88 
8330180   6031484        48
05:12:13 [per-minute memory utilization samples 04:46:01 - 05:12:01 omitted; averages below]
05:12:13 Average:      3667252  12015502   3926005     23.97    251637   7940450   4771585     27.38   6132683   5901283     37219
05:12:13 
05:12:13 03:43:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
05:12:13 [per-minute network interface samples 03:44:02 - 05:12:01 for lo, docker0 and ens3 omitted; averages below]
05:12:13 Average:           lo     20.82     20.82      9.59      9.59      0.00      0.00      0.00      0.00
05:12:13 Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:12:13 Average:         ens3     20.39     14.62    247.60      2.34      0.00      0.00      0.00      0.00
05:12:13 
05:12:13 
05:12:13 ---> sar -P ALL:
05:12:13 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-25087)   09/21/24   _x86_64_   (4 CPU)
05:12:13 
05:12:13 03:42:45     LINUX RESTART      (4 CPU)
05:12:13 
05:12:13 03:43:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
05:12:13 [per-minute per-CPU utilization samples 03:44:02 - 05:12:01 omitted; averages below]
05:12:13 Average:        all     20.98      0.22      1.10      0.77      0.45     76.47
05:12:13 Average:          0     20.95      0.25      1.13      1.14      0.46     76.07
05:12:13 Average:          1     20.61      0.21      1.08      0.78      0.46     76.86
05:12:13 Average:          2     20.63      0.21      1.07      0.68      0.45     76.96
05:12:13 Average:          3     21.75      0.23      1.13      0.46      0.42     76.00
05:12:13 
05:12:13 
05:12:13 